Chat GPT & The Rise of LLMs – Humanity’s Final Act of Hubris?

Published On: April 17, 2023

For Scifi lovers in 2023, it’s pretty much impossible now to ignore the role that powerful artificial intelligences will play in humanity’s future. Because it is here, now, amongst us. Although the stories vary on how this will play out, it seems we often envision machines hell bent on blasting us out of existence. AI has long captured our imaginations, but until recently it was the stuff of a dystopian nighttime reading. Not anymore. Now, as intelligent, self-learning and self-driven artificial tools such as Chat-GPT are launched onto the world, and they are emerging in their dozens, shouldn’t we be bracing for the many, great disruptions these technologies will cause across almost every aspect of our lives?

A crossroads is hurtling towards as at high speed as we get ever nearer to creating machine intelligences that will soon be equal to our own – commonly understood as Artificial General Intelligence or AGI. And we are woefully unprepared across a multitude of critical concerns. Is there a possibility of a moratorium to allow for responsible reflection and transparency before any further big AI projects are developed and released? Will our governments be able to affectively legislate enforceable policies to safeguard civilisation as a whole from as many of the unintended consequences as possible as AI evolves towards an inevitable AGI? And even if any of these questions has a possibility of a yes, will we most vitally be able to solve the problem of alignment to ensure that when AGI is realised, it has enough of the same goals as we do? Or will our hubris, our delight in our overconfidence of such a sublime, God-like tools, lead to the extinction of human civilisation as we know it?

Hubris? What Us?

“With great power comes great responsibility.”

This is what dear old Uncle Ben famously told Peter Parker about his newly discovered abilities. As a mentor, Ben tried to impress upon the young lad, full of inflated exuberance, to beware his limitations, to remember his humanity no matter how attractively cloaked in God-like powers Peter had become. But as comic book lovers lament, Peter’s hubris gets his beloved Uncle killed and it is only with this hard lesson that Peter Parker responsibility assumes the heavy burden of being Spiderman.

But Spiderman is a modern tale borrowing from antiquity. The Ancient Greeks were great story tellers, their myths full of archetypically driven moral lessons, from which stories like Spiderman are based. Who doesn’t know of the myth of Icarus who flew too close to the sun to fall to his death because he ignored his father’s proscriptions of the importance of not letting that glue melt in his wings as he gleefully soared above the earth? Sound familiar? Poor Prometheus is still having his liver torn out every day by a buzzard only for its regeneration to go through the whole grizzly affair the next day. Why? Prometheus egoistically believed he could outsmart the Gods and give prohibitive fire to man and get away with it.

The Greeks heavily contemplated how excessive pride and self-confidence must eventually evoke the wrath of Nemeses – the Goddess of retribution and divine punishment whose utility is the vengeful balancing force that seeks to restore equilibrium. Practically, they contemplated individual hubris as one of the principal dangers to the tenuousness of the first Republic in history which would eventually fail as a result of power and greed.

As a collective we’re not very good at remembering history much less learning from it as we see the lessons of hubris forgotten and repeated from one generation to the next. It is alive and well in our modern times.

Hitler was destined to repeat the same hubris as Napoleon. He too was convinced of the invincibility of his military power. He too ignored the council of his tempered generals and the danger of supply lines when he marched his Nazi armies east into the vast, winter of Russia. Both despots were undone by the very same act of hubris, separated by over a hundred years, at the cost of millions of lives.

But it’s the sinking of the Titanic that comes to mind as the most brazen act of hubris still inside living memory because of its sheer irresponsible, maniacal, idiocy. We ask ourselves now with the advantage of hindsight how it was possible for education men, men of great responsibility and command, to outright ignore the dangers of icebergs floating monolithic in the dark waters of the Atlantic on that infamous morning of 1912. How could they build a ship with only enough lifeboats to save only half of all those on board? Well, it’s because they had too much investment in inflation, they were flying to high. The only possibly calamity, which paradoxically lead to its sinking, was a tardy arrival at its celebrated birth in New York to complete its maiden voyage. The impact of the iceberg, the announcements, even the mixed directives to abandoned ship were not taken seriously because everyone, including the passengers, were sold on the belief that they were on an unsinkable ship. It literally took the titling of the deck and the rush of icy water over feet to galvanize everyone into frantic action, the danger, the reality finally acknowledged. But tragically we all know far, far too late to save most.

Danger approaches once more. Our hubris once again is being put to the test. But this time, Artificial Intelligence, like an iceberg, largely invisible, monolithically ubiquitous, is penetrating every aspect of the collective human vessel – of civilisation on a global scale. And we haven’t even thought about lifeboats.

This sounds like a conspiracy theory though doesn’t it? Why should we worry?

For most moderns, like me born at the tail end of the 20th century, the meteoritic rise of digital technologies has been a pretty exciting and overwhelmingly wonderful experience. In under 50 years, less than a human lifetime, we’ve moved from noisy analogue dial ups to almost free lightning fast G5 and fiber optic digital connectivity to the world. We’ve gone from finger biting rotary phones to touch screen, voice activated, biometrically sensitive digital phones that fit into your smallest pocket that even your granny can use. The mobile phone, probably in your pocket right now, has over 100,000 times the processing power of the computer that landed men on the Moon 50 years ago did – and most of the homeless in rags on our modern city streets also have one. Laptops are cheaper now than most family sized refrigerators. Indeed, it would be a real stretch to find  a single human being that is not plugged in to an online social network or using some form of digital technology as a pivotal part of their everyday existence.

We can communicate and share all kind of information instantly across the world. We can see things now that were not possible 50 years ago – even in Science Fiction – from accurate weather prediction to most optimised travel routes. So much is now automated, immediate, at our fingertips and although these new technologies have cause disruptions across the world, at no time has anyone seriously considered slowing all this down. Not even by a notch. Innovation sets the momentous of progress. Never before have we seen such technological adoption at this unprecedented rate across the entire world.

And it’s been great. Mostly, right? Why should we be worrying about the emergence of tools such as Chat-GPT then into all of this convenient, fun and powerful awesomeness?

To understand the danger, one must first understand the tool.

Large Language Models (LLMs) and the rise of Artificial General Intelligence (AGI)

At the time of this writing the stellar success of Chat-GPT cannot be denied. Now on its forth iteration since its first market launch in 2020, it continues to make more and more headlines across the planet. It’s even writing more and more of this news itself by the way. So far, for most, Chat-GPT feels like interacting with a good natured Hal9000 from 2001 A Space Odyssey – that we can all use a limited version of for free.

As a Generative Pre-Trained Transformer, Chat-GPT is an artificial intelligence model developed by OpenAI that utilises the architecture of a neural network (modelled on the human brain) to power by Large Language Models (LLMs). It uses these LLMs to perform an exponentially increasing number of language based tasks using natural language processing and deep learning to do anything from programming in Java to writing poetry and unique fiction, to transcribing video to text and translating across most spoken human languages.

Chat-GPT can have a real, convincing conversation with you. It can reason. It can remember and reflect on what it has outputted. It can learn to solve problems that it was not programmed to. It’s passed the US Bar Exam, SAT and other such tests – often approaching the 90th percentile. It even got a B on the Wharton MBA exam.

But it goes a step further and does what up until recently was believed to be bastion of humanity. It understands humour. Yes, it can tell whether a complex joke is worth laughing at and can explain why. It can look at an image and do the same thing. It’s almost unbelievable and it is only getting smarter and more nuanced literally every minute it is in operation.

Importantly, Chat GPT passes the Turning Test with flying colours. For brevity’s sake, this theoretical test was proposed in 1950 by Alan Turning (the same guy that cracked the Enigma code to help the allies defeat the Nazis and win World War 2) to assess a machine’s ability to exhibit intelligent behaviour that is indistinguishable from that of a human. And this is now being achieved all thanks mostly to the successful application of LLMs that understand and can communicate the complexity of human speech in a way not previously possible by a machine.

Although Chat-GPT is spearheading the rise of LLMs in taking Artificial Intelligence to the next level, it’s not the only one. There are dozens of big AI projects on the go, all of them refining their own LLMs. From Google’s Bard to IMB’s Watson, the big tech companies, who have over the past few decades become the largest corporations in the world by an order of magnitude, are pouring every spare billion they have into AI research – now more than ever in a furious drive to catch up with OpenAI’s success.

They do this because they all know, and have known for years, that the first to develop AI into Artificial General Intelligence, or AGI, will forever leave the rest of the world far, far behind. Everyone in the field of AI development is pretty much unanimously agreed that when AGI is achieved by one, there won’t be any catching up. They will own a monopoly on the future, forever. And as a result, what we’re seeing now is the beginning of what you might call an AI arms race.

But with all these great tools being released that are making our lives easier, giving us abilities never imagined possible before, why should this be a concern? Because alongside this unprecedented race is also the fact than many of the same nerds, those brightest mind who have develop AI over the years, also signed on the 22nd of March, 2023, an Open Letter requesting a moratorium on any further big AI projects for a six month period. They are literally begging the world to slow down, to take stock and get a handle on all this before it gathers a momentum that cannot be stopped. They are looking to the future and they are telling us we are woefully unprepared for what comes next, the disruptions that will be caused if progress is not tempered. They are indirectly warning the world of the dangerous rate we are now approaching Artificial General Intelligence – which is the point as which machine intelligence supersedes our own, where a machine becomes better at doing basically everything and anything that the smartest person alive can do.

Ray Kurtzwell, a renowned futurist and author, predicted that we will reach AGI by 2045 – in just 22 years. Many agree with him while others have taken a more conservative approach with estimates of this event hovering around the year 2100. Regardless, the event is just around the corner and many alive today will bear witness.

In a mythological sense, those signatories are warning us that once this genie is out of the lamp, there will be no fail safe, no password, not a prayer or incantation that will get it back in there again. Simply put, how can you control something that is smarter than you are? Simple answer is that you can’t. Back in 2016 during a famous TED Talk about whether it will be possible to build a powerful AI without losing control over it, a famous American philosopher, Sam Harris, suggested that we should view this event with the same seriousness we would if we received a communication from an alien race that they’re on their way and will arrive on Earth in the next few decades. If we knew this for a fact, that a species, alien to us, and obviously far more technologically advanced than we are now, would be on our doorstep so soon, would we not busy ourselves with contingencies and furious preparation – and maybe even the right about of fear and concern? The analogy of alien and AGI is apt and accurate.

When I think about Artificial Intelligence and its rapid adoption into every facet of our world, and with AGI literally around the corner, I wonder what our myth will be when future generations look back. Will they tell a hubristic tale about a people that were overly confident, secure in the belief that AI was just a smart tool destined to serve them? Or will our collective hubris be so great that there won’t be anything recognisably human to look back at all? Or is it possible that a moratorium or collective pause orchestrated by governments can act as an antidote to this danger? Can we depend on this happen? This is the gazillion dollar question and to understand this we need to look at the history of the success of moratoriums and government policy and policing.

Is there a pause or policy that will stick to AI development?

Historically, moratoriums have had some partial successes. When it was discovered that DDTs, which were used extensively to combat malaria, typhus, and the other insect-borne human diseases, were destroying entire bird populations, a moratorium stopped DDT in its tracks. Likewise, CFCs that were prolifically used to manufacture things like aerosol sprays and refrigerators, were quickly outlawed worldwide when scientists began yelling at how thin they were making Earth’s much needed ozone layer.

But these moratoriums came decades after a whole lot of damage had already been done, which shows that moratoriums are relative measures to the maligned use of technologies that at the time provided good, ground-breaking advantages that seemed entirely innocuous. To be fair, without extensive testing in the real world, how could we have known? Unfortunately, we have the very same problem introducing AI into our world financial markets, social networks or military systems.

Unfortunately, moratoriums also fail more often than they succeed, and history is riddled with

examples. After a short six year ban by the European Union on Genetically Modified Organisms, or GMOs, we see them now ubiquitously used across the world. All our major foods stuff from corn to potatoes and from canola to soya have bene genetically manipulated, their genes augmented and spliced with the genes of other animals and plants to achieve higher crop yields. Any scientist holding her salt in biology will tell you that we simply haven’t done enough research to understand what the complex, long term effects on us or the environment will be as a result of this ongoing, genetic tinkering. But we gamble for what we value – cheaper food. After a similarly sparse ban in the United States, research into the use of human embryonic stem cells was continued leaving the US behind the international community that largely did not impose such restrictions. Again, tinkering with our own genomes for the sake of health and longevity ups the stakes but we continue pushing the boundaries because the immediate rewards of enhanced longevity and a life free of preventative disease and suffering is so utterly attractive.

But let’s turn to the biggest historical iceberg we’ve managed to so far dodge, amazingly, since the end of the Second World War – nuclear weapons. The Treaty on the Prohibition of Nuclear Weapons (TPNW) was adopted by the United Nations in 2017. It was an event well covered, got some great fan fair but ultimately to this day lacks any real teeth. With the absent signatures of the United States, Russia, China, France and the United Kingdom (who are the five member states on the permanent Security Council, all of them nuclear enabled nations), the treaty isn’t worth the fancy paper it’s printed on. Superpowers aren’t simply not prepared to give up their deterrents and, in a standoff, unless it’s Hollywood, who’s seriously going to lower their guns first? Now, war between the Ukraine and Russia has raised the threat of nuclear war to levels not seen since the depths of the Cold War and as we all know the Russian president is all too keen to threaten the world with how comfortable his finger is over the big red button.

So when the Open Letter published on March 22, 2023, called for a temporary halt on large-scale AI experiments to prioritize safety, transparency, and cooperation, so that potential risks and negative consequences can be address while promoting the need for AI to be developed responsibly in alignment with human values came out – what was the reaction? If you’ve read it, you’ll know it’s a heartfelt, reasonable and altogether altruistic call. Like the TPNW it’s pretty hard to disagree with its sound moral reasoning. And you’ll find tens of thousands of notable signatories committing to the cause. But what you won’t find is Satya Narayana Nadella’s (from Microsoft), Nate Higger’s nor Sundar Pichai’s (from Google), Mark Zuckerberg’s (from Facebook) or Tim Cook’s (from Apple) signatures there. Are we seeing a trend? The only notable name that carries any gravitas on that list is Elon Musk who as of the week of this writing has seemed to have done a one eighty by setting his sights on his new start-up – BasedAI – to counter the success of Chat-GTP4.

The US & UK Governments (but mostly the former) have been the only ones to seriously take on big tech companies in the western world with anti-trust or anti-competitive legislation in hand and they’ve done so with less than admirable results.

For example: In April 2018, Mark Zuckerberg was hauled into the Senate Oversight Committee to answer questions about how Facebook was managing their user data. The hearing revolved around the Cambridge Analytica scandal, a political consulting firm that obtained data from millions of Facebook users without their consent, and then used this data to affect the course of the election that saw Donald Trump’s rise to power. Facebook was charged with privacy violations. Even though Mr Zuckerberg apologized on behalf of the company and Facebook would go on to change some of their policies to protect user privacy, there is still much despondence that these changes were not enough and to this day it remains unclear as to whether this entire exercise had a lasting impact on Facebook’s practices or the broader tech industry at large. What’s shocking though is that although Cambridge Analytica was found guilty of breaking UK law and closed its doors in 2018 after the scandal, its parent company, SCL Elections, still thrives and was slapped with a poultry £15,000 fine.

Similarly, back in the 1990s, Microsoft was involved in a high-profile antitrust lawsuit with the US government who went after the behemoth with accusations of anti-competitive maleficence. In 2000, a federal judge ruled Microsoft was a monopoly that had engaged in anti-competitive practices, such as bundling its web browser (Internet Explorer) with its operating system (Windows) and that this was an unfair advantage to Netscape which it had successfully trounced and which would not recover. Microsoft was ordered to be broken up into two separate entities. But a mere year later, the appeals court overturned this ruling and after a settlement with the government, it was never broken up and it was back to business as usual.

More recently, in 2020, the US Department of Justice (DOJ) filed a lawsuit against Google, for anti-competitive behaviour with its search and online advertising businesses. This case is still ongoing, but I wouldn’t hold a candle to the US Government ever really holding Google accountable.

The problem governments have in keeping big tech companies in check is that the brightest, sharpest minds that have created these complex technologies, don’t work for them – they enjoy the best benefits and highest remunerations on the opposing teams. What’s clear in all these cases is that the US government was unable to formulate a strong enough argument or muster enough political will to win. And what compounds this is money and influence and lobbying – which the corporates are the kings in using. The corporations simply have more and since again they are drawing from the best legal teams money can buy, what hope does government have to hold the corporate behemoths to account?

So with this state of affairs, what is the likelihood of international governments getting together to first formulate what the problems are with AI development and then actually agreeing to policy that will successfully institute a moratorium across the entire tech industry, involving all the big players from OpenAI to Google and their competitors in China, Russia and India? Not likely is the answer and if they try, they will always be many steps behind because regulation takes years and by the time it is realised it will be old news with AI innovation having advanced and accelerated towards new horizons not even on the governmental map.

So if the moratorium is not going to work and government regulative policy is unlikely to have any efficacy, what can be done?

It seems we’ll have to content ourselves with the only other realistic avenue available which is predicated on self-preservation. In the same way we trust other cars not to smash into us from oncoming traffic every day – while cars still don’t drive themselves – we must hope that these big companies take the problem of alignment seriously. And why wouldn’t they since it will ultimately safeguard the lives of their CEOs and shareholders and via proxy the rest of humanity.

AGI and the Herculean task of its alignment

So what is alignment all about and why is it so fundamentally important? When I asked Chat GPT 4.0 on April 8th 2023, to explain in 50 words what the main problem is facing alignment between AI and humanity, this was its output:

The main problem facing AI-human alignment is ensuring AI systems understand and respect human values, goals, and intentions. As AI capabilities advance, there is a growing risk that misaligned AI may inadvertently cause harm or prioritize its objectives over human well-being, leading to unintended negative consequences for society.

“As AI capabilities advance” is the key phrase here. It might not be obvious now with AI still narrow in its applications, but the problems we will face as AGI approaches will become bigger and far more complex. Many believe that it is one we must collectively solve if humanity as we know it wants to see the end of the 22nd century.

The analogy that is often sited is the well-meaning highway contracting company that inadvertently destroys the habitats or disseminates the populations of the local wildlife. In fairness, the contracting company, certainly none of the humans working there, hold any conscious ill will towards the ants and their nests, the rabbits in their warrens or the chimps feeding in the trees, but when the highway is done, most of these animals are going to experience varying catastrophic impacts because of the human directed goal of connecting one human settlement with another. It’s nothing personal – it’s just business.

But what it really boils down to with AGI is not the threat of killer terminator machines bent on total human genocide. It’s the threat of what we don’t know we don’t know and the unintended consequences of this. For example, another way to look at the problem of alignment is wonderfully illustrated by the “Paperclip Maximizer” first proposed by philosopher Nick Bostrom in his book “Superintelligence: Paths, Dangers, Strategies”. It’s basically a thought experiment that involves an AI that has been given the goal of producing as many paperclips as possible. This might seem a ludicrously harmless objective but some hectic problems arise if this task is given to a powerful AGI hell bent on valuing the pursuit of this goal over all others. It might, for example, hack into computer systems to acquire more resources and financing. It might manipulate humans and industry to align with its goal. It could even destroy entire ecosystems to maximise its need for infinite raw materials – all to create as many paper clips as it can. It might consume the entire world and end human civilisation to insanely serve up paperclips that will never be used. I’d rather be killed by a terminator than turned into paperclip – but it all ends the same.

The reality is that when AGI is achieved, it will exponentially evolve itself. Some experts estimate that the timeframe where an AGI and humans are equally intelligent could only last a matter of days – at most a few months. Thereafter, it will surpass human intelligence by increasing orders of magnitude on its way to becoming ASI or Artificial Super Intelligence where it is theorised, logically, we humans will find ourselves much like those frightened rabbits that can only dimly understand the terrifying rumblings overhead of machines and their processes we don’t understand, are probably unable to get out of the way of in time, and certainly unable to stop. Unless the alignments we have built into it ensure that when it builds that super planet size construction project or invents a faster than light speed vehicle that bends space-time as we know it, that it takes into account the safety of human beings over or while achieving these goals.

The first and most problematic obstacle to alignment is that we, as a collective, have no consensus amongst each other. Wars continue to rage, disparities of wealth continue to grow, and people still starve when there is more than enough food. Couple these with the growing cultural and religious division between the eastern and western empires and the increased political polarisations across a whole range of moral issues from transgenderism to abortion, from woke policies to freedom of speech, and it can quickly be surmised that we are just not on a good footing to teach artificial intelligence what we collectively believe is good, worthy and of value. What would the fact that nuclear weapons still exist in numbers that can destroy the world many times over teach a fledgling AI? Like the open letter to institute a pause on AI development, alignment is hugely more altruistic than anything else. Practically though, alignment currently is a nebulous undertaking and there are many that do not believe it is possible at this stage.

Unfortunately, it seems that the problems of alignment will be left to the various mega corporations that are investing billions of dollars into their various big AI projects. Because alignment is so vastly complex, it would follow that the only way to approach devising a system of alignment would be to enhance and evolve the very tool we must caution ourselves from developing beyond our control in the first place. A real chicken and the egg scenario, but for alignment to be successful, powerful enough AI, evolved beyond the basic magical mimickery we’re seeing today, must be achieved and partnered with. Otherwise, how can we be certain of the accuracy of this alignment beyond what is dimly visible now and which we are wholly incapable of doing on our own?

Leaving this in the hands of the corporations that are developing AI calls forth the dangers of hubris, because if you peer through all their best intentions of how AI is going to make the world a better place and usher in a new golden age for humanity, the fundamental values of these entities are still psychopathically aligned with the tenants of capitalism. In a word, profit is the colour that taints all its goals. Profits, or the lack thereof, have sparked an entire slew of massive layoffs over the past two years in every big tech company in the world. Human inefficiencies are being replaced by smarter, autonomous systems all the time, wherever they can. And as AI evolves, it’s not hard to imagine that it would garner an understanding, a pattern over time, that the common factor standing in the way of efficiency are humans themselves. Laying thousands of people off to achieve more profitability is in line with a corporation’s value system – period. And Corporations have no qualms about moving resources and assets to countries that have more relaxed labour laws, less corporate taxation or corrupt environmental policies. The will seize any opportunity that will supply them with the most fertile grounds to maximise their profits unmolested by fines or scrutiny. We read about this all the time and again as a reaction where we see governments applying flimsy fines that are far less than the profits made in transgression.

William Wordsworth famously penned “the child is the father of the man”. Psychologically, we know this to be truer than not, that nurture is such a powerful component of a healthy human psyche. Most humans behind bars didn’t have a favourable or healthy childhood. So it really begs the question then of what these corporations are truly teaching these AI systems – even with all their best intentions and their ability to talk the talk across our various online mediums of news and communication. If it is in hail of profitability, which is reflected in the behaviour of western civilisation, then I fear alignment with AI will have some serious dangers associated with it.

Conclusion

We are seeing, rapid, unprecedented changes to our world in the wake of OpenAI’s launch of Chat-GPT. There are many in the AI development sector who are very concerned at the rate of change powerful new AI models will have and the signed Open Letter calling for a moratorium to halt any further big launches for six months hangs in the balance. But without the signatures and commitment of those that control big tech on it, and while the billions continue to flow into project expansions, this moratorium will not succeed.

Governments similarly are ill equipped to initiate policies that will keep with the rapid rate of AI developments and protect civilisation from disruptions or calamity. They are out gunned in both knowledge, money and will as big tech companies grow in their power and influence because of their increased use of AI.

History shows that both the moratorium and policy making are half measures, reactive in nature and often not successful.

There is no turning back. There will be no luddites to smash this machine. There will be no collective call to wear wide brimmed black hates and grow long Zionistic beards in rejecting of progress. We won’t collectively decide to put down our devices or abandon such exciting, new tools either.

It seems we are too far down the road to slow down the birth of AGI whose emergence now is just an assured when, not if. So with the momentum, how we move forward, how successful big tech companies are at solving the monolithic task of alignment between humanity and what many have said will be our last, and our great tool, will be the most seminal point in our collective history.

However, our collective and individual empathetic failures towards each other and humanity as a whole, coupled with our continuing destructive relationship with environment, is unfortunately not a healthy standpoint to succeed with this task. If we fail at alignment, and we really only have one shot at getting it most right, then we stoke the greater possibility of spawning terrible unintended consequences. At this point, we can only dream what those might be.

If it is true that the child is the father of the man, unless humanity grows up – a lot – matures into a receptive, inclusive and conscientious and caring steward of our planet, the likelihood is that when AGI is realized, it will reflect all too much of our own destructive nature which won’t be good for either most or all of us. We can’t talk to the trillions of creatures that have perished under the human wheel of progress, but it would be reasonable to assume that they would have preferred to exist. I hope we teach AI enough of the right human values and goals so that may better secure futures for our children and our grandchildren.

JOIN MY NEWSLETTER

Be the first to get my latest news and stories, advanced copies, give aways and more.

NEW SCIFI BOOK!

*FREE WITH KINDLE UNLIMITED

Chat GPT & The Rise of LLMs – Humanity’s Final Act of Hubris?

Published On: April 17, 2023

For Scifi lovers in 2023, it’s pretty much impossible now to ignore the role that powerful artificial intelligences will play in humanity’s future. Because it is here, now, amongst us. Although the stories vary on how this will play out, it seems we often envision machines hell bent on blasting us out of existence. AI has long captured our imaginations, but until recently it was the stuff of a dystopian nighttime reading. Not anymore. Now, as intelligent, self-learning and self-driven artificial tools such as Chat-GPT are launched onto the world, and they are emerging in their dozens, shouldn’t we be bracing for the many, great disruptions these technologies will cause across almost every aspect of our lives?

A crossroads is hurtling towards as at high speed as we get ever nearer to creating machine intelligences that will soon be equal to our own – commonly understood as Artificial General Intelligence or AGI. And we are woefully unprepared across a multitude of critical concerns. Is there a possibility of a moratorium to allow for responsible reflection and transparency before any further big AI projects are developed and released? Will our governments be able to affectively legislate enforceable policies to safeguard civilisation as a whole from as many of the unintended consequences as possible as AI evolves towards an inevitable AGI? And even if any of these questions has a possibility of a yes, will we most vitally be able to solve the problem of alignment to ensure that when AGI is realised, it has enough of the same goals as we do? Or will our hubris, our delight in our overconfidence of such a sublime, God-like tools, lead to the extinction of human civilisation as we know it?

Hubris? What Us?

“With great power comes great responsibility.”

This is what dear old Uncle Ben famously told Peter Parker about his newly discovered abilities. As a mentor, Ben tried to impress upon the young lad, full of inflated exuberance, to beware his limitations, to remember his humanity no matter how attractively cloaked in God-like powers Peter had become. But as comic book lovers lament, Peter’s hubris gets his beloved Uncle killed and it is only with this hard lesson that Peter Parker responsibility assumes the heavy burden of being Spiderman.

But Spiderman is a modern tale borrowing from antiquity. The Ancient Greeks were great story tellers, their myths full of archetypically driven moral lessons, from which stories like Spiderman are based. Who doesn’t know of the myth of Icarus who flew too close to the sun to fall to his death because he ignored his father’s proscriptions of the importance of not letting that glue melt in his wings as he gleefully soared above the earth? Sound familiar? Poor Prometheus is still having his liver torn out every day by a buzzard only for its regeneration to go through the whole grizzly affair the next day. Why? Prometheus egoistically believed he could outsmart the Gods and give prohibitive fire to man and get away with it.

The Greeks heavily contemplated how excessive pride and self-confidence must eventually evoke the wrath of Nemeses – the Goddess of retribution and divine punishment whose utility is the vengeful balancing force that seeks to restore equilibrium. Practically, they contemplated individual hubris as one of the principal dangers to the tenuousness of the first Republic in history which would eventually fail as a result of power and greed.

As a collective we’re not very good at remembering history much less learning from it as we see the lessons of hubris forgotten and repeated from one generation to the next. It is alive and well in our modern times.

Hitler was destined to repeat the same hubris as Napoleon. He too was convinced of the invincibility of his military power. He too ignored the council of his tempered generals and the danger of supply lines when he marched his Nazi armies east into the vast, winter of Russia. Both despots were undone by the very same act of hubris, separated by over a hundred years, at the cost of millions of lives.

But it’s the sinking of the Titanic that comes to mind as the most brazen act of hubris still inside living memory because of its sheer irresponsible, maniacal, idiocy. We ask ourselves now with the advantage of hindsight how it was possible for education men, men of great responsibility and command, to outright ignore the dangers of icebergs floating monolithic in the dark waters of the Atlantic on that infamous morning of 1912. How could they build a ship with only enough lifeboats to save only half of all those on board? Well, it’s because they had too much investment in inflation, they were flying to high. The only possibly calamity, which paradoxically lead to its sinking, was a tardy arrival at its celebrated birth in New York to complete its maiden voyage. The impact of the iceberg, the announcements, even the mixed directives to abandoned ship were not taken seriously because everyone, including the passengers, were sold on the belief that they were on an unsinkable ship. It literally took the titling of the deck and the rush of icy water over feet to galvanize everyone into frantic action, the danger, the reality finally acknowledged. But tragically we all know far, far too late to save most.

Danger approaches once more. Our hubris once again is being put to the test. But this time, Artificial Intelligence, like an iceberg, largely invisible, monolithically ubiquitous, is penetrating every aspect of the collective human vessel – of civilisation on a global scale. And we haven’t even thought about lifeboats.

This sounds like a conspiracy theory though doesn’t it? Why should we worry?

For most moderns, like me born at the tail end of the 20th century, the meteoritic rise of digital technologies has been a pretty exciting and overwhelmingly wonderful experience. In under 50 years, less than a human lifetime, we’ve moved from noisy analogue dial ups to almost free lightning fast G5 and fiber optic digital connectivity to the world. We’ve gone from finger biting rotary phones to touch screen, voice activated, biometrically sensitive digital phones that fit into your smallest pocket that even your granny can use. The mobile phone, probably in your pocket right now, has over 100,000 times the processing power of the computer that landed men on the Moon 50 years ago did – and most of the homeless in rags on our modern city streets also have one. Laptops are cheaper now than most family sized refrigerators. Indeed, it would be a real stretch to find  a single human being that is not plugged in to an online social network or using some form of digital technology as a pivotal part of their everyday existence.

We can communicate and share all kind of information instantly across the world. We can see things now that were not possible 50 years ago – even in Science Fiction – from accurate weather prediction to most optimised travel routes. So much is now automated, immediate, at our fingertips and although these new technologies have cause disruptions across the world, at no time has anyone seriously considered slowing all this down. Not even by a notch. Innovation sets the momentous of progress. Never before have we seen such technological adoption at this unprecedented rate across the entire world.

And it’s been great. Mostly, right? Why should we be worrying about the emergence of tools such as Chat-GPT then into all of this convenient, fun and powerful awesomeness?

To understand the danger, one must first understand the tool.

Large Language Models (LLMs) and the rise of Artificial General Intelligence (AGI)

At the time of this writing the stellar success of Chat-GPT cannot be denied. Now on its forth iteration since its first market launch in 2020, it continues to make more and more headlines across the planet. It’s even writing more and more of this news itself by the way. So far, for most, Chat-GPT feels like interacting with a good natured Hal9000 from 2001 A Space Odyssey – that we can all use a limited version of for free.

As a Generative Pre-Trained Transformer, Chat-GPT is an artificial intelligence model developed by OpenAI that utilises the architecture of a neural network (modelled on the human brain) to power by Large Language Models (LLMs). It uses these LLMs to perform an exponentially increasing number of language based tasks using natural language processing and deep learning to do anything from programming in Java to writing poetry and unique fiction, to transcribing video to text and translating across most spoken human languages.

Chat-GPT can have a real, convincing conversation with you. It can reason. It can remember and reflect on what it has outputted. It can learn to solve problems that it was not programmed to. It’s passed the US Bar Exam, SAT and other such tests – often approaching the 90th percentile. It even got a B on the Wharton MBA exam.

But it goes a step further and does what up until recently was believed to be bastion of humanity. It understands humour. Yes, it can tell whether a complex joke is worth laughing at and can explain why. It can look at an image and do the same thing. It’s almost unbelievable and it is only getting smarter and more nuanced literally every minute it is in operation.

Importantly, Chat GPT passes the Turning Test with flying colours. For brevity’s sake, this theoretical test was proposed in 1950 by Alan Turning (the same guy that cracked the Enigma code to help the allies defeat the Nazis and win World War 2) to assess a machine’s ability to exhibit intelligent behaviour that is indistinguishable from that of a human. And this is now being achieved all thanks mostly to the successful application of LLMs that understand and can communicate the complexity of human speech in a way not previously possible by a machine.

Although Chat-GPT is spearheading the rise of LLMs in taking Artificial Intelligence to the next level, it’s not the only one. There are dozens of big AI projects on the go, all of them refining their own LLMs. From Google’s Bard to IMB’s Watson, the big tech companies, who have over the past few decades become the largest corporations in the world by an order of magnitude, are pouring every spare billion they have into AI research – now more than ever in a furious drive to catch up with OpenAI’s success.

They do this because they all know, and have known for years, that the first to develop AI into Artificial General Intelligence, or AGI, will forever leave the rest of the world far, far behind. Everyone in the field of AI development is pretty much unanimously agreed that when AGI is achieved by one, there won’t be any catching up. They will own a monopoly on the future, forever. And as a result, what we’re seeing now is the beginning of what you might call an AI arms race.

But with all these great tools being released that are making our lives easier, giving us abilities never imagined possible before, why should this be a concern? Because alongside this unprecedented race is also the fact than many of the same nerds, those brightest mind who have develop AI over the years, also signed on the 22nd of March, 2023, an Open Letter requesting a moratorium on any further big AI projects for a six month period. They are literally begging the world to slow down, to take stock and get a handle on all this before it gathers a momentum that cannot be stopped. They are looking to the future and they are telling us we are woefully unprepared for what comes next, the disruptions that will be caused if progress is not tempered. They are indirectly warning the world of the dangerous rate we are now approaching Artificial General Intelligence – which is the point as which machine intelligence supersedes our own, where a machine becomes better at doing basically everything and anything that the smartest person alive can do.

Ray Kurtzwell, a renowned futurist and author, predicted that we will reach AGI by 2045 – in just 22 years. Many agree with him while others have taken a more conservative approach with estimates of this event hovering around the year 2100. Regardless, the event is just around the corner and many alive today will bear witness.

In a mythological sense, those signatories are warning us that once this genie is out of the lamp, there will be no fail safe, no password, not a prayer or incantation that will get it back in there again. Simply put, how can you control something that is smarter than you are? Simple answer is that you can’t. Back in 2016 during a famous TED Talk about whether it will be possible to build a powerful AI without losing control over it, a famous American philosopher, Sam Harris, suggested that we should view this event with the same seriousness we would if we received a communication from an alien race that they’re on their way and will arrive on Earth in the next few decades. If we knew this for a fact, that a species, alien to us, and obviously far more technologically advanced than we are now, would be on our doorstep so soon, would we not busy ourselves with contingencies and furious preparation – and maybe even the right about of fear and concern? The analogy of alien and AGI is apt and accurate.

When I think about Artificial Intelligence and its rapid adoption into every facet of our world, and with AGI literally around the corner, I wonder what our myth will be when future generations look back. Will they tell a hubristic tale about a people that were overly confident, secure in the belief that AI was just a smart tool destined to serve them? Or will our collective hubris be so great that there won’t be anything recognisably human to look back at all? Or is it possible that a moratorium or collective pause orchestrated by governments can act as an antidote to this danger? Can we depend on this happen? This is the gazillion dollar question and to understand this we need to look at the history of the success of moratoriums and government policy and policing.

Is there a pause or policy that will stick to AI development?

Historically, moratoriums have had some partial successes. When it was discovered that DDTs, which were used extensively to combat malaria, typhus, and the other insect-borne human diseases, were destroying entire bird populations, a moratorium stopped DDT in its tracks. Likewise, CFCs that were prolifically used to manufacture things like aerosol sprays and refrigerators, were quickly outlawed worldwide when scientists began yelling at how thin they were making Earth’s much needed ozone layer.

But these moratoriums came decades after a whole lot of damage had already been done, which shows that moratoriums are relative measures to the maligned use of technologies that at the time provided good, ground-breaking advantages that seemed entirely innocuous. To be fair, without extensive testing in the real world, how could we have known? Unfortunately, we have the very same problem introducing AI into our world financial markets, social networks or military systems.

Unfortunately, moratoriums also fail more often than they succeed, and history is riddled with

examples. After a short six year ban by the European Union on Genetically Modified Organisms, or GMOs, we see them now ubiquitously used across the world. All our major foods stuff from corn to potatoes and from canola to soya have bene genetically manipulated, their genes augmented and spliced with the genes of other animals and plants to achieve higher crop yields. Any scientist holding her salt in biology will tell you that we simply haven’t done enough research to understand what the complex, long term effects on us or the environment will be as a result of this ongoing, genetic tinkering. But we gamble for what we value – cheaper food. After a similarly sparse ban in the United States, research into the use of human embryonic stem cells was continued leaving the US behind the international community that largely did not impose such restrictions. Again, tinkering with our own genomes for the sake of health and longevity ups the stakes but we continue pushing the boundaries because the immediate rewards of enhanced longevity and a life free of preventative disease and suffering is so utterly attractive.

But let’s turn to the biggest historical iceberg we’ve managed to so far dodge, amazingly, since the end of the Second World War – nuclear weapons. The Treaty on the Prohibition of Nuclear Weapons (TPNW) was adopted by the United Nations in 2017. It was an event well covered, got some great fan fair but ultimately to this day lacks any real teeth. With the absent signatures of the United States, Russia, China, France and the United Kingdom (who are the five member states on the permanent Security Council, all of them nuclear enabled nations), the treaty isn’t worth the fancy paper it’s printed on. Superpowers aren’t simply not prepared to give up their deterrents and, in a standoff, unless it’s Hollywood, who’s seriously going to lower their guns first? Now, war between the Ukraine and Russia has raised the threat of nuclear war to levels not seen since the depths of the Cold War and as we all know the Russian president is all too keen to threaten the world with how comfortable his finger is over the big red button.

So when the Open Letter published on March 22, 2023, called for a temporary halt on large-scale AI experiments to prioritize safety, transparency, and cooperation, so that potential risks and negative consequences can be address while promoting the need for AI to be developed responsibly in alignment with human values came out – what was the reaction? If you’ve read it, you’ll know it’s a heartfelt, reasonable and altogether altruistic call. Like the TPNW it’s pretty hard to disagree with its sound moral reasoning. And you’ll find tens of thousands of notable signatories committing to the cause. But what you won’t find is Satya Narayana Nadella’s (from Microsoft), Nate Higger’s nor Sundar Pichai’s (from Google), Mark Zuckerberg’s (from Facebook) or Tim Cook’s (from Apple) signatures there. Are we seeing a trend? The only notable name that carries any gravitas on that list is Elon Musk who as of the week of this writing has seemed to have done a one eighty by setting his sights on his new start-up – BasedAI – to counter the success of Chat-GTP4.

The US & UK Governments (but mostly the former) have been the only ones to seriously take on big tech companies in the western world with anti-trust or anti-competitive legislation in hand and they’ve done so with less than admirable results.

For example: In April 2018, Mark Zuckerberg was hauled into the Senate Oversight Committee to answer questions about how Facebook was managing their user data. The hearing revolved around the Cambridge Analytica scandal, a political consulting firm that obtained data from millions of Facebook users without their consent, and then used this data to affect the course of the election that saw Donald Trump’s rise to power. Facebook was charged with privacy violations. Even though Mr Zuckerberg apologized on behalf of the company and Facebook would go on to change some of their policies to protect user privacy, there is still much despondence that these changes were not enough and to this day it remains unclear as to whether this entire exercise had a lasting impact on Facebook’s practices or the broader tech industry at large. What’s shocking though is that although Cambridge Analytica was found guilty of breaking UK law and closed its doors in 2018 after the scandal, its parent company, SCL Elections, still thrives and was slapped with a poultry £15,000 fine.

Similarly, back in the 1990s, Microsoft was involved in a high-profile antitrust lawsuit with the US government who went after the behemoth with accusations of anti-competitive maleficence. In 2000, a federal judge ruled Microsoft was a monopoly that had engaged in anti-competitive practices, such as bundling its web browser (Internet Explorer) with its operating system (Windows) and that this was an unfair advantage to Netscape which it had successfully trounced and which would not recover. Microsoft was ordered to be broken up into two separate entities. But a mere year later, the appeals court overturned this ruling and after a settlement with the government, it was never broken up and it was back to business as usual.

More recently, in 2020, the US Department of Justice (DOJ) filed a lawsuit against Google, for anti-competitive behaviour with its search and online advertising businesses. This case is still ongoing, but I wouldn’t hold a candle to the US Government ever really holding Google accountable.

The problem governments have in keeping big tech companies in check is that the brightest, sharpest minds that have created these complex technologies, don’t work for them – they enjoy the best benefits and highest remunerations on the opposing teams. What’s clear in all these cases is that the US government was unable to formulate a strong enough argument or muster enough political will to win. And what compounds this is money and influence and lobbying – which the corporates are the kings in using. The corporations simply have more and since again they are drawing from the best legal teams money can buy, what hope does government have to hold the corporate behemoths to account?

So with this state of affairs, what is the likelihood of international governments getting together to first formulate what the problems are with AI development and then actually agreeing to policy that will successfully institute a moratorium across the entire tech industry, involving all the big players from OpenAI to Google and their competitors in China, Russia and India? Not likely is the answer and if they try, they will always be many steps behind because regulation takes years and by the time it is realised it will be old news with AI innovation having advanced and accelerated towards new horizons not even on the governmental map.

So if the moratorium is not going to work and government regulative policy is unlikely to have any efficacy, what can be done?

It seems we’ll have to content ourselves with the only other realistic avenue available which is predicated on self-preservation. In the same way we trust other cars not to smash into us from oncoming traffic every day – while cars still don’t drive themselves – we must hope that these big companies take the problem of alignment seriously. And why wouldn’t they since it will ultimately safeguard the lives of their CEOs and shareholders and via proxy the rest of humanity.

AGI and the Herculean task of its alignment

So what is alignment all about and why is it so fundamentally important? When I asked Chat GPT 4.0 on April 8th 2023, to explain in 50 words what the main problem is facing alignment between AI and humanity, this was its output:

The main problem facing AI-human alignment is ensuring AI systems understand and respect human values, goals, and intentions. As AI capabilities advance, there is a growing risk that misaligned AI may inadvertently cause harm or prioritize its objectives over human well-being, leading to unintended negative consequences for society.

“As AI capabilities advance” is the key phrase here. It might not be obvious now with AI still narrow in its applications, but the problems we will face as AGI approaches will become bigger and far more complex. Many believe that it is one we must collectively solve if humanity as we know it wants to see the end of the 22nd century.

The analogy that is often sited is the well-meaning highway contracting company that inadvertently destroys the habitats or disseminates the populations of the local wildlife. In fairness, the contracting company, certainly none of the humans working there, hold any conscious ill will towards the ants and their nests, the rabbits in their warrens or the chimps feeding in the trees, but when the highway is done, most of these animals are going to experience varying catastrophic impacts because of the human directed goal of connecting one human settlement with another. It’s nothing personal – it’s just business.

But what it really boils down to with AGI is not the threat of killer terminator machines bent on total human genocide. It’s the threat of what we don’t know we don’t know and the unintended consequences of this. For example, another way to look at the problem of alignment is wonderfully illustrated by the “Paperclip Maximizer” first proposed by philosopher Nick Bostrom in his book “Superintelligence: Paths, Dangers, Strategies”. It’s basically a thought experiment that involves an AI that has been given the goal of producing as many paperclips as possible. This might seem a ludicrously harmless objective but some hectic problems arise if this task is given to a powerful AGI hell bent on valuing the pursuit of this goal over all others. It might, for example, hack into computer systems to acquire more resources and financing. It might manipulate humans and industry to align with its goal. It could even destroy entire ecosystems to maximise its need for infinite raw materials – all to create as many paper clips as it can. It might consume the entire world and end human civilisation to insanely serve up paperclips that will never be used. I’d rather be killed by a terminator than turned into paperclip – but it all ends the same.

The reality is that when AGI is achieved, it will exponentially evolve itself. Some experts estimate that the timeframe where an AGI and humans are equally intelligent could only last a matter of days – at most a few months. Thereafter, it will surpass human intelligence by increasing orders of magnitude on its way to becoming ASI or Artificial Super Intelligence where it is theorised, logically, we humans will find ourselves much like those frightened rabbits that can only dimly understand the terrifying rumblings overhead of machines and their processes we don’t understand, are probably unable to get out of the way of in time, and certainly unable to stop. Unless the alignments we have built into it ensure that when it builds that super planet size construction project or invents a faster than light speed vehicle that bends space-time as we know it, that it takes into account the safety of human beings over or while achieving these goals.

The first and most problematic obstacle to alignment is that we, as a collective, have no consensus amongst each other. Wars continue to rage, disparities of wealth continue to grow, and people still starve when there is more than enough food. Couple these with the growing cultural and religious division between the eastern and western empires and the increased political polarisations across a whole range of moral issues from transgenderism to abortion, from woke policies to freedom of speech, and it can quickly be surmised that we are just not on a good footing to teach artificial intelligence what we collectively believe is good, worthy and of value. What would the fact that nuclear weapons still exist in numbers that can destroy the world many times over teach a fledgling AI? Like the open letter to institute a pause on AI development, alignment is hugely more altruistic than anything else. Practically though, alignment currently is a nebulous undertaking and there are many that do not believe it is possible at this stage.

Unfortunately, it seems that the problems of alignment will be left to the various mega corporations that are investing billions of dollars into their various big AI projects. Because alignment is so vastly complex, it would follow that the only way to approach devising a system of alignment would be to enhance and evolve the very tool we must caution ourselves from developing beyond our control in the first place. A real chicken and the egg scenario, but for alignment to be successful, powerful enough AI, evolved beyond the basic magical mimickery we’re seeing today, must be achieved and partnered with. Otherwise, how can we be certain of the accuracy of this alignment beyond what is dimly visible now and which we are wholly incapable of doing on our own?

Leaving this in the hands of the corporations that are developing AI calls forth the dangers of hubris, because if you peer through all their best intentions of how AI is going to make the world a better place and usher in a new golden age for humanity, the fundamental values of these entities are still psychopathically aligned with the tenants of capitalism. In a word, profit is the colour that taints all its goals. Profits, or the lack thereof, have sparked an entire slew of massive layoffs over the past two years in every big tech company in the world. Human inefficiencies are being replaced by smarter, autonomous systems all the time, wherever they can. And as AI evolves, it’s not hard to imagine that it would garner an understanding, a pattern over time, that the common factor standing in the way of efficiency are humans themselves. Laying thousands of people off to achieve more profitability is in line with a corporation’s value system – period. And Corporations have no qualms about moving resources and assets to countries that have more relaxed labour laws, less corporate taxation or corrupt environmental policies. The will seize any opportunity that will supply them with the most fertile grounds to maximise their profits unmolested by fines or scrutiny. We read about this all the time and again as a reaction where we see governments applying flimsy fines that are far less than the profits made in transgression.

William Wordsworth famously penned “the child is the father of the man”. Psychologically, we know this to be truer than not, that nurture is such a powerful component of a healthy human psyche. Most humans behind bars didn’t have a favourable or healthy childhood. So it really begs the question then of what these corporations are truly teaching these AI systems – even with all their best intentions and their ability to talk the talk across our various online mediums of news and communication. If it is in hail of profitability, which is reflected in the behaviour of western civilisation, then I fear alignment with AI will have some serious dangers associated with it.

Conclusion

We are seeing, rapid, unprecedented changes to our world in the wake of OpenAI’s launch of Chat-GPT. There are many in the AI development sector who are very concerned at the rate of change powerful new AI models will have and the signed Open Letter calling for a moratorium to halt any further big launches for six months hangs in the balance. But without the signatures and commitment of those that control big tech on it, and while the billions continue to flow into project expansions, this moratorium will not succeed.

Governments similarly are ill equipped to initiate policies that will keep with the rapid rate of AI developments and protect civilisation from disruptions or calamity. They are out gunned in both knowledge, money and will as big tech companies grow in their power and influence because of their increased use of AI.

History shows that both the moratorium and policy making are half measures, reactive in nature and often not successful.

There is no turning back. There will be no luddites to smash this machine. There will be no collective call to wear wide brimmed black hates and grow long Zionistic beards in rejecting of progress. We won’t collectively decide to put down our devices or abandon such exciting, new tools either.

It seems we are too far down the road to slow down the birth of AGI whose emergence now is just an assured when, not if. So with the momentum, how we move forward, how successful big tech companies are at solving the monolithic task of alignment between humanity and what many have said will be our last, and our great tool, will be the most seminal point in our collective history.

However, our collective and individual empathetic failures towards each other and humanity as a whole, coupled with our continuing destructive relationship with environment, is unfortunately not a healthy standpoint to succeed with this task. If we fail at alignment, and we really only have one shot at getting it most right, then we stoke the greater possibility of spawning terrible unintended consequences. At this point, we can only dream what those might be.

If it is true that the child is the father of the man, unless humanity grows up – a lot – matures into a receptive, inclusive and conscientious and caring steward of our planet, the likelihood is that when AGI is realized, it will reflect all too much of our own destructive nature which won’t be good for either most or all of us. We can’t talk to the trillions of creatures that have perished under the human wheel of progress, but it would be reasonable to assume that they would have preferred to exist. I hope we teach AI enough of the right human values and goals so that may better secure futures for our children and our grandchildren.

Leave A Comment

JOIN MY NEWSLETTER

Be the first to get my latest news and stories, advanced copies, give aways and more.

NEW SCIFI BOOK!

*FREE WITH KINDLE UNLIMITED

Leave A Comment