
Everybody Be Cool: Why We’re the AI Problem

In May 2023, after the first one in March, a further open letter went out demanding a moratorium on the development of artificial intelligence, presumably because it has quickly evolved into a vehicle for humans outsourcing their thinking to a machine.

I’ve seen Idiocracy. I know how this story ends. However, being cryogenically preserved to emerge into a hilariously apocalyptic future aside, Idiocracy isn’t about AI. It’s about the demise of the human race because of us. From Ow! My Balls to Brawndo, we may laugh at the sub-intelligentsia of the future, but this isn’t new. In 1915 the term dysgenic came into use as an antonym of eugenic, describing the degeneration of what we might today call emotional intelligence. So Idiocracy is more about what we’re doing to ourselves than anything AI is doing.

Dysgenesis is not my favourite band anymore

Artificial Intelligence may well have captured the imagination of humanity, provoking a range of emotions from awe and excitement to fear and apprehension. The latter sentiment stems from concerns about AI's potential to replace humans in the workforce, invade privacy, or even turn against its creators. But come on everybody, it is essential to recognise that the unease surrounding AI often arises not from the technology itself, but from our own negligence in how we share information. By promoting diligent information sharing practices, we can foster a healthier relationship with AI, alleviating unnecessary fears and embracing its vast potential.

AI has made remarkable strides, enabling machines to perform complex tasks that were once solely within the realm of human capabilities. From language translation to image recognition and even medical diagnoses, AI has proven its efficacy in numerous domains. Nevertheless, the fear of AI stems from its perceived ability to exceed human intelligence and render certain professions obsolete. And that is the big fear: not the human intelligence part (that went in 1915, when someone found a word to describe its demise), but the rendering of certain professions obsolete.

Some professions should be obsolete; I mentioned this already in my article about Industry 4.0. Jobs that involve repetitive and predictable manual tasks, such as assembly line work or basic data entry, may see a higher level of automation with the help of AI-powered robots and machines. This isn’t new, it has been happening since the 1960s or before, but everything from customer service to routine analytical work can now be done wonderfully with AI. In the UK, BT has already started cutting jobs. That isn’t very kind, but who is training the AI? More than likely it’s the customer service folks, who are now prompt engineers, and goodbye middle management. It is important to emphasise that AI is not designed to replace humans but rather to augment our abilities and enhance productivity.

AI systems like ChatGPT learn from vast amounts of data to generate responses and make informed decisions. Data plays a crucial role in training these models, and the quality and diversity of the data profoundly impact their capabilities. When concerns arise about AI behaving inappropriately or generating biased responses, it often reflects biases present in the data it was trained on. Ensuring the ethical use of data and employing robust methodologies for data collection and curation can help mitigate these concerns.
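The point that bias lives in the data, not in the machine, can be shown with a toy. The sketch below is my own (all names and data are hypothetical, and real LLMs are vastly more complex): a naive word-count “model” that has no opinions at all, yet faithfully reproduces the slant of whatever it was trained on.

```python
# Toy sketch: a word-count "model" inherits whatever bias its
# training data carries. Hypothetical data, illustration only.
from collections import Counter, defaultdict

# In this made-up corpus, every mention of "manager" happens to be negative.
training = [
    ("the manager ignored us", "negative"),
    ("the manager was rude", "negative"),
    ("the engineer fixed it", "positive"),
    ("the engineer was helpful", "positive"),
]

# Count how often each word co-occurs with each label.
word_labels = defaultdict(Counter)
for text, label in training:
    for word in text.split():
        word_labels[word][label] += 1

def predict(text):
    """Vote with the label counts of every known word in the input."""
    votes = Counter()
    for word in text.split():
        votes.update(word_labels.get(word, Counter()))
    return votes.most_common(1)[0][0] if votes else "unknown"

print(predict("a manager"))   # -> negative
print(predict("an engineer")) # -> positive
```

The model never “decided” managers are bad; the skew was baked into the four training sentences, which is exactly the mechanism, writ small, behind biased outputs from much larger systems.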

Wait, stop. Just because data is used to generate information, does it really follow that we should stop thinking about what we’re sharing? One of my biggest nightmares of this AI age is the relationship between large language models (namely OpenAI’s) and Microsoft (which has spent billions of dollahs, say it like RuPaul, on OpenAI). Why would a genius like me be bothered by the paperclip guy? Well, in May 2023, Microsoft delivered a keynote at Build, its developer conference in Seattle, underlining how OpenAI’s LLM would be deployed in every single Microsoft product. They’re not alone: Atlassian, Amazon, Google, all the tech giants and fintech supremos like Morgan Stanley are using some LLM.

Halo/Turd polishing

One common fear associated with AI, particularly chatbots like ChatGPT, is their potential to spread misinformation. As though somehow we live in an honest and transparent world. Wow. This is a massive Idiocracy moment for me. In a world of war, intolerance, famine, malnutrition and abject depravity towards children and vulnerable adults, are we really equipped to judge a computer, which is really a database of study, against our virtues? I lol hard at this, especially when my friends get triggered about AI. Seriously though, if that’s even possible after that last statement, it is crucial to recognise that the source of misinformation lies in the data we feed these models. AI systems do not possess inherent biases; rather, they learn from the information we provide. We possess inherent biases though, don’t we? If you want to explain this to your mum, it’s like this: use your brain when sharing information, do your fact-checking, and verify the accuracy of the data you receive in your head through your eyes. That is the only way we can prevent the propagation of misinformation through AI systems.

Your own / personal / data

AI's ability to analyse vast amounts of personal data raises legitimate concerns about privacy and security. While AI models require access to data to learn and improve, it is imperative to establish robust safeguards to protect sensitive information. Make some rules about how you manage data; better yet (if you’re Web 2.0), follow GDPR and GDPR-K. If you are Web3-focused, self-police or DAO yourself towards comprehensive data protection. Either way, adopting strong data protection measures, anonymising data where necessary, and fostering transparent practices in data handling can alleviate these concerns and build trust in AI systems.
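What does “anonymising data where necessary” look like in practice? Here is a minimal sketch of my own, not a compliance tool and certainly not a substitute for a proper GDPR process: scrub the obvious personal identifiers out of text before it ever leaves your systems for an AI service. The patterns are illustrative and will miss plenty of edge cases.

```python
# A minimal sketch (not a compliance tool): redact obvious personal
# identifiers from text before sharing it with an AI service.
import re

# Illustrative patterns only; real PII detection needs far more care.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def anonymise(text: str) -> str:
    """Replace each match of each pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(anonymise("Contact jo@example.com or +44 20 7946 0958"))
# -> Contact [EMAIL] or [PHONE]
```

The design choice matters: redaction happens on your side of the wire, so what you never send, no model can learn, leak, or regurgitate.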

To address the fears surrounding AI, it is essential to prioritise responsible AI development. This includes promoting transparency in AI algorithms (yes, you can control algorithms), actively engaging in ethical AI practices, and involving diverse perspectives in the development process. By involving experts, policymakers, and the general public, we can collectively shape AI systems that align with our values and aspirations. Establishing and adhering to ethical guidelines for AI development and deployment is crucial. Tech companies should have robust review processes in place to ensure that AI systems and algorithms are not used in ways that violate user privacy or perpetuate biases. Ethical considerations should be an integral part of the AI development lifecycle.

I love Idiocracy, but if you haven’t seen it, I won’t spoil it for you, because you are already living it. AI is a transformative technology with the potential to revolutionise numerous industries and improve our lives in countless ways. However, unfounded fears about AI often stem from our own negligence in how we share information. I’m talking about taking AI across all your applications; I’m talking about trusting your AI because you’ve “put so much of myself into it, ChatGPT just gets me” (if this is you, please, let’s end our connections on any platform). By adopting diligent information sharing practices and actively engaging with AI technologies to stop dysgenic rule-making, we can foster a more harmonious relationship with AI, mitigating unnecessary fears and embracing the vast potential it holds for humanity’s future.

Wait, I’m not sure this is a good enough ending. Let me defer to President Camacho for this one: “Shit. I know shit's bad right now, with all that starving bullshit, and the dust storms, and we are running out of french fries and burrito coverings. But I got a solution.”

