April 12, 2024

AI: Awkwardly Intelligent

AI models are an incredible and unprecedented tool. We’re only just working out how much they can do for us - so now is the time to ask: how much should they do for us?

To explore the idea that AI is not 'creative', AI-generated images have been used throughout this blog post.

Artificial Intelligence (AI) cannot create 'new' ideas. Granted, generative AI is particularly good at appearing to create, but all it's really doing is regurgitating information in an attractive manner. Generative AI can only reassemble ingested information, in a style it already recognises.

Image 1: AI depiction of AI information regurgitation (This image was created by AI)

An interesting extrapolation of this regurgitation would be training an AI model on AI-generated content: continuing this process iteratively could ultimately lead to an approach to solution generation, or 'synthetic thought', that is dissimilar to that of the original training content. Would this be original ideation? Or would it just be a repeated transformation of information? It's a point of uncertainty that leads to the question: how do humans learn? Are we capable of original thought, or do we just connect experiences, learn from others, and build up knowledge and thought patterns over days, months, years, and generations? In fact, what happens when AI models are trained upon AI-generated data is much less existential: model performance enters a bit of a tailspin, for now at least (Rao, 2023).
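
To make that 'tailspin' a little more concrete, here is a minimal, hypothetical sketch in Python: a simple categorical distribution is repeatedly re-estimated from samples drawn from the previous estimate. All of the numbers are invented for illustration; this is not a simulation of any real AI system, only an intuition pump for why training on generated data tends to lose diversity.

```python
import numpy as np

# Toy illustration of training a "model" only on the output of the previous model.
# A categorical distribution is re-estimated, generation after generation, from
# samples drawn from the last estimate. Rare outcomes that happen not to be
# sampled disappear permanently, so the number of outcomes the model can produce
# can only shrink. A deliberately simplified stand-in for the degradation
# described by Rao (2023), not a simulation of any real AI system.

rng = np.random.default_rng(0)

n_outcomes = 20
true_probs = 1.0 / np.arange(1, n_outcomes + 1)   # Zipf-like: a few common outcomes, many rare ones
true_probs /= true_probs.sum()

probs = true_probs.copy()
sample_size = 200  # each "generation" sees only a finite sample of the previous one

for generation in range(1, 31):
    counts = rng.multinomial(sample_size, probs)
    probs = counts / sample_size                  # "retrain" purely on generated data
    if generation % 5 == 0:
        surviving = int((probs > 0).sum())
        print(f"generation {generation:2d}: {surviving}/{n_outcomes} outcomes still produced")
```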

Something that we innately do is prioritise information: morals, values, and ideals that individuals live by are prioritised frameworks that ultimately shape decisions that we make every day. These decisions, small or large, are a product of our individual intrinsic ideals. And individual they certainly are: although there are moral similarities amongst people the world over, our varied cultures, societies and experiences shape the way that we perceive situations and hence the decisions we make. There is no aligned global moral compass, and no ‘correct’ way to behave.

Image 2: AI depiction of a global moral compass (This image was created by AI)

Suppose that I trained a generative AI model with a programmed moral framework, using some prioritisation algorithm to ensure that the output of the model had a concept of 'right' and 'wrong'. I would develop it in line with my own ethical ideals; otherwise, I would be knowingly and willingly compromising on what I think is correct. So, why would anyone else want to use my hypothetical AI model for decision-making on a moral level? Would it even be ethical to allow people to use the model to make decisions? I certainly would not suggest that there is a correct set of morals, but I do think that holding conviction in your own moral system is a good starting point. I am no better placed than anyone else to force my ideals upon others, so why should I expect anyone to blindly follow what I believe is correct?

Image 3: AI depiction of an AI assisted decision making system, vaguely inspired by the ‘Trolley Problem’ (This image was created by AI).

I don't think anyone can rightly force a school of thought upon another, yet with automated AI decision-making, in some way this would have to happen. Ultimately, I think this is why ethics in AI is a thorny subject. It's possible to train a model free of bias for a specific purpose or a discrete task. But once you delve into the realm of decision-making on a more general level, there must be some generalised rules, boundaries, and priorities that the model considers. A model trained to prioritise universal human wellbeing would produce a different response from a model trained to prioritise the earth if asked to determine an optimal solution for global transport.
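
As a rough, hypothetical illustration of that last point, the sketch below scores three invented transport options against two invented priority weightings. The options, scores and weights are all made up; the only takeaway is that the same data under different priorities yields a different 'optimal' answer.

```python
# Minimal sketch of how a decision system's "priorities" change its answer.
# The options, scores and weightings are invented purely for illustration.

options = {
    # option: (human wellbeing score, environmental score) on an arbitrary 0-10 scale
    "expand air travel":       (9, 2),
    "high-speed rail network": (7, 7),
    "cycling infrastructure":  (4, 9),
}

def best_option(weights):
    """Return the option with the highest weighted score for a given priority weighting."""
    w_human, w_earth = weights
    return max(options, key=lambda o: w_human * options[o][0] + w_earth * options[o][1])

print(best_option((0.8, 0.2)))  # prioritise human wellbeing -> "expand air travel"
print(best_option((0.2, 0.8)))  # prioritise the planet      -> "cycling infrastructure"
```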

The point I'm trying to make is that in order for AI to make decisions, somewhere along the way, the model must have picked up cues as to what solution to head towards. In reality, it isn't so linear: models that have ingested vast quantities of human-created information have an awareness of different moral frameworks, and thus can assimilate different personas. Does this mean AI is capable of evil? We are already seeing that it is possible with 'jailbroken' AI models that have been cleverly and maliciously prompted, where the model returns uncharacteristic answers to satisfy the user. Now, this is a misuse of the tool - but it does show that the breadth of knowledge contained within the model parameters encompasses the facility for wrongdoing.

A hard-coded set of morals implies the force-feeding of one single moral framework to all users, whilst the idea of a generalised AI model with suggestible morals that morph to the desires of the user becomes very problematic, very quickly. Should people trust a model that attempts to make decisions that they, as an individual, would make? Do you then have a tool that, by virtue of serving the user, assimilates their moral stance and has the capacity to cause harm to others? It raises another question around who is actually responsible for a model’s decision, when a large model is a labyrinth of data that could have been trained by one body, designed by another, hosted by yet another and then used by an independent end user. The simplest solution is that AI is never used to make decisions with ethical implications.

Currently, generative AI models, such as ChatGPT, are built with the sole aim of creating answers that satisfy the user. By training models on human feedback, their top priority, in effect their 'moral framework', is to provide a response that the user likes. This method of model training, 'Reinforcement Learning from Human Feedback' (RLHF), was a real breakthrough in getting complex models to behave in the 'correct' way (Christiano et al., 2017). As a result of rapid progress in AI development over the past few years, we now have access to fantastically useful tools. Tools that can reference a thousand libraries in the blink of an eye, answer complex questions quickly and, ultimately, satisfy us with their response. But should these tools be trusted to make decisions that dictate our lives as individuals, a community, or a society?
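
For the curious, here is a heavily simplified sketch of the idea behind RLHF reward modelling as described by Christiano et al. (2017): a reward model is fitted so that responses humans preferred score higher than the ones they rejected, via a pairwise logistic loss. The linear 'reward model' and the synthetic preference data below are assumptions for illustration only; real systems use large neural networks and human-labelled comparisons, and then optimise the language model against the learned reward.

```python
import numpy as np

# Toy sketch of the pairwise preference loss at the heart of RLHF reward modelling.
# The reward model here is just a linear function over made-up features.

rng = np.random.default_rng(1)

n_features = 8
w = np.zeros(n_features)                      # parameters of the toy reward model

# Fabricated preference data: each pair is (features of the preferred response,
# features of the rejected response). In practice these come from human labellers.
preferred = rng.normal(0.5, 1.0, size=(256, n_features))
rejected = rng.normal(-0.5, 1.0, size=(256, n_features))

learning_rate = 0.1
for step in range(200):
    margin = preferred @ w - rejected @ w     # reward difference for each pair
    p_correct = 1.0 / (1.0 + np.exp(-margin)) # probability the model ranks the pair correctly
    # Gradient of the negative log-likelihood of the human preferences (Bradley-Terry style)
    grad = -((1.0 - p_correct)[:, None] * (preferred - rejected)).mean(axis=0)
    w -= learning_rate * grad

print(f"pairs ranked correctly: {(preferred @ w > rejected @ w).mean():.0%}")
```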

Referencing my own moral framework, I think not. Maybe AI will have the answer. But should we listen?

Author: Morgan Walters

Morgan is always interested in discussing AI utilisation, and in fact there's another blog post on the horizon considering how we currently use AI. Get in touch with him at:

Morgan.walters@empirisys.io

linkedin.com/in/morganrwalters

References:

Christiano, P. et al. (2017) 'Deep reinforcement learning from human preferences', Advances in Neural Information Processing Systems 30 (NIPS 2017).

Rao, R. (2023) ‘AI-Generated Data Can Poison Future AI Models’, Scientific American, 28 July.
