By Jonathan Kahan

Creative Chimeras and Evolved Antennas: Can LLMs be Creative?



I recently listened to François Chollet, the AI luminary who created Keras, the ARC benchmark and a host of other things, on Sean Carroll’s podcast. It’s a highly recommended listen.


Chollet argues that LLMs cannot be creative, because all their output is somehow contained in their training data. All an LLM does, according to Chollet, is to “fetch programs” or patterns from its training set. Everything an LLM “knows”, it has memorised. It’s an argument that many people make, in line with the metaphor of LLMs as a “stochastic parrot”.

I have to say, I find this statement puzzling.

  • LLMs certainly seem able to produce creative outputs, ones that most definitely were not in their training set. For example, they can invent a story about an imaginary creature they have never heard of.

  • Taking the semantic distance of an output text from a standard corpus as a measure of creativity, current state-of-the-art LLMs already perform as well as humans on several tests (Hubert et al. 2024; Bellemare-Pepin et al. 2024), and these scores have been found to correlate with human-made creativity evaluations. Other research has found that appropriately prompted LLMs are actually pretty good at ideation tasks (Mollick et al. 2023). The sketch below shows the gist of the distance measure.
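To make that measurement concrete, here is a minimal sketch of scoring an output by its semantic distance from a reference corpus. The cited studies use learned embedding models and validated test batteries; the TF-IDF vectors, the tiny corpus and the candidate sentence below are simplified stand-ins of my own invention.

```python
# Toy "divergence" score: how far does a candidate text sit from a
# reference corpus? (TF-IDF is a crude stand-in for the embedding
# models used in the cited studies.)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference_corpus = [
    "The cat sat on the mat and watched the birds outside.",
    "She drove to work, answered her emails, and went home.",
]
candidate = "A glass whale swam through the library, shelving forgotten dreams."

# Fit one shared vocabulary over corpus and candidate.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(reference_corpus + [candidate])
corpus_vectors, candidate_vector = matrix[:-1], matrix[-1]

# Higher distance from the corpus = more "divergent" output.
similarity = cosine_similarity(candidate_vector, corpus_vectors).mean()
print(f"semantic distance from corpus: {1.0 - similarity:.2f}")
```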

Chollet would argue that the machine does not need to have seen my specific case: it is enough that it has seen similar ones, whose patterns it can follow in its response. But is this really so different from what humans do?

Isn’t all creativity, in some way, the recombination of pre-existing inputs? Take the most creative person in the world, say Pablo Picasso. Isn’t all of his work a reuse of elements he had somehow experienced, in art or in real life? After all, it is Picasso who allegedly said that “great artists steal”.




“Well, today we’re introducing three revolutionary products of this class. The first one is a widescreen iPod with touch controls. The second is a revolutionary mobile phone. And the third is a breakthrough internet communications device. Oh, and this is one device.” Creative breakthroughs are novel recombinations of pre-existing concepts (or products).



So what kind of creativity do people who hold this position expect from machines?

In the podcast, Chollet gives the example of an antenna designed by a genetic algorithm, which is allegedly a truly novel finding. In what way is this different from what LLMs do?
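For a feel of what that pipeline involves, here is a toy genetic algorithm over a handful of antenna parameters. The fitness function below is an invented stand-in (the real evolved-antenna work scored candidate designs with an electromagnetic simulator), but the loop of selection plus semi-random mutation has the same shape.

```python
# Toy genetic algorithm: evolve the bend angles of a 5-segment wire.
import random

N_SEGMENTS = 5                  # one gene per wire-segment bend angle
POP, GENS, MUT = 50, 100, 0.1   # population size, generations, mutation scale

# Hypothetical objective standing in for an antenna simulator:
# reward designs whose angles approach an arbitrary target pattern.
TARGET = [0.3, -1.2, 0.8, 2.0, -0.5]

def fitness(genes):
    return -sum((g - t) ** 2 for g, t in zip(genes, TARGET))

def mutate(genes):
    # Semi-random variation: jitter every angle slightly.
    return [g + random.gauss(0, MUT) for g in genes]

population = [[random.uniform(-3, 3) for _ in range(N_SEGMENTS)]
              for _ in range(POP)]

for _ in range(GENS):
    population.sort(key=fitness, reverse=True)   # fittest first
    survivors = population[: POP // 2]           # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP - len(survivors))]

best = max(population, key=fitness)
print("best design (bend angles):", [round(g, 2) for g in best])
```

The search explores a small, well-defined parameter space; novelty comes from random mutation plus selection pressure, not from recombining concepts.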

While I am in no position whatsoever to challenge an authority like Chollet, I humbly put forward a couple of explanations for what I think is going on.

  1. A misunderstanding of creation problems. The problems that LLMs can’t solve are not really creativity problems at all. ARC problems (from the Abstraction and Reasoning Corpus, a set of problems humans solve easily and machines very poorly) have a limited number of clearly defined parameters (pixels) and one correct answer. The antenna problem similarly has a limited number of clearly defined parameters (the shape and size of the antenna in three dimensions), and while it may not have a single answer, it can be modelled as an optimisation problem, exactly the kind the toy sketch above illustrates.


    This is not at all what most creation problems look like. Whenever we talk about human creativity, we think of problems where there is a near-infinite number of input parameters, or where the parameters are porous or poorly defined: defining a strategy, creating a product, writing a story. Within each of these problems there may well be optimisation sub-problems, but the problem as a whole has far too many “dimensions” to be modelled deterministically. Instead, humans tend to solve these problems by recombining concepts and patterns from other problems they have solved. This type of problem-solving is definitely amenable to LLMs. In fact, it is precisely recombination, the very thing LLMs are good at, that we think of as creative genius.

  2. Ignoring emergent effects. By what mechanism are LLMs able to perform such recombination? Are they actually recombining concepts, or simply tokens? Indeed, LLMs do fetch patterns from their training set, and our intuitions are challenged by the sheer number of examples LLMs have seen. But as recent interpretability research from Anthropic has demonstrated, concepts form spontaneously in the activation patterns of LLM neurons. These do not necessarily correspond to our concepts: after all, we form our model of the world through multi-sensory interaction, whereas all LLMs see is text (or images). But at this point it seems hard to deny that there is some model there. Directions in activation space can be used to boost, diminish or, crucially, combine these concepts, as the toy sketch after this list illustrates.


    So if at one level it is true that all an LLM does is “fit the curve”, at a higher level of coarse-graining it answers a query by combining concepts it has learned in new ways. This actually seems much closer to human creativity than genetic algorithms are, since those are creative in a different way (the same way nature is: by generating semi-random mutations and selecting among them).
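As a cartoon of what “combining concepts” could mean geometrically, here is a toy example with invented 3-d vectors. Real interpretability work operates on high-dimensional model activations, so the names and coordinates below are purely illustrative.

```python
# Toy concept recombination: add the vectors for known parts and see
# which known concept the blend lands nearest to. The coordinates are
# made up; real features live in high-dimensional activation space.
import numpy as np

concepts = {
    "chimera": np.array([0.9, 0.8, 0.1]),
    "lion":    np.array([0.9, 0.1, 0.0]),
    "goat":    np.array([0.1, 0.8, 0.0]),
    "snake":   np.array([0.0, 0.1, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# "Recombination": sum the parts...
blend = concepts["lion"] + concepts["goat"] + concepts["snake"]

# ...and find the nearest existing concept.
nearest = max(concepts, key=lambda name: cosine(concepts[name], blend))
print(nearest)  # -> "chimera" in this toy geometry
```

In this made-up geometry, adding lion, goat and snake lands nearest to chimera: old parts recombined into a “new” creature.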



In short: it may very well be that LLMs are not the path to AGI, and it is true that there is a large category of problems they struggle with. But here is the thing: LLMs are “controlled hallucination” machines. Creative recombination is at their very core. So I am extremely optimistic about LLMs’ ability to help us break new ground in the arts and humanities, as well as in a swath of scientific problems that rely on recombination (e.g. protein engineering).

If you found this post interesting, you might want to join the LinkedIn group 🤖 Generative AI for Problem-Solving & Innovation 💡!

Or follow me on LinkedIn.


Thanks for reading!



