Teaching A Computer To Draw_
2025
Code, Screen, Mac Mini, Plexiglass
In the 1960s, engineers at Bell Labs were obsessed with teaching computers to make
art. They believed that if you could teach a machine to draw, you could teach
it to think.
I’ve coded this funny and absurd algorithmic artwork as an attempt to get the computer to draw using its core desktop functions, as a standalone, closed-network unit. It sits in its Plexiglass box, separated from the room, not connected to the internet — quietly scanning images, looking for smiley faces,
hearts, eyes. The kind of symbols you see every day and forget about. Like a
car number plate that seems to spell out the initials of a loved one. Or a cloud that
looks a bit like a dog. You think it’s coincidence. The machine reads it as
data.
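The looking itself is mundane. Here is a minimal sketch of that kind of scan, in Python with OpenCV (the file names and the threshold are illustrative assumptions, not the piece’s actual code):

```python
import cv2
import numpy as np

# Illustrative file names -- not the artwork's actual assets.
image = cv2.imread("desktop_capture.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("smiley_template.png", cv2.IMREAD_GRAYSCALE)

# Slide the smiley template across the image and score every position.
scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)

# Anywhere the score clears a threshold, the machine "sees" a smiley.
threshold = 0.8  # arbitrary cut-off for this sketch
ys, xs = np.where(scores >= threshold)
for x, y in zip(xs, ys):
    print(f"smiley candidate at ({x}, {y}), score {scores[y, x]:.2f}")
```

A correlation score over a sliding window: no birds, no sunset, just the curve of a mouth reduced to a number.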
It’s trained on an ambiguous data set—imagery that can be interpreted, or misinterpreted, in multiple ways depending on the viewer. Symbols that coincidentally appear in our everyday lives. Like a rock that resembles a love heart. Or birds in the sky that look like a cartoon face. But the machine just looks for patterns: codified symbols like the smiley face. It doesn’t see the birds or the sunset. What
does it understand of the world if it only looks for patterns and symbols? What
does the computer see when it scans images—and how different are we, as humans?
We, too, instinctively see things as more than they physically are. We seek meaning and understanding in a chaotic, noisy stream of sensory stimuli, looking for something we recognise, but sometimes reading into things codes that they may not inherently possess. Through language, we have the unique
ability to share our understanding of the world, collectively shaping, assigning and reimagining the symbols that come to define our shared reality and our understanding of the present moment in time. This search for
meaning is how we make sense of the world relative to our place in it. In doing
so, we participate in the ongoing flux of culture. But what is lost when we use
the computer as the lens to see the world through? And, now with AI, we don’t just
use computers to see the world, we talk about our understanding of reality with
them, and expect a response?
That’s what Sherry Turkle — a researcher at MIT — is worried about. She said that if
we let machines into our emotional lives, we’ll end up in relationships that
are only about ourselves. She thought that talking to a robot — even a helpful,
sympathetic one — would make us worse at being human. It sounds melodramatic.
But it turns out to be weirdly prescient.
“those who succumb to the seductions of robot
companionships will be stranded in relationships that are only about one person…the
absence of the emotion [on the part of the computer] reduces the scope of
rationality [for the human] because we literally think with our feelings.”
Turkle believes that our feelings shape a part of how we think. And if we strip those out — because machines don’t have them — we’re left with
something cold and closed off to the world. As humans, we don’t see clearly. We
are social animals, tainted by our being and our proximity to each other. We are
very different from computers. But what does the computer see?
“Into the head? Down into the heart? Does it
see into me? Into us? Clearly or darkly? I hope it does see clearly, because I
can’t any longer see into myself. I see only murk. I hope for everyone’s sake
the scanners do better, because if the scanner sees only darkly, the way I do,
then I’m cursed, and cursed again.”
Philip K. Dick stole that from the Apostle Paul, or the idea at least. It’s from his novel A Scanner Darkly, but it first appeared in 1 Corinthians 13:12 (King James Version): “For now we see through a glass, darkly; but then face to face: now I know in part; but then shall I know even as also I am known.” Paul originally wrote in Greek; the King James translation into English came in 1611. At the time of writing and translation, “a glass” referred to what we know today as a mirror.
In A Scanner Darkly, Bob Arctor, the protagonist, is literally split
between his identities: the narcotics agent “Fred” and the addict “Arctor.” He
sees footage of himself, surveils himself, and begins to misrecognize himself
in the most literal and painful sense. Jacques Lacan would have loved
this. He thought we’re all trapped in some kind of symbolic maze. That we think
we see ourselves in the mirror — but actually it’s just a fantasy. A version of
us we wish were real.
In Lacanian
psychoanalysis, the mirror stage is when a child first identifies with its
reflection, creating a misrecognized but cohesive image of the self—a fantasy
of unity that papers over the actual fragmented, chaotic experience of being. Lacan
would insist that the subject, in this case Bob Arctor, is always
divided—between the symbolic (language, law, societal identity) and the real (raw experience, unrepresentable truth). Like Arctor, you, reading this, are not self-contained. You are spoken, seen, and structured by
something beyond you—and that something is always partly unknown.
The self is seen but never truly grasped; unity is illusion.
We could swap self for truth here: truth is experienced but never truly grasped; understanding is illusion. Both the self and truth are prejudiced, relative illusions,
built on top of the framework of learnt language, societal traditions and subjective
interpretation of sensory stimuli. We’re
thrown into a state of confrontation—with radical choice (we are free, so must
choose who we are, even without certainty) and radical otherness (we are shaped
by forces that we don’t control—language, desire, the unconscious—and can’t
fully understand). Lacan might argue that Arctor is a subject split across the Imaginary (how he sees himself), the Symbolic (the role he plays in society), and the Real (the raw, ungraspable truth
of his breakdown), unable to unify his identity, and so he becomes lost to
others and to himself.
Through societal alienation and individual cognitive
architecture, we are cursed. Unable to know anything fully. Trapped into only
having one personal truth and one personal self at a moment in
time, in a world of multiple registers of reality. Condemned to a life of altered states. Alone together, wondering: now may be all there is.
Confusing stuff, but how does this relate to the computer?
The thing is, this curse is relevant only to us, as humans. The
way one brain is wired, at a particular moment in time, is painfully alien to
all other human brains, and even to its own past and future. Our way of thinking is unique,
constructed through lived experience. This means, unlike computers, it’s near
impossible to transplant ideas from one person to another cleanly. Our best
attempts are slow, murky and clumsy, completely abstracted and above all,
entirely tainted by the receiver.
[Video: Geoffrey Hinton talking about this.]
We do this by using verbal and visual language as our way of sharing our relative perception of the world. Then we figured out how to distil information in the form of symbols, writing, records, audio and data.
This meant we could build a wealth of knowledge that can surpass our finite moment in time, or what one person could learn in a lifetime. The science fiction author William Gibson thinks this tradition becomes like a prosthetic memory, completely changing what it means to be human. It captures something of the present moment, compresses it into something duplicatable and distributable, and sends it into the future, so that another self, a different self, can consume, interpret and learn the distilled information relative to their own brain.
This, in a way, is probably the most important thing we do as humans. We are all wired so differently, allowing for an array of cognitive diversity within our species, and if we have learnt anything from evolution, then diversity, along with an enormous amount of luck, is the key to survival.
But it would seem this might turn out to be our Achilles’ heel: cognitive diversity and unique training data make it demonstrably inefficient for us to share information.
[Video: William Gibson talking about this.]
Back to computers, which are really fantastic at sharing information. Computers and AI systems—particularly large neural networks—can be trained using massive parallelism. This means they can be split across thousands of GPUs or TPUs, processing enormous datasets in parallel. Once trained, the resulting model contains the "knowledge" distributed across its weights, and can be copied and deployed anywhere. They share the same thinking architecture, so knowledge can be copied and stored. Known once, known by all, forever.
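A minimal sketch of what that copying amounts to, in Python with PyTorch (the toy network is an illustrative assumption, not any particular system):

```python
import torch
import torch.nn as nn

# A toy network; its "knowledge" lives entirely in its weights.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))

# Serialise the learned weights to a file...
torch.save(model.state_dict(), "weights.pt")

# ...and load them into a fresh copy with the identical architecture.
clone = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
clone.load_state_dict(torch.load("weights.pt"))

# Identical weights, identical answers: the knowledge transferred losslessly.
x = torch.randn(1, 64)
assert torch.equal(model(x), clone(x))
```

No language, no misreading, no receiver to taint it. The transfer is exact in a way human communication never is.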
But here’s the thing: the computer doesn’t care.
It doesn’t have a self to get confused about. It doesn’t care if the smiley
face means happiness, irony, or sarcasm. It just logs the curve of the mouth and
tries to replicate it. That’s the difference. Because humans don’t just process
information. We project. We fantasise. We misunderstand — in ways that
are rich and complex and culturally shaped, and completely stupid. That is what
culture is, and what we have been building since our ancestors developed
language. Even if a computer can draw, I’m not sure we could recognise it as thinking
unless it could also experience feelings and wonder about itself and its place
in the world.
It just draws. Because that’s what I told it to do.
References: Alone Together by Sherry Turkle; Geoffrey Hinton’s research on ambiguous
line drawings and probabilistic models in AI learning; internet culture; Stalker by Andrei Tarkovsky; Philip K. Dick’s novels A Scanner Darkly and VALIS;
Jean-Paul Sartre’s concept of le regard; and Jacques Lacan’s theory of the
gaze.