By now it is well-trodden ground that Artificial Intelligence is becoming ever more advanced and autonomous in its capabilities. Whether one believes the singularity is coming or dismisses it as a storm in Ray Kurzweil’s teacup, the fact remains that we need to start thinking more critically about the knowledge base of the computer. Computers have had the ability to amass knowledge for decades now, and machine learning enables them to build knowledge from their surroundings and apply it in their decision-making processes.

Perhaps it is time to start thinking of an epistemology of the computer: a theory of knowledge with its own methods, criteria and scope. Making this leap comes, in my view, with a responsibility to bring to bear all the critical frameworks that we apply to human modes of knowledge. In this blog post I want to touch on a feminist approach.

I recently attended a fascinating seminar by Arthur Miller, an emeritus professor in Science and Technology Studies at UCL. Miller’s view was that computers categorically demonstrate creativity and will continue to do so in increasingly impressive ways.

Miller talked of the deep neural networks behind today’s most advanced systems and the hidden layers that sit between the standard input and output layers. The deeper the layer, the more ‘abstract’ its representations become, much like in the brain. These layers often prove difficult for human beings to decipher, as they often don’t appear to correlate with the questions being asked of the computer. In Miller’s view, this imbues the computer with creative potential.[i]
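The layered structure Miller describes can be sketched in a few lines. This is a toy illustration with random weights, not a trained network: each hidden layer re-describes the previous layer’s output, and it is these intermediate representations that humans find hard to decipher.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common non-linearity applied between layers.
    return np.maximum(0, x)

# A toy feedforward network: an input layer, two hidden layers, an output layer.
layer_sizes = [8, 16, 16, 3]  # input -> hidden -> hidden -> output
weights = [rng.normal(0, 0.5, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Return the activations of every layer for input x."""
    activations = [x]
    for W in weights:
        x = relu(x @ W)
        activations.append(x)
    return activations

acts = forward(rng.normal(size=8))
for i, a in enumerate(acts):
    print(f"layer {i}: {a.shape[0]} activations")
```

Only layer 0 (the input) and the final layer are ever shown to the outside world; the two 16-unit hidden layers in between are the ones whose contents resist easy human interpretation.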

Deep neural networks acquire their abilities through ‘training’. In Google’s Deep Dream, for instance, a network was initially trained on images that included many animals; the system was then run so as to amplify the patterns it had learned, with the result that it ‘saw’ animals everywhere.
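The amplification step can be sketched very simply. This is a minimal illustration of the gradient-ascent idea behind Deep Dream, not Google’s actual implementation: the ‘detector’ `w` and the ‘image’ are random stand-ins, and a single linear unit stands in for a whole trained network.

```python
import numpy as np

rng = np.random.default_rng(1)

# A single linear 'animal detector': its activation is the dot product
# between the (flattened) image and a learned template w.
w = rng.normal(size=64)       # stands in for a trained feature detector
image = rng.normal(size=64)   # stands in for an arbitrary input photo

def activation(x):
    return float(x @ w)

# Deep Dream-style gradient ascent: nudge the image to increase the
# detector's activation, so the learned pattern emerges everywhere.
# For a linear unit, the gradient of (x @ w) with respect to x is w itself.
step = 0.1
before = activation(image)
for _ in range(50):
    image = image + step * w
after = activation(image)

print(f"activation before: {before:.2f}, after: {after:.2f}")
```

The point is that nothing new is perceived: the image is pushed toward whatever the network was already trained to detect, which is exactly why the training material matters.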

However, just as we ask how education shapes human beings, we must ask what material is used to train these computers.

If they are trained on Western intellectual history, the art historical canon and the popular culture of recent times—as I suspect most are—there is a grave danger that these computers will develop the same hetero-normative values that have dominated human history. Far from a Brave New World of gender liberation, we are likely to create computers with the same racial and gender biases that exist in our own societies.

Examples of these biases are being exposed more and more. Last year it was discovered that Google’s photo app, which automatically applies labels to digital photographs, was classifying people with dark skin as gorillas,[ii] and Nikon’s camera software has had similar issues, misreading Asian people as blinking.[iii] State surveillance systems in the USA have been found to be racially biased, being less equipped to identify the faces of black people, and yet despite this, young black males remain the most convicted group in American society. And lest we forget Microsoft’s teen Twitter bot, Tay, who came up with racist and sexist slurs thanks to her training by other Twitter users.[iv] Perhaps it’s time we started being more vigilant about the education that we provide for AI.

It is equally vital that we challenge our own criteria for judging a computer’s creativity. In his talk Prof. Miller defined creativity as ‘problem solving’ and ‘randomness’.

Miller referred to the famous case of IBM’s Deep Blue, which out-manoeuvred a world champion chess player, as an illustration of this type of creativity. Undoubtedly this was a major leap forward for AI, a confident tick in the ‘PASS’ box for the Turing test. However, whether Deep Blue’s success was a display of creativity demands further scrutiny from a feminist perspective.

A number of women in the audience challenged Miller’s definition of creativity as reductive and falsely objective. This view of computer creativity takes the rational premises of science as its criteria. However, the scientific establishment was largely formulated by men, for men. It is a social construct and therefore subject to the same biases as other social establishments, an argument that is made by Feminist Standpoint Theory.[v] The fact is that most empirical, scientific methods are constructed under conditions of hetero-normative masculinity.

I am on rocky terrain here so let me clarify. I don’t propose for one second that women are any less able to work within this framework or that they haven’t contributed (often unacknowledged) to the culture of science; but nonetheless, male minds, in male-dominated institutions have set the terms of this definition of creativity and it is critical that we think beyond it.

The problem is that construing creativity in objective terms disembodies it. It reinforces Cartesian duality by regarding creativity as a domain of the mind, distinct from the body. But could the body not be construed as a formative presence in the creative process, indistinguishable from thought? For women, whose bodies have historically been at the forefront of psychological and social identity, there is a need for alternatives to the disinterested scientific model of thought production associated with these masculinized institutions.

Can we really remove creativity from an embodied, subjective, emotive experience? To do so certainly makes it easier to quantify scientifically, but I am skeptical as to whether it provides a holistic picture of the processes by which we come to think, act and practice in original ways. In fact I am not alone: many consciousness studies, for instance at the Sackler Centre for Consciousness Science, regard embodiment as a crucial element and attempt to incorporate it into their experiments.[vi]

To remove sensation from the creative act is, in my view, to lobotomize it. After all we are all agents within our environment, influencing and influenced by everything around us. Of course, I realize that as an art historian my viewpoint is unsurprising.

So, to return to the question of whether computers are creative. Creativity is an important facet of epistemology and the criteria used to define creativity are crucial to understanding how knowledge is attained. Whether or not a computer can be creative is not just a matter of it doing something that we humans find unexpected. It is a case of analyzing that unexpected act in relation to its training and environment, its position as an agent that acts and is acted upon— its total presence. Judged by these measures, I think the computer needs more time before it can truly be called creative.

What we perhaps have to focus on is our role as educators and critics of these nascent ‘beings’. We have a duty to stop regarding them as mere vessels and begin investing them with the ethical and social responsibilities that we expect of human beings, providing the tools that inspire a more nuanced approach to gender, race, ethnicity and class relations.

—————————————————————————————————————————————————–

[i] An example of this kind of creativity could be Google’s ability to translate between language pairs it has never seen. https://www.engadget.com/2016/11/24/google-ai-translate-language-pairs-it-has-never-seen/

[ii] http://blogs.wsj.com/digits/2015/07/01/google-mistakenly-tags-black-people-as-gorillas-showing-limits-of-algorithms/

[iii] http://content.time.com/time/business/article/0,8599,1954643,00.html

[iv] https://techcrunch.com/2016/03/24/microsoft-silences-its-new-a-i-bot-tay-after-twitter-users-teach-it-racism/

[v] http://www.iep.utm.edu/fem-stan/
