Creativity Theatre for Demonstrable Computational Creativity


Simon Colton,1,2 Jon McCormack,2 Michael Cook1 and Sebastian Berns1
1 Game AI Group, EECS, Queen Mary University of London, UK
2 SensiLab, Faculty of IT, Monash University, Melbourne, Australia

Abstract

While the quality of computationally generated artefacts continues to improve, some people still find it difficult to accept software as being creative. To help address this issue, we introduce the notion of creativity theatre, whereby computational creativity systems demonstrate their creative behaviours, not only for the purpose of producing valuable artefacts, but also to heighten the sense that observers have of them being creative. We present an approach to this whereby an entirely separate AI system controls a casual creator app, which is normally used as a creativity support tool by people. We describe the ‘Can You See What I Can See?’ installation, which performs such creativity theatre, and describe its operation at a recent open house event.

Introduction and Motivation

One of the accepted definitions of the field of Computational Creativity is given in (Colton and Wiggins 2012) as:

The philosophy, science and engineering of computational systems which, by taking on particular responsibilities, exhibit behaviours that unbiased observers would deem to be creative.

It is fair to say that most creative AI systems do not explicitly exhibit behaviours at all. That is, the processing they undertake in creating artefacts is often behind the scenes, and behaviours are either reverse-engineered (or guessed at) when people evaluate how the software has made its creations, or are read from technical papers describing how the software works. This lack of easily explainable and discernible behaviours makes it difficult for non-experts to independently assess the contribution particular systems/projects make towards advancing the field according to the definition above.
One way to partially improve the situation is for software to frame its work (Charnley, Pease, and Colton 2012) by outputting text which describes its motivations, how it makes artefacts, its evaluation of its products and processes, etc. A survey of framing in computational creativity research is given in (Cook et al. 2019). Another possibility is for creative software to more deliberately exhibit behaviours in a real-time fashion during its creative production. One way of achieving this is via anthropomorphisation, i.e., giving the software abilities which mimic human actions, even if they are not strictly required to create artefacts. For instance, in some installations, the Continuator music generation system (Pachet 2003) moves the keys of a real piano keyboard.

In general, an advantage of embedding a creative AI system in a robotic platform is such anthropomorphisation, as well as a clear separation of the creative behaviours of the system from the behaviours of the media being employed. For example, in robotic painting projects such as those in (Lindemeier et al. 2015) and (Tresset and Leymarie 2013), elements such as the robotic arm and camera mimic the hands and eyes of people, and as such, we can project particular behaviours onto them. These behaviours are separate from those of the analogue media (paints, brushes, pens, paper, etc.) that the robots employ. Similarly, in the You Can’t Know my Mind installation (Colton et al. 2015), The Painting Fool software makes paint strokes on-screen via a simulation of a hand holding a paintbrush. This anthropomorphisation enables viewers to separate The Painting Fool’s decision-making system from its non-photorealistic rendering system. In artistic situations, such human-like physicality (whether real or simulated) can help audiences to project more nuanced behaviours onto creative AI systems, e.g., curiosity in (Gemeinboeck and Saunders 2010).
In projects like the ones described above, machines stage performances which, among other things, help express their creativity. To capture this notion, we introduce, motivate and explore below the notion of creativity theatre for such performances. We then describe a project where creative behaviours are explicitly foregrounded, with a separate AI system controlling a generative system. We conclude with a recap and details of future work, including an art installation based on the work presented here.

Creativity Theatre

The term security theatre was introduced in (Schneier 2003) to describe situations where security countermeasures are enacted to provide the feeling of improved security, while doing little or nothing to achieve it. The theatrics are intended to comfort members of the public and to serve as a warning to potentially malicious agents. With airport security as a particular focus, supporters of heightened security highlight the many times potential disasters have been averted, but critics argue that the measures aren’t effective, degrade passenger experience and have many unintended consequences, both in economic terms and with respect to more fatalities, for instance through increased car travel.

We can generalise the idea of pointedly and visually enacting a scenario, seemingly with a particular purpose, but also with an aim of changing public perception about some topic. Whether a particular type of enactment achieves its proposed purpose, and/or changes perception, should be research questions subject to experimental evaluation in a relevant domain. We are interested here in how AI systems can get to this experimental stage with respect to issues of computational creativity, i.e., how generative software can perform a form of creativity theatre in order to potentially heighten public perception of it being creative.
Being watched while being creative is, under most circumstances, not necessary for making a particular artefact, idea or process. Moreover, due to shared educational experience, people don’t necessarily need to show how they work in order for others to project aspects of creativity onto them. That is, we understand enough about the painting/design/composing/writing/ideation process to know that a person’s activities won’t be that much different to our own, even though they may be a virtuoso in their field (Pachet 2012). Moreover, there may be issues in such demonstrations demystifying the creative act, given that people tend to want to celebrate creative individuals as being special, and seeing them at work may normalise their behaviours.

Notwithstanding these points, the practice of creativity theatre in human endeavours is commonplace. Often, for instance, visual artists go out of their way to project aspects of their creativity on camera, e.g., Pablo Picasso painting on transparent glass. Here, there is clear theatricality added to their standard art-making techniques and in post-production of the films which promote their creativity. Other forms of art production have established theatrical outreach routes focused on creativity, with poetry slams, rapping competitions, improvisational theatre and musical improvisation being obvious examples. Watching people be creative is enjoyed as entertainment, as well as for enlightenment and inspiration. There are numerous films, online streaming channels, and radio, television, online and in-print series following creative people. Streaming creative work has become particularly popular, with thousands of channels dedicated to live creative work on popular streaming services. Moreover, interviews with creative people are published, with their creative practices discussed and dramatised at length, e.g., (Peppiatt 2012).
In the artistic performances themselves, and in third-party reports thereof, there are often theatrical renditions of people being creative, with artificial elements of drama introduced and commonly juxtaposed with more mundane elements of production, which act as counterpoint. Further analysis reveals other commonalities, including the following:

• A sense of purpose in creating something new, albeit with progress often via exploratory and unexpected routes.
• Some unpredictability via improvisation and adaptation in the actions of the creative performer.
• Elements of virtuosity, presumably with the intention of adding to the feeling that the artist is special.

With such entertaining performances, the purpose is often to increase public perception of the performers’ creativity, rather than to produce a work of value. While it is not certain whether this actually works, the intention is clear.

In the context of computational creativity, implementations and installations are rarely conceived to promote the creativity of the system. Partly due to this, members of the public – who rarely read the technical papers describing AI systems – have generally formed their impressions in terms of what they see (or, in fact, don’t see), i.e., black box systems explicitly programmed by a person to undertake often simple tasks. In this context, it’s not surprising that arguments in favour of the creativity of an AI system often fall flat. Hence, the public are not being fully informed of advances in computational creativity, which may degrade their evaluation and adoption of the artefacts produced (Colton 2008). We therefore suggest that computational creativity systems be developed which can appropriately demonstrate their creative behaviours in a theatrical way, and we describe such a system in the next section.
From Casual to Computational Creativity

Casual creators are creativity support tools where user enjoyment is prized over productivity, fine-detailed control or the quality of output (Compton and Mateas 2015). They often have a generative element, and normally offer a straightforward user interface with instant and fun feedback that enables a space of novel artefacts to be searched rapidly and fluently. In many respects, their ease of use and rapid production of artefacts makes them ideal targets for third-party AI systems to control in order to explicitly exhibit creative behaviours for creativity theatre. Moreover, as casual creators are for human use, if people feel (somewhat) creative themselves while using the app, they may project notions of creativity onto a separate AI system controlling the app in similar ways, if the separation is properly communicated.

To explore the notion of creativity theatre empowered by casual creators, we implemented an AI system on top of the Art Done Quick casual creator app described in (Colton et al. 2020). This app employs a particle-based image generation approach, where mathematical functions initialise, move and impose colours on particles, so that rendering shapes at particle positions in the appropriate colours produces an image. The functions, in addition to some rendering parameters, constitute a genome, with the rendered image being the phenome. In overview, users make decorative imagery with the app via two main interfaces, as depicted in figure 1: (i) a sheet interface where a space of images can be explored through random generation and evolution, and (ii) an edit interface, where a single image can be altered. Our long-term aim is to test whether creativity theatre encourages people to project notions of creativity onto an AI system controlling a casual creator app.
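As a purely illustrative sketch of this genome/phenome idea, the following treats a genome as a triple of functions that initialise, move and colour particles, and a phenome as the set of coloured points a renderer would draw. The function forms and names are our assumptions for illustration, not Art Done Quick’s actual implementation.

```python
import math
import random

def make_genome(seed):
    """Hypothetical genome: a triple of functions (init, move, colour).
    The parameters a, b, c stand in for the evolvable numeric content."""
    rng = random.Random(seed)
    a, b, c = (rng.uniform(0.5, 2.0) for _ in range(3))
    init = lambda i: (math.sin(a * i), math.cos(b * i))        # starting position of particle i
    move = lambda x, y, t: (x + 0.01 * math.sin(c * t + y),    # per-step position update
                            y + 0.01 * math.cos(c * t + x))
    colour = lambda x, y: (int(127 * (x + 1)) % 256,           # map final position to an RGB triple
                           int(127 * (y + 1)) % 256,
                           128)
    return init, move, colour

def render(genome, n_particles=100, steps=50):
    """Phenome: the list of (x, y, rgb) points a renderer would draw shapes at."""
    init, move, colour = genome
    points = []
    for i in range(n_particles):
        x, y = init(i)
        for t in range(steps):
            x, y = move(x, y, t)
        points.append((x, y, colour(x, y)))
    return points

points = render(make_genome(seed=42))
```

Under this sketch, evolution amounts to mutating or crossing over the numeric parameters (here a, b and c) and re-rendering to obtain a new phenome.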
The controller system we implemented for Art Done Quick was kept entirely separate from the base casual creator, i.e., the controller simulates taps, double taps and dragging on the iOS touchscreen as a user would, rather than accessing the relevant subroutines directly. Moreover, we made sure that the controller is not able to access any information that people cannot see. For public engagement purposes, with these measures, we are able to properly communicate the difference between Art Done Quick and the controller, and to emphasise similarities in how the controller and people use the casual creator app.

Figure 1: Screenshots of Art Done Quick (iPhone version): (i) randomly generated images in the sheet interface (ii) one image selected for high-res rendering (iii) creating a montage using the edit interface (iv) adding a special effect.

To visually highlight these points, we added an animated gloved hand on top of Art Done Quick, as portrayed in figure 2. The hand taps, double taps and drags with one (simulated) finger, and pinches with two fingers, in a similar manner and speed to the gestures that people employ when using the app.

Drawing on the above analysis of human creativity theatre, and noting that our goal is for people to watch the controller in action, we focused on an over-arching purpose for the AI control of Art Done Quick, namely to produce visual puzzles for people to try and solve. In particular, the controller uses the ResNet50 image classification system (Krizhevsky, Sutskever, and Hinton 2012) to project content predictions onto the images achievable through Art Done Quick. For some images, ResNet will have high confidence (of 80% or more) that they contain a particular object (such as a guitar) or a scene (such as a seascape).
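The 80% confidence gate just described can be sketched minimally as follows; the function and threshold names are ours for illustration, and in the real system the predictions come from ResNet50 rather than a hand-built dictionary.

```python
PUZZLE_THRESHOLD = 0.80  # the paper's "80% or more" confidence gate

def check_for_puzzle(predictions):
    """predictions maps category labels to classifier confidences in [0, 1],
    e.g. the softmax output of an image classifier. Returns the category to
    pose as a 'Can you see...?' puzzle, or None if nothing clears the gate."""
    label, confidence = max(predictions.items(), key=lambda kv: kv[1])
    return label if confidence >= PUZZLE_THRESHOLD else None

print(check_for_puzzle({"guitar": 0.86, "seascape": 0.07}))  # guitar
print(check_for_puzzle({"guitar": 0.41, "seascape": 0.35}))  # None
```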
However, such images are rare, and finding them for presentation to users, accompanied with the question “Can you see...?”, was chosen as the overall purpose for the creativity theatre exercise. Within this context, we identified and implemented in the controller AI system simulations of the following subset of ways in which human users interact with Art Done Quick:

• Tapping an empty cell on the sheet interface, which fills it with a randomly generated image.
• Zooming and scrolling around the sheet to view image sets, then finding and inspecting ones of interest.
• Choosing and double tapping an image to produce 8 variations of it via genome mutation.
• Deleting images which are too similar to existing ones or undesirable in other ways.
• Editing a particular image through filters, transforms and collaging options available in the edit interface.

To implement these controlling actions, we employed a modified version of behaviour trees, which are normally employed to control non-player characters (NPCs) in videogames (Marcotte and Hamilton 2017). The first set of behaviours can be described as mundane, and includes tapping on empty cells to produce 25 randomly generated images. If the total number of images runs past 500, some are deleted to reduce the number to this threshold. To do this, images are k-means clustered using the analyses from a headless application of ResNet, then deletions are made evenly over the clusters.

Figure 2: A creativity theatre performance: gorging (tapping empty cells to add random images); exploring (scrolling to and then double tapping images to produce variations); looking (pinching to zoom out, then waiting); experimenting (here, tweaking the image with a glow filter); presenting the visual puzzle (asking if people can see a digital clock).

This may seem to break the maxim of not allowing the controller information (ResNet analyses)
that human users don’t have, but the machine vision functionality isn’t part of Art Done Quick, and the behaviour can be communicated as the controller using simulated eyes.

If at any stage an image is produced which scores above 80% confidence by ResNet for an image category C, then the image is generated at high resolution and presented full-screen with the question: “Can you see...?” Some theatricality is introduced by giving audience members a 15 second pause in which they can try and guess what ResNet predicts the image to contain, after which the controller reveals the answer to the puzzle by changing the question to: “Can you see... a C?”, as in figure 2, presented for a further 15 seconds. The controller keeps a list of categories that it has previously used in the puzzles, and does not repeat the usage of one in a puzzle until at least an hour has passed.

The second set of behaviours can be described as exploratory, and involves the controller hill climbing via mutation. That is, the controller occasionally does a sweep for any images scoring between 40% and 80% ResNet confidence. It chooses the highest scoring of such images and double taps it to produce 8 variations. If any of these improves on the previous score, it is mutated and so on, until either no improvement is seen, or a variation is produced which scores 80% or more, which – as before – is presented full-screen as a visual puzzle. The controller deletes any variation image that it doesn’t hill climb with.

The final set of behaviours can be described as directed, and involves the controller tweaking an image with ResNet confidence between 60% and 80% for a particular category.
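The exploratory hill climbing described above can be sketched with a stand-in scorer and mutator; in the installation, scoring is ResNet confidence and mutation is a double tap producing eight genome variations, so everything named here is a hypothetical simplification.

```python
import random

def hill_climb(image, score, mutate, target=0.80, n_variations=8, rng=None):
    """Greedy hill climbing: repeatedly take the best of n_variations mutants,
    stopping when the target confidence is reached (present as a puzzle) or
    when no mutant improves on the current best (give up)."""
    rng = rng or random.Random(0)
    best, best_score = image, score(image)
    while best_score < target:
        variations = [mutate(best, rng) for _ in range(n_variations)]
        candidate = max(variations, key=score)
        if score(candidate) <= best_score:
            return best, best_score        # no improvement: abandon this line
        best, best_score = candidate, score(candidate)
    return best, best_score                # target reached: present as a puzzle

# Toy usage: the "image" is a number, its score is itself, and each
# mutation nudges it upwards, so the climb reaches the 0.80 target.
best_img, best_conf = hill_climb(0.5, lambda x: x, lambda x, rng: x + 0.05)
```

The directed tweaking behaviour follows the same greedy pattern, with systematic edit-interface tweaks standing in for random mutation.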
To do this, the controller invokes the edit functionality in Art Done Quick and systematically changes colour filters, texture overlays, lighting effects and liquifying transforms, with an example shown in figure 2. For each tweak, it records the ResNet confidence and returns to the highest scoring image for subsequent rounds of tweaking. If the tweaking ever produces an image of more than 80% confidence, the process stops and the visual puzzle is presented.

Some unpredictability is introduced via random ordering of behaviours, but there is still too much predictable repetition. The above percentages were determined in advance, as they sometimes – but not always – lead to visual puzzles.

For audience members to get a sense of what is going on, a banner is shown at the top of the screen with single-word descriptions of the behaviour the controller is exhibiting, as portrayed in figure 2. We added theatricality by using provocative descriptions such as ‘gorging’, and by making the controller take longer than necessary, to keep actions on a human scale. For instance, it has a ‘looking’ behaviour, where it simulates pinching on-screen to zoom out and view all the images on the sheet, followed by an artificial pause of 15 seconds to convey the idea that it is surveying its creations, as portrayed in figure 2.

For a technical evaluation, the controller and Art Done Quick were run on an iPad Pro, with the screen mirrored onto a large screen over a period of six hours, as part of an open-house evening at SensiLab. The performance ran without failure and produced more than 100 visual puzzles, which appeared regularly enough to hold people’s attention.

Conclusions and Future Work

We have introduced the notion of creativity theatre as a loose analogy to security theatre and as a tool for the demonstration of creative behaviours in a computational creativity setting.
With an initial motivation and analysis of human creativity theatre, we identified some common aspects, including purposeful creation. We suggested adding secondary controlling AI systems to casual creator apps as a way of achieving such theatricality. It is not clear yet whether such secondary control will enhance the public perception of creativity in software, and we plan experiments to test this hypothesis. However, there is reason to be optimistic, given that casual creators level the playing field, i.e., AI controllers can produce similar artefacts in similar ways to people.

We are currently enhancing the controller to use more Art Done Quick functionality, including clustering, collaging and crossover of images (Colton 2020). Certain differences between behaviour tree usage for NPC control and usage for casual creation have become clear. We are currently developing a theory of creativity behaviour trees in which unpredictability, purpose and virtuosity, among other aspects of human behaviour, are modelled. We expect this to enable us to implement more sophisticated elements of drama, such as the controller seemingly changing its mind, expressing an emotional arc, and providing a commentary on what it is doing. These new behaviours will be developed with reference to various evaluation methodologies, e.g., (Colton 2008), and exercises, e.g., (Kantosalo and Riihiaho 2019).

The installation described above will be exhibited in 2021 at the VisionarIAs art exhibition in the Etopia Centre, the theme of which is creative AI enhanced by machine vision. Titled ‘Can you see what I can see?’, we hope the installation will encourage people to realise that, while machine vision systems are generally developed to see as people do, they can also see differently and act as a second pair of eyes in an artistic setting. The installation will include a quiet area with seating where visitors will be able to play with Art Done Quick on its own.
We hope this will encourage some visitors to question whether it is appropriate to project notions of creativity onto the controller if they feel they are being creative themselves, given that the software is doing very similar things to them.

Acknowledgements

We would like to thank the anonymous reviewers for much insightful feedback, some of which influenced this paper and all of which will be addressed in a longer account. We would also like to thank Marilia Bergamo for her design work for, and insights into, the SensiLab open-house installation.

References

Charnley, J.; Pease, A.; and Colton, S. 2012. On the notion of framing in computational creativity. In Proc. ICCC.
Colton, S., and Wiggins, G. 2012. Computational creativity: The final frontier? In Proc. ECAI.
Colton, S.; Halskov, J.; Ventura, D.; Gouldstone, I.; Cook, M.; and Perez Férrer, B. 2015. The Painting Fool sees! New projects with the automated painter. In Proc. ICCC.
Colton, S.; McCormack, J.; Berns, S.; Petrovskaya, E.; and Cook, M. 2020. Adapting and enhancing evolutionary art for casual creation. In Proc. EvoMusArt Conf.
Colton, S. 2008. Creativity versus the perception of creativity in computational systems. In Proc. AAAI Spring Symp. on Creative Systems.
Colton, S. 2020. Casual creation, curation, captioning, clustering and crossover. In Proc. Casual Creators Workshop, ICCC.
Compton, K., and Mateas, M. 2015. Casual creators. In Proc. ICCC.
Cook, M.; Colton, S.; Pease, A.; and Llano, T. 2019. Framing in computational creativity – a survey and taxonomy. In Proc. ICCC.
Gemeinboeck, P., and Saunders, R. 2010. Zwischenräume: The machine as voyeur. In Proc. Conf. on Transdisciplinary Imaging at the Intersections between Art, Science & Culture.
Kantosalo, A., and Riihiaho, S. 2019. Experience evaluations for human–computer co-creative processes. Connection Science 31(1).
Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. ImageNet classification with deep convolutional neural networks.
In Advances in Neural Information Processing Systems.
Lindemeier, T.; Metzner, J.; Pollak, L.; and Deussen, O. 2015. Hardware-based non-photorealistic rendering using a painting robot. Computer Graphics Forum 34(2).
Marcotte, R., and Hamilton, H. 2017. Behavior trees for modelling artificial intelligence in games. The Computer Games Journal 6(7).
Pachet, F. 2003. The Continuator: Musical interaction with style. Journal of New Music Research 32(3).
Pachet, F. 2012. Musical virtuosity and creativity. In McCormack, J., and d’Inverno, M., eds., Computers and Creativity. Springer.
Peppiatt, M. 2012. Interviews with Artists. Yale University Press.
Schneier, B. 2003. Beyond Fear: Thinking Sensibly about Security in an Uncertain World. Copernicus Books.
Tresset, P., and Leymarie, F. 2013. Portrait drawing by Paul the Robot. Computers and Graphics 37.

Proceedings of the 11th International Conference on Computational Creativity (ICCC’20), ISBN: 978-989-54160-2-8