Paranoid Transformer: Reading Narrative of Madness as Computational Approach to Creativity

Yana Agafonova, Higher School of Economics, Saint-Petersburg
Alexey Tikhonov, Yandex, Berlin
Ivan P. Yamshchikov, Max Planck Institute for Mathematics in the Sciences, Leipzig
[email protected]

Abstract

This paper revisits the receptive theory in the context of computational creativity. It presents a case study of the Paranoid Transformer, a fully autonomous text generation engine whose raw output can be read as the narrative of a mad digital persona without any additional human post-filtering. We describe the technical details of the generative system, provide examples of its output, and discuss the impact of receptive theory, chance discovery, and the simulation of a fringe mental state on the understanding of computational creativity.

Introduction

Studies of computational creativity in the field of text generation commonly aim to represent a machine as a creative writer. Although text generation is broadly associated with a creative process, it is based on linguistic rationality and the common sense of general semantics. In (Yamshchikov et al. 2019), the authors demonstrate that if a generative system learns a better representation of such semantics, it tends to perform better in terms of human judgment. However, since an averaged opinion could hardly be a beacon for human creativity, is it a feasible yardstick for computational creativity?

The psychological perspective on human creativity tends to apply statistics and generalizing metrics to understand its object (Rozin 2001; Yarkoni 2019), so creativity ends up being introduced through particular measures, which is epistemologically suicidal for aesthetics. Since both creativity and aesthetics rest on judgemental evaluation and individual taste, which in turn depend on many factors (Hickman 2010; Melchionne 2010), the concept of perception has to be taken into account when talking about computational creativity.

The variable that is often underestimated in the very act of meaning creation is the reader herself. Although computational principles are crucial for text generation, the importance of the reading approach to generated narratives deserves to be revisited. What is the role of the reader in a generative computational narrative? This paper tries to address these fundamental questions by presenting an exemplary case study.

The epistemological disproportion between common sense and the irrationality of the creative process became the fundamental basis of this research. It encouraged our interest in reading a generated text as a narrative of madness. Why do we treat machine texts as if they were primitive maxims or well-known common knowledge? What if we read them as narratives with the broadest potentiality of meaning, like the insane notes of asylum patients? Would this approach change the text generation process?

In this paper, we present the Paranoid Transformer, a fully autonomous text generator that is based on a paranoiac-critical system and aims to change the approach to reading generated texts. The critical statement of the project is that the absurd mode of reading and evaluating generated texts enhances and changes what we understand as computational creativity. Another critical aspect of the project is that the Paranoid Transformer's resulting text stream is fully unsupervised.
This is a fundamental difference between the Paranoid Transformer and the vast majority of text generation systems presented in the literature, which rely on human post-moderation, i.e., cherry-picking.

Originally, the Paranoid Transformer was presented at the National Novel Generation Month contest (NaNoGenMo 2019, https://github.com/NaNoGenMo/2019) as an unsupervised text generator that can create narratives in a specific dark style. The project has resulted in a digital mad writer with a highly contextualized personality, which is of crucial importance for the creative process (Veale 2019).

Related Work

There is a variety of works related to the generation of creative texts, such as poems, catchy headlines, conversations, and texts in particular literary genres. Here we would like to discuss a certain gap in the field of creative text generation studies and draw attention to a specific reading approach that can lead to more intriguing results in terms of computational creativity.

The interest in text generation mechanisms has been growing rapidly since the arrival of deep learning, and researchers approach text generation from various angles. For example, (van Stegeren and Theune 2019) and (Alnajjar, Leppänen, and Toivonen 2019) study generative models that could produce relevant headlines for news publications. A variety of works study the stylization potential of generative models, either for prose, see (Jhamtani et al. 2017), or for poetry, see (Tikhonov and Yamshchikov 2018a; 2018b).

Generative poetry dates back as far as (Wheatley 1965), along with other early generative mechanisms, and has various subfields at the moment. The generation of poems can follow a specific literary tradition, see (He, Zhou, and Jiang 2012; Yan et al. 2016; Yi, Li, and Sun 2017); it can focus on the generation of topical poetry (Ghazvininejad et al. 2016); or it can center around stylization that targets a certain author (Yamshchikov and Tikhonov 2019) or a genre (Potash, Romanov, and Rumshisky 2015). For a taxonomy of generative poetry techniques, we address the reader to (Lamb, Brown, and Clarke 2017).

The symbolic notation of music can be regarded as a subfield of text generation, and the research of computational potential in this context has an exceptionally long history. To some extent, it holds a designated place in the computational creativity hall of fame. Indeed, at the very start of computer science, Ada Lovelace already entertained the thought that an analytical engine could produce music on its own. (Menabrea and Lovelace 1842) state: "Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent." For an extensive overview of music generation mechanisms, we address the reader to (Briot, Hadjeres, and Pachet 2019).

One has to mention a separate domain related to different aspects of 'persona' generation. These could include relatively well-posed problems, such as the generation of biographies out of structured data, see (Lebret, Grangier, and Auli 2016), or open-ended tasks for the personalization of dialogue agents, dating back to (Weizenbaum 1966).
With the rising popularity of chat-bots and the arrival of deep learning, the area of persona-based conversation models (Li et al. 2016) is growing by leaps and bounds. The democratization of generative conversational methods provided by open-source libraries such as (Burtsev et al. 2018; Shiv et al. 2019) fuels further advancements in this field.

However, the majority of text generation approaches treat generation itself as the principal value of such algorithms, which pushes the very concept of computational creativity into the background. Another major challenge is the presentation of the algorithms' output. The vast majority of results on natural language generation either do not imply that the generated text has any artistic value, or expect certain post-processing of the text to be done by a human supervisor before the text is presented to the actual reader. We believe that the value of computational creativity can be restored by shifting the researcher's attention from generation to the process of framing the algorithm (Charnley, Pease, and Colton 2012). We show that such a shift is possible, since the generated output of the Paranoid Transformer does not need any additional laborious manual post-processing.

The most reliable framing approaches attempt to clarify the algorithm by providing context, describing the process of generative acts, and reasoning about the generative decisions (Cook et al. 2019). In this paper, we suggest that such an unusual framing approach as the obfuscation of the produced output could be quite profitable in terms of increasing the number of interpretations and enriching the creative potentiality of the generated text.

An obfuscated interpretation of the algorithm's output methodologically intersects with the literary theory that treats the reader as the key figure responsible for meaning. In this context, we aim to overcome the disciplinary borderline and create bisociative knowledge, which develops the fundamentals of computational creativity (Veale and Cardoso 2019). This also goes in line with the ideas of (Ohsawa 2003; Abe 2011): obfuscation becomes a mode of reading generated texts that the reader either commits to voluntarily or is externally motivated to switch to. This commitment implies a chance discovery of potentially rich associations and extensions of possible meaning.

How exactly can literary theory contribute to computational creativity in terms of text generation mechanisms? As far as the text generation process implies an incremental interaction between neural networks and a human, it inevitably presupposes a critical reading of the generated text. This reading contributes a great deal to the final result and to the comprehensibility of artificial writing. In literary studies, the process of meaning creation is broadly discussed by hermeneutical philosophers, who treated meaning as a developing relationship between the message and the recipient, whose horizons of expectations are constantly changing and enriching the message with new implications (Gadamer 1994; Hirsch 1967).

The importance of reception and its difference from the author's intentions was convincingly demonstrated and articulated by the so-called Reader-response theory, a particular branch of the Receptive theory that deals with verbalised receptions.
As Stanley Fish, one of the principal authors of the approach, puts it, the meaning does not reside in the text but in the mind of the reader (Fish 1980). Thus, any text may be interpreted differently, depending on the reader's background, which means that even an absurd text could be perceived as meaningful under specific circumstances. The same concept was described by (Eco 1972) as so-called aberrant decoding and implies that the difference between intention and interpretation is a fundamental principle of cultural communication. It is often a shift in the interpretative paradigm that causes remarkable works of art to be dismissed at first, as happened with Picasso's Les Demoiselles d'Avignon, which was not recognized by the artistic society and was not exhibited for nine years after its creation.

One of the most recognizable literary abstractions in terms of creative potentiality is the so-called 'romantic mad poet', whose reputation was historically built on the idea that genius would never be understood (Whitehead 2017). Madness in terms of cultural interpretation is far from its psychiatric meaning and has more in common with the historical concept of a marginalized genius. The mad narrator was chosen as a literary emploi for the Paranoid Transformer to extend the interpretative potentiality of the original text: the text could be imperfect in formal terms, yet it could be attributed to an individual with an exceptional understanding of the world, which gives this individual more linguistic freedom of self-expression and gives the reader more freedom in interpreting her messages. The anthropomorphization of the algorithm makes the narrative more personal, which is as important as the personality of a recipient in the process of meaning creation (Dennett 2014). The self-expression of the Paranoid Transformer is enhanced by a nervous handwriting that amplifies the effect and gives more context for interpretation. In this paper, we show that treating the text generator as a romantic mad poet gives more literary freedom to the algorithm and generally improves the text generation. The philosophical basis of our approach is derived from the idea of creativity as an act of transgressing the borderline between conceptual realms. Thus, the dramatic conflict between computed and creative text could be resolved by extending the interpretative horizons.

Model and Experiments

The general idea behind the Paranoid Transformer project is to build a 'paranoid' system based on two neural networks. The first network (Paranoid Writer) is a GPT-based (Radford et al. 2019) fine-tuned conditional language model, and the second one (Critic subsystem) uses a BERT-based classifier (Devlin et al. 2019) that works as a filtering subsystem. The critic selects the 'best' texts from the stream that the Paranoid Writer produces and filters out the ones it deems useless. Finally, an existing handwriting synthesis neural network implementation is applied to generate a nervous handwritten diary in which the degree of shakiness depends on the sentiment strength of a given sentence. This final touch further immerses the reader into the critical process and enhances the personal interaction of the reader with the final text. Shaky handwriting frames the reader and, by design, sends her on the quest for meaning.
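Before turning to the individual subsystems, the overall flow can be summarized in a short sketch. The code below is a minimal illustration, not the project's actual code: the stub functions generate_stream, passes_heuristics, critic_score, and shakiness are hypothetical placeholders for the fine-tuned GPT writer, the rule-based filters, the BERT critic, and the sentiment-to-shakiness mapping described in the following subsections.

```python
import random

# Stubs standing in for the subsystems described below; all names are
# hypothetical placeholders rather than the project's actual API.
def generate_stream():            # the fine-tuned conditional GPT writer
    samples = ["the doll is used only when he remains private.",
               "and i don't want the truth. not for an hour."]
    while True:
        yield random.choice(samples)

def passes_heuristics(chunk):     # the rule-based rejection filters
    return chunk.strip().endswith(".")

def critic_score(chunk):          # the BERT GOOD/BAD classifier
    return random.random()

def shakiness(chunk):             # sentiment strength -> handwriting shakiness
    return 0.5

def paranoid_diary(n_entries=3, threshold=0.5):
    """Run the generate -> filter -> criticize -> render loop."""
    entries = []
    for chunk in generate_stream():
        if passes_heuristics(chunk) and critic_score(chunk) >= threshold:
            entries.append((chunk, shakiness(chunk)))
        if len(entries) == n_entries:
            break
    return entries

print(paranoid_diary())
```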
Generator Subsystem

The first network, the Paranoid Writer, uses the OpenAI GPT (Radford et al. 2019) architecture in the implementation by huggingface (https://github.com/huggingface/transformers). We used a publicly available model that was pre-trained on the large fiction BooksCorpus dataset, with approximately 10K books and 1B words.

The pre-trained model was fine-tuned on several additional handcrafted text corpora, which altogether comprised approximately 50Mb of text. These texts included:

• a collection of crypto texts (Crypto Anarchist Manifesto, Cyphernomicon, etc.);
• a collection of fiction books by such cyberpunk authors as Dick, Gibson, and others;
• non-cyberpunk authors with a particular affinity to fringe mental prose, for example, Kafka and Rumi;
• transcripts and subtitles from some cyberpunk movies and series, such as Blade Runner;
• several thousand quotes and fortune cookie messages collected from different sources.

During the fine-tuning phase, we used special labels for conditional training of the model:

• QUOTE for any short quote or fortune, LONG for everything else;
• CYBER for cyber-themed texts and OTHER for everything else.

Each text got two labels: for example, LONG+CYBER for the Cyphernomicon, LONG+OTHER for Kafka, and QUOTE+OTHER for fortune cookie messages. Note that there were almost no texts labeled QUOTE+CYBER, just a few nerd jokes. The idea of such conditioning and the choice of texts for fine-tuning were rooted in the principle of reading a madness narrative discussed above. The obfuscation principle manifests itself in the fine-tuning on short aphoristic quotes and ambivalent fortune cookies; it aims to strengthen the motivation of the reader and to give her additional interpretative freedom. Instrumentally, the choice of texts was based on two fundamental motivations: we wanted to simulate a particular fringe mental state, and we were specifically aiming at short diary-like texts to be generated in the end. It is well known that modern state-of-the-art generative models cannot yet support longer narratives, but they can generate several consecutive sentences connected by one general topic. The QUOTE/LONG label allowed us to control the model and to target shorter texts during generation. Such short ambivalent texts could subjectively be more intense. At the same time, the inclusion of longer texts in the fine-tuning phase allowed us to shift the vocabulary of the model even further toward the desirable 'paranoid' state. We were also aiming at some proxy of 'self-reflection' that would be addressed as a topic in the resulting 'diary' of the Paranoid Transformer. To push the model in this direction, we introduced the cyber-themed texts. As a result of these two choices, in generation mode, the model was set to generate only QUOTE+CYBER texts.
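A minimal sketch of this conditioning scheme follows. It assumes that the two control labels are simply prepended to each training document as plain tokens and reused as a generation prompt; since the exact injection mechanism is not spelled out above, the helper below is an illustrative assumption rather than the project's actual preprocessing code.

```python
# Sketch of label-conditioned fine-tuning data preparation (assumption:
# the two control labels are prepended to each document as ordinary tokens).

def label_text(text: str, is_short: bool, is_cyber: bool) -> str:
    length_label = "QUOTE" if is_short else "LONG"
    theme_label = "CYBER" if is_cyber else "OTHER"
    return f"{length_label} {theme_label} {text}"

# Toy corpus of (text, is_short, is_cyber); the real data is ~50Mb of prose.
corpus = [
    ("we are god's friends, the golden hands ...", True, False),
    ("full text of the Cyphernomicon ...", False, True),
]
training_lines = [label_text(t, s, c) for t, s, c in corpus]

# At generation time the model is prompted with the rare label combination,
# steering it toward short, cyber-themed, aphoristic output:
prompt = "QUOTE CYBER"
print(training_lines[0])
```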
The raw results were already promising enough:

let painting melt away every other shred of reason and pain, just lew the paint to move thoughts away from blizzes in death. let it dry out, and turn to cosmic delights, to laugh on the big charms and saxophones and fudatron steames of the sales titanium.

we are god's friends, the golden hands on the shoulders of our fears.

do you knock my cleaning table over? i snap awake at some dawn. the patrons researching the blues instructor's theories around me, then give me a glass of jim beam. boom!

However, this was not close enough to any sort of creative process. Our Paranoid Writer had graphomania too. To amend this mishap and improve the resulting quality of the texts, we wanted to incorporate additional automated filtering.

Heuristic Filters

As a part of the final system, we implemented heuristic filtering procedures alongside a critic subsystem. The heuristic filters were as follows (a minimal sketch of these checks is given after the example chunks below):

• reject the creation of new, non-existing words;
• reject phrases with two unconnected verbs in a row;
• reject phrases with several duplicated words;
• reject phrases with no punctuation or with too many punctuation marks.

The application of this script cut the initial text flow into a subsequence of valid chunks, rejecting the pieces that could not pass the filters. Here are several examples of such chunks after heuristic filtering:

a slave has no more say in his language but he has to speak out!

the doll has a variety of languages, so its feelings have to fill up some time of the day-to-day journals. the doll is used only when he remains private. and it is always effective. leave him with his monk-like body.

a little of technique on can be helpful.
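The sketch below approximates these four checks in plain Python. The original implementation is not published, so the vocabulary lookup, the verb-adjacency test based on a word list, and the numeric thresholds are all illustrative assumptions.

```python
import re
from collections import Counter

def passes_heuristics(chunk: str, vocabulary: set, verbs: set) -> bool:
    """Approximate the four rejection rules described above."""
    words = re.findall(r"[a-z']+", chunk.lower())
    if not words:
        return False
    # 1. Reject invented, non-existing words.
    if any(w not in vocabulary for w in words):
        return False
    # 2. Reject two verbs in a row (crudely approximated with a verb list).
    if any(a in verbs and b in verbs for a, b in zip(words, words[1:])):
        return False
    # 3. Reject chunks with heavily duplicated words.
    if Counter(words).most_common(1)[0][1] > 3:
        return False
    # 4. Reject chunks with no punctuation or with too many marks.
    marks = sum(chunk.count(m) for m in ".,!?;:")
    return 0 < marks <= max(1, len(words) // 2)

# Toy usage with a tiny vocabulary; a real run would use a full wordlist.
vocab = {"the", "doll", "is", "used", "only", "when", "he", "remains", "private"}
verbs = {"used", "remains"}
print(passes_heuristics("the doll is used only when he remains private.",
                        vocab, verbs))  # True
```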
To further filter the stream of such texts, we implemented a critic subsystem.

Critic Subsystem

We manually labeled 1,000 generated chunks with binary labels GOOD/BAD. We marked a chunk as BAD in case it was grammatically incorrect or simply too dull or too stupid. The labeling was profoundly subjective. We marked the more disturbing and aphoristic chunks as GOOD, pushing the model even further into the desirable fringe state of paranoia simulation. Using these binary labels, we fine-tuned a pre-trained publicly available BERT classifier (https://github.com/huggingface/transformers#model-architectures) to predict the label of any given chunk. Finally, a pipeline that included the Generator subsystem, the heuristic filters, and the Critic subsystem produced the final results:

a sudden feeling of austin lemons, a gentle stab of disgust. i'm what i'm humans whirl in night and distance. we shall never suffer this. if the human race came along tomorrow, none of us would be as wise as they already would have been.

there is a beginning and an end. both of our grandparents and brothers are overdue. he either can not agree or he can look for someone to blame for his death. he has reappeared from the world of revenge, revenge, separation, hatred. he has ceased all who have offended him.

and i don't want the truth. not for an hour.

The resulting generated texts were already thought-provoking and allowed reading a narrative of madness, but we wanted to enhance this experience and make it more immersive for the reader.

Nervous Handwriting

In order to enhance the personal aspect of the artificial paranoid author, we implemented an additional generative element. Using an implementation (https://github.com/sjvasquez/handwriting-synthesis) of the handwriting synthesis from (Graves 2013), we generated handwritten versions of the generated texts. The bias parameter was used to make the handwriting shakier when the sentiment of the generated text was stronger (a sketch of this mapping is given at the end of this section). Figures 1–3 show several final examples of the Paranoid Transformer diary entries.

Figure 1: Some examples of Paranoid Transformer diary entries. Three entries of varying length.

Figure 1 demonstrates that the length of the entries can differ from several consecutive sentences that convey a longer line of reasoning to a short, abrupt four-word note.

Figure 2: Some examples of Paranoid Transformer diary entries. A longer entry proxying 'self-reflection' and a personalized fringe mental state experience.

Figure 2 illustrates a typical entry of 'self-reflection'. The text explores the narrative of a dream and could be paralleled with a description of an out-of-body experience (Blanke et al. 2004) generated by a predominantly out-of-body entity.

Figure 3: Some examples of Paranoid Transformer diary entries. Typical entries with destructive and ostracised motives.

Figure 3 illustrates typical entries with destructive and ostracised motives. This is an exciting side-result of the model that we did not expect. The motive of loneliness is recurring in the Paranoid Transformer diaries.

It is important to emphasize that the resulting stream of the generated output is available online (https://github.com/altsoph/paranoid_transformer). No human post-processing of the output is performed.
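A sketch of the sentiment-to-shakiness mapping used for the handwritten rendering is given below. It assumes a Graves-style sampling bias in which a lower bias yields messier strokes, and it substitutes a toy lexicon-based sentiment score; the actual sentiment model and numeric range used by the system are not published.

```python
# Sketch of the sentiment-to-shakiness mapping for the handwritten rendering.
# Assumptions: a Graves-style sampling `bias` (lower bias -> messier strokes);
# the toy lexicon and the numeric range are illustrative, not the real values.

EMOTIONAL = {"fear", "death", "revenge", "hatred", "blame", "disgust", "suffer"}

def sentiment_strength(sentence: str) -> float:
    """Crude lexicon-based sentiment strength in [0, 1]."""
    words = [w.strip(".,!?;:") for w in sentence.lower().split()]
    hits = sum(w in EMOTIONAL for w in words)
    return min(1.0, 5.0 * hits / max(len(words), 1))

def handwriting_bias(sentence: str, calm: float = 1.0,
                     shaky: float = 0.15) -> float:
    """Stronger sentiment -> lower sampling bias -> shakier handwriting."""
    return calm - (calm - shaky) * sentiment_strength(sentence)

# The returned value would be passed as the bias of the handwriting synthesizer.
print(handwriting_bias("he has reappeared from the world of revenge, hatred."))
```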
Discussion

In Dostoevsky's "Notes from the Underground" there is a striking idea about madness as a source of creativity and computational explanation as a killer of artistic magic: "We sometimes choose absolute nonsense because in our foolishness we see in that nonsense the easiest means for attaining a supposed advantage. But when all that is explained and worked out on paper (which is perfectly possible, for it is contemptible and senseless to suppose that some laws of nature man will never understand), then certainly so-called desires will no longer exist." (Dostoevsky 1984) The Paranoid Transformer brings forward an important question about the limitations of the computational approach to creative intelligence, whether it belongs to a human or an algorithm. This case demonstrates that creative potentiality and generation efficiency could be considerably influenced by such poorly controlled methods as obfuscated supervision and loose interpretation of the generated text.

Creative text generation studies inevitably strive to reveal fundamental cognitive structures that can explain the creative thinking of a human. The suggested framing approach to machine narrative as a narrative of madness brings forward some crucial questions about the nature of creativity and the research perspective on it. In this section, we are going to discuss the notion of creativity that emerges from the results of our study and reflect on the framing of the text generation algorithm. What does creativity mean in terms of text generation? Is it a cognitive production of novelty or rather a generation of unexpendable meaning? Can we identify any difference in treating human and machine creativity?

In his groundbreaking work, (Turing 1950) pinpoints several crucial aspects of intelligence. He states: "If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll." This starting argument turned out to be prophetic. It pinpoints the profound challenge for the generative models that use statistical learning principles. Indeed, if creativity is something on the fringe, on the tails of the distribution of outcomes, then it is hard to expect a model fitted to the center of the distribution to behave in a way that could be subjectively perceived as creative. The Paranoid Transformer is the result of a conscious attempt to push the model towards a fringe state of proximal madness. This case study serves as a clear illustration that creativity is ontologically opposed to the results of the "Gallup poll."

Another question that raises discussion around computational creativity deals with the highly speculative notion of a self within a generative algorithm. Does a mechanical writer have a notion of self-expression? Considering the wide range of theories of the self (carefully summarized in (Jamwal 2019)), a creative AI generator triggers a new philosophical perspective on this question. Like any human self, an artificial self does not develop independently. Following John Locke's understanding of the self as based on memory (Locke 1860), the Paranoid Transformer builds itself by memorizing the interactive experience with a human; furthermore, it emotionally inherits from its supervising readers who labelled the training dataset of the supervision system. On the other hand, Figure 4 clearly shows the impact of crypto-anarchic philosophy on the Paranoid Transformer's notion of self. One can easily interpret the paranoiac utterance of the generator as a doubt about reading and processing unbiased literature.

Figure 4: "Copyrighted protein fiction may be deemed speculative propaganda," – the authors are tempted to proclaim this diary entry the motto of the Paranoid Transformer.

According to the cognitive science approach, the construction of the self could be revealed in narratives about particular aspects of the self (Dennett 2014). In the case of the Paranoid Transformer, both visual and verbal self-representation result in nervous and mad narratives that are further enhanced by the reader.

Regarding the problem of framing the study of creative text generators, we cannot avoid the question concerning the novelty of the generated results. Does the Paranoid Transformer demonstrate a new result that is different from others in the context of computational creativity? First of all, we can use external validation: at the moment, the Paranoid Transformer's book is being prepared for print. Secondly, and probably more importantly, we can indicate the novelty of the conceptual framing of the study. Since the design and conceptual situatedness influence the novelty of a study (Perišić, Štorga, and Gero 2019), we claim that the suggested conceptual extension of the perceptive horizons of interaction with a generative algorithm can on its own advocate the novelty of the result.

An important question that deals with the framing of text generation results engages the discussion about the possibility of a chance discovery. (Ohsawa 2003) lays out three crucial keys for chance discovery, namely, communication, context shifting, and data mining. (Abe 2011) further enhances these ideas, addressing the issue of curation and claiming that curation is a form of communication. The Paranoid Transformer is a clear case study that is rooted in Ohsawa's three aspects of chance discovery. Data mining is represented by the choice of data for fine-tuning and the process of fine-tuning itself.
Communication is interpreted under Abe's broader notion of curation as a form of communication. The context shift manifests itself through the reading of the narrative of madness, which invests the reader with interpretative freedom and motivates her to pursue the meaning in her own mind through a simple, immersive visualization of the system's fringe 'mental state'.

Conclusion

This paper presents a case study of the Paranoid Transformer. It claims that framing a machine-generated narrative as a narrative of madness can intensify the personal experience of the reader. We explicitly address three critical aspects of chance discovery and claim that the resulting system could be perceived as a digital persona in a fringe mental state. The crucial aspect of this perception is the reader, who is motivated to invest meaning into the resulting generated texts. This motivation is built upon several pillars: a challenging visual form that focuses the reader on the text; obfuscation that opens the resulting text to broader interpretations; and the implicit narrative of madness that is achieved through the curation of the dataset for the fine-tuning of the model. Thus we intersect the understanding of computational creativity with the fundamental ideas of the receptive theory.

References

Abe, A. 2011. Curation and communication in chance discovery. In Proc. of 6th International Workshop on Chance Discovery (IWCD6) in IJCAI.

Alnajjar, K.; Leppänen, L.; and Toivonen, H. 2019. No time like the present: Methods for generating colourful and factual multilingual news headlines. In The 10th International Conference on Computational Creativity, 258–265. Association for Computational Creativity.

Blanke, O.; Landis, T.; Spinelli, L.; and Seeck, M. 2004. Out-of-body experience and autoscopy of neurological origin. Brain 127(2):243–258.

Briot, J.-P.; Hadjeres, G.; and Pachet, F. 2019. Deep Learning Techniques for Music Generation, volume 10. Springer.

Burtsev, M.; Seliverstov, A.; Airapetyan, R.; Arkhipov, M.; Baymurzina, D.; Bushkov, N.; Gureenkova, O.; Khakhulin, T.; Kuratov, Y.; Kuznetsov, D.; et al. 2018. DeepPavlov: Open-source library for dialogue systems. In Proceedings of ACL 2018, System Demonstrations, 122–127.

Charnley, J. W.; Pease, A.; and Colton, S. 2012. On the notion of framing in computational creativity. In Proceedings of the 3rd International Conference on Computational Creativity, 77–81.

Cook, M.; Colton, S.; Pease, A.; and Llano, M. T. 2019. Framing in computational creativity – a survey and taxonomy. In Proceedings of the 10th International Conference on Computational Creativity, 156–163.

Dennett, D. C. 2014. The self as the center of narrative gravity. In Self and Consciousness. Psychology Press. 111–123.

Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4171–4186.

Dostoevsky, F. 1984. Zapiski iz podpolya – Notes from Underground. Povesti i rasskazy v 2 t 2:287–386.

Eco, U. 1972. Towards a semiotic inquiry into the television message. Trans. Paola Splendore. Working Papers in Cultural Studies 3:103–21.

Fish, S. E. 1980. Is There a Text in This Class?: The Authority of Interpretive Communities. Harvard University Press.

Gadamer, H.-G. 1994.
Literature and Philosophy in Dialogue: Essays in German Literary Theory. SUNY Press.

Ghazvininejad, M.; Shi, X.; Choi, Y.; and Knight, K. 2016. Generating topical poetry. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 1183–1191. Association for Computational Linguistics.

Graves, A. 2013. Generating sequences with recurrent neural networks. arXiv preprint.

He, J.; Zhou, M.; and Jiang, L. 2012. Generating Chinese classical poems with statistical machine translation models. In AAAI.

Hickman, R. 2010. The art instinct: Beauty, pleasure, and human evolution. International Journal of Art & Design Education 3(29):349–350.

Hirsch, E. D. 1967. Validity in Interpretation, volume 260. Yale University Press.

Jamwal, V. 2019. Exploring the notion of self in creative self-expression. In 10th International Conference on Computational Creativity ICCC19, 331–335.

Jhamtani, H.; Gangal, V.; Hovy, E.; and Nyberg, E. 2017. Shakespearizing modern language using copy-enriched sequence-to-sequence models. In Proceedings of the Workshop on Stylistic Variation, 10–19.

Lamb, C.; Brown, D. G.; and Clarke, C. L. 2017. A taxonomy of generative poetry techniques. Journal of Mathematics and the Arts 11(3):159–179.

Lebret, R.; Grangier, D.; and Auli, M. 2016. Neural text generation from structured data with application to the biography domain. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 1203–1213.

Li, J.; Galley, M.; Brockett, C.; Spithourakis, G. P.; Gao, J.; and Dolan, W. B. 2016. A persona-based neural conversation model. CoRR abs/1603.06155.

Locke, J. 1860. An Essay Concerning Human Understanding: and a Treatise on the Conduct of the Understanding. Hayes & Zell.

Melchionne, K. 2010. On the old saw "I know nothing about art but I know what I like". The Journal of Aesthetics and Art Criticism 68(2):131–141.

Menabrea, L. F., and Lovelace, A. 1842. Sketch of the analytical engine invented by Charles Babbage.

Ohsawa, Y. 2003. Modeling the process of chance discovery. In Chance Discovery. Springer. 2–15.

Perišić, M. M.; Štorga, M.; and Gero, J. 2019. Situated novelty in computational creativity studies. In 10th International Conference on Computational Creativity ICCC19, 286–290.

Potash, P.; Romanov, A.; and Rumshisky, A. 2015. GhostWriter: Using an LSTM for automatic rap lyric generation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 1919–1924. Association for Computational Linguistics.

Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; and Sutskever, I. 2019. Language models are unsupervised multitask learners. OpenAI Blog 1(8):9.

Rozin, P. 2001. Social psychology and science: Some lessons from Solomon Asch. Personality and Social Psychology Review 5(1):2–14.

Shiv, V. L.; Quirk, C.; Suri, A.; Gao, X.; Shahid, K.; Govindarajan, N.; Zhang, Y.; Gao, J.; Galley, M.; Brockett, C.; et al. 2019. Microsoft Icecaps: An open-source toolkit for conversation modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, 123–128.

Tikhonov, A., and Yamshchikov, I. 2018a. Sounds Wilde. Phonetically extended embeddings for author-stylized poetry generation. In Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology, 117–124.
Tikhonov, A., and Yamshchikov, I. P. 2018b. Guess who? Multilingual approach for the automated generation of author-stylized poetry. In 2018 IEEE Spoken Language Technology Workshop (SLT), 787–794. IEEE.

Turing, A. M. 1950. Computing machinery and intelligence. Mind 59(236):433.

van Stegeren, J., and Theune, M. 2019. Churnalist: Fictional headline generation for context-appropriate flavor text. In 10th International Conference on Computational Creativity, 65–72. Association for Computational Creativity.

Veale, T., and Cardoso, F. A. 2019. Computational Creativity: The Philosophy and Engineering of Autonomously Creative Systems. Springer.

Veale, T. 2019. Read me like a book: Lessons in affective, topical and personalized computational creativity. 25–32.

Weizenbaum, J. 1966. ELIZA – a computer program for the study of natural language communication between man and machine. Communications of the ACM 9(1):36–45.

Wheatley, J. 1965. The computer as poet. Journal of Mathematics and the Arts 72(1):105.

Whitehead, J. 2017. Madness and the Romantic Poet: A Critical History. Oxford University Press.

Yamshchikov, I. P., and Tikhonov, A. 2019. Learning literary style end-to-end with artificial neural networks. Advances in Science, Technology and Engineering Systems Journal 4(6):115–125.

Yamshchikov, I. P.; Shibaev, V.; Nagaev, A.; Jost, J.; and Tikhonov, A. 2019. Decomposing textual information for style transfer. In Proceedings of the 3rd Workshop on Neural Generation and Translation, 128–137.

Yan, R.; Li, C.-T.; Hu, X.; and Zhang, M. 2016. Chinese couplet generation with neural network structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 2347–2357.

Yarkoni, T. 2019. The generalizability crisis.

Yi, X.; Li, R.; and Sun, M. 2017. Generating Chinese classical poems with RNN encoder-decoder. In Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data, 211–223.