Conformity of Eating Disorders through Content Moderation


JESSICA L. FEUSTON, Northwestern University
ALEX S. TAYLOR, City, University of London
ANNE MARIE PIPER, Northwestern University

For individuals with mental illness, social media platforms are considered spaces for sharing and connection. However, not all expressions of mental illness are treated equally on these platforms. Different aggregates of human and technical control are used to report and ban content, accounts, and communities. Through two years of digital ethnography, including online observation and interviews, with people with eating disorders, we examine the experience of content moderation. We use a constructivist grounded theory approach to analysis that shows how practices of moderation across different platforms have particular consequences for members of marginalized groups, who are pressured to conform and compelled to resist. Above all, we argue that platform moderation is enmeshed with wider processes of conformity to specific versions of mental illness. Practices of moderation reassert certain bodies and experiences as ‘normal’ and valued, while rejecting others. At the same time, navigating and resisting these normative pressures further inscribes the marginal status of certain individuals. We discuss changes to the ways that platforms handle content related to eating disorders by drawing on the concept of multiplicity to inform design.

CCS Concepts: • Human-centered computing → Empirical studies in collaborative and social computing.

Additional Key Words and Phrases: Eating disorders; mental illness; social media; online communities; content moderation; digital ethnography; conformity; multiplicity.

ACM Reference Format: Jessica L. Feuston, Alex S. Taylor, and Anne Marie Piper. 2020. Conformity of Eating Disorders through Content Moderation. Proc. ACM Hum.-Comput. Interact. 4, CSCW1, Article 40 (May 2020), 28 pages. https://doi.org/10.1145/3392845

1 INTRODUCTION

Participation of diverse groups on social media platforms, such as Facebook, Instagram, and Reddit, occupies a large contingent of work in Computer-Supported Cooperative Work (CSCW). Research addresses the proliferation of networks and communities across these platforms as well as the content of discussions and practices of sharing [1, 5, 48, 73, 83, 109].
Emergent within this literature is an emphasis on understanding the practice of content moderation and associated experiences. As Tarleton Gillespie writes, content moderation is central to what online platforms do [58]. Moderation of participation and discussion has been studied within general contexts, such as Reddit [74, 76], as well as specific ones, including examination of hate speech and online harassment [26, 116, 135, 136]. Much discussion in this domain involves identifying specific topics of conversations [28, 136], determining which topics should be encouraged or removed [33, 55, 60, 129], and understanding interactions between manual and automated forms of regulation [75, 130]. In the CSCW- and Human-Computer Interaction (HCI)-related literature, as well as publicity from large tech firms [118], the moderation of individuals and groups has been treated as a necessity.

In this paper, we aim to approach the topic of platform content moderation from another perspective. Platform moderation involves various configurations of human and algorithmic activity (e.g., flagging content, content removal). Here, we closely attend to how moderation happens and what the consequences of moderation are for members of marginalized groups expressing non-dominant narratives. We argue that the relations between the social and technical (i.e., the sociotechnical) afforded on social media platforms exert an active force, producing and reproducing a conformity to particular norms and values. Conformity, therefore, is not only established through the formally documented rules in a platform’s standards and guidelines [49, 58, 113]. Similarly, it is not solely dependent on the technological features and underlying structures of a platform. Rather, it emerges as a tacit set of norms and values through the interplay between a platform’s features and how a group’s members come to actively moderate talk and interaction. Our interest, then, is in how the distinctive features of social media platforms interweave with the social practices of moderation and how such sociotechnical relations serve to sustain and amplify certain norms and values that often exclude or marginalize non-dominant narratives.

We examine these social and technical practices of content moderation on social media platforms as they relate to individuals with eating disorders. The work that follows is grounded in two years of digital ethnography, most recently focusing on the experiences of individuals with eating disorders across an ecosystem of social media platforms. In addition to analyzing online content, we interviewed 20 individuals with eating disorders who reported having content removed from social media platforms, including Facebook, Instagram, Reddit, Tumblr, and Twitter. Through a constructivist grounded theory approach to analysis [30], we show that the pressures of moderation can have damaging consequences, especially for marginalized groups. These consequences include loss, labor, and displacement, as well as wider processes that reinforce ideas around which versions of mental illness are sanctioned as ‘normal’ and ‘acceptable’ in online spaces. We will show that resistances arise in response to these many consequences and to the effects of being marginalized.
Individuals and groups find ways to work around platform processes through the creation of different user accounts and establishment of splinter communities forged through ingroup, grassroots processes of community moderation.

We make three primary contributions. The first is a detailed account of how members of a marginalized group—individuals with eating disorders—experience content moderation, extending prior work in this space [25, 74, 76, 106]. Although content moderation is typically conceptualized as necessary for the greater good of online communities (e.g., preventing harassment, protecting individuals from graphic or triggering content), its potential harms are not well understood or documented. Our analysis reveals the ways in which content moderation has consequences, sometimes severe, for people with eating disorders. These consequences include loss of personal content (e.g., used for self-reflection) and community support, as well as the creation of additional restorative work for people who have been subject to moderation.

Second, we turn to conformity as a way of understanding the broader social and technical practices of content moderation. In this paper, we view these practices as mechanisms of social influence and control. Conformity, in this context, is simultaneously a particular configuration of norms and values and an active process in which people with differing norms and values are pressured to assimilate or comply [31]. Conformity is central to many social processes. Our aim is not to contend the importance of conformity or to call for its eradication. Rather, we use this paper as a space to question and call attention to a particular practice of conformity—content moderation as it relates to eating disorders online—that has become pervasive and taken for granted across many online spaces. We discuss how content moderation contributes to wider processes of conformity, set within historical and contemporary contexts, where particular versions of mental illness are legitimized and others are rejected.

Finally, as a counterpoint to conformity, we reflect on what it means to design for multiplicity in online social platforms. Drawing from Annemarie Mol’s work [103], we discuss how eating disorders are enacted differently across various sociotechnical configurations of online spaces and actors. These differing, multiple versions of eating disorders are simultaneously performed and entangled within different platforms, communities, accounts, and people [69, 132]. Here, we use multiplicity to focus on the many different versions of body image and body management as they are performed online with respect to eating disorders and, more broadly, mental illness. In this context, multiplicity helps us attend to the range of norms and practices within eating disorders online and the restrictive impact of platform content moderation. We articulate directions for future work aimed at creating more diverse and equitable online spaces.

2 RELATED WORK

Our work builds on a growing body of literature related to content moderation, eating disorders, and members of marginalized groups online. Though we use ‘eating disorder’ and ‘mental illness’ throughout this paper, we do not reference or rely on medical interpretations or delineations of these experiences.
Instead, these phrases act as social groupings that help us connect our work with other technologists and conversations in this domain. In this paper, we are addressing particular ways that bodies (e.g., body image, body management) are constructed online and how people respond to and negotiate norms around this content.

2.1 Content Moderation on Social Media

A large body of work within CSCW- and HCI-related literature examines content moderation in the context of social media and online communities [8, 26, 27, 58, 74–78, 85, 100, 106, 129, 130]. Practices of moderation aim to facilitate quality content, civil discussion, and, generally speaking, online spaces where individuals can engage and participate without overt fear of abuse, harassment, or accidental viewing of violent, illegal, or triggering activities [85, 86]. Throughout this paper, when we refer to content moderation, we refer to “the governance mechanisms that structure participation in a community to facilitate cooperation and prevent abuse” [62].

What we call platform moderation, others have termed commercial content moderation [122]. This practice of moderation involves the organized ways in which content produced by social media users is subject to surveillance, report, review, and removal [106]. These practices often rely on decisions passed down by dispersed groups of outsourced laborers [61, 122]. Though mechanisms behind content moderation are largely proprietary and private (i.e., a black box [75]), some researchers have illuminated the underpinnings of these sociotechnical processes [58, 61, 106, 122]. Broadly, content moderation may involve automated systems, community flagging and reporting [32], and outsourced labor [61, 121, 122]. Several social media platforms, including Reddit and Facebook (e.g., subreddits, Facebook groups), also rely on community moderators—at times, with automated systems—to manage groups of individuals with similar interests, as well as transient visitors [75, 78, 81, 100, 130]. We distinguish this instance of moderation, in which moderators and other members of communities engage in shaping (i.e., moderating) particular forms of participation online, from platform moderation. However, as we argue, practices of platform and ingroup community moderation are entangled.

Given the pervasiveness of content moderation, a growing area of interest involves understanding the experience of being moderated [74, 76, 77, 106]. This research thread speaks to the frustration and, at times, confusion of having content removed. Though marginalized communities and groups of people are not highlighted currently in this body of work, researchers have suggested that content moderation may have more detrimental effects on their members [77, 106]. The present study helps bridge this gap in the literature by engaging with a particular marginalized group (i.e., individuals with eating disorders) through digital ethnography, including online observation and interviews. In addition to demonstrating the harms of content moderation in this context, we animate its role in constituting eating disorders and, as we detail in the discussion, illness narratives online.

2.2 Moderating Eating Disorders Online

Researchers have also studied content moderation as it relates to eating disorders.
This work typically engages with ‘deviant’ (i.e., rule-breaking) content from pro-eating disorder (pro-ED) communities. Research in this domain has used machine learning techniques to characterize types of content removed [22, 23] and behavioral responses to moderation, including the ways that individuals use platform features, such as hashtags, to circumvent banned content [25, 57]. Findings from these works provide valuable insight into how the practices of platform moderation (i.e., particularly the banning of hashtags) amplify existing challenges to moderation and may inadvertently overlook others. For example, Stevie Chancellor and colleagues [25] found that attempts to moderate certain types of eating disorder content through hashtag bans resulted in a broader diversity and lexical variation of hashtags. The increased lexical variation of eating disorder hashtags resulted in additional challenges to moderation conducted via hashtags. Ysabel Gerrard [57], similarly, detailed limitations to practices of hashtag-based moderation, including the ways in which recommender systems can actively circulate pro-ED content. Due to these pitfalls of platform moderation, researchers note that alternatives are necessary [25, 47, 57]. In this paper, we extend these prior works through an empirical study of the experience of content moderation and a subsequent discussion detailing new avenues for design.
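
To make the brittleness of hashtag bans concrete, consider a minimal sketch, assuming a toy exact-match blocklist; the banned tags and example posts are hypothetical and not drawn from the cited studies.

```python
# Illustrative sketch only: an exact-match hashtag blocklist catches just the
# spellings it already knows, so lexical variants of a banned tag pass through,
# mirroring the circumvention dynamic documented by Chancellor et al. [25].

BANNED_HASHTAGS = {"#proana", "#thinspiration"}  # hypothetical blocklist

def is_blocked(post_text: str) -> bool:
    """Return True if the post contains an exactly matching banned hashtag."""
    tags = {word.lower() for word in post_text.split() if word.startswith("#")}
    return bool(tags & BANNED_HASHTAGS)

posts = [
    "rough day #proana",         # caught: exact match against the blocklist
    "rough day #proanaa",        # evades: a single appended character
    "rough day #thynspiration",  # evades: vowel substitution
]

for post in posts:
    print(is_blocked(post), "->", post)
```

Each new variant must be discovered and added to the blocklist after the fact, which is one reason these studies argue that hashtag-based moderation alone is insufficient.
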
Content moderation is not, of course, the lone interest for researchers examining eating disorders online. Prior works detail a large and diverse spectrum of inquiries, including characterizations of content [10, 20, 34, 59, 79, 112, 114, 137], information-seeking behaviors [11, 51, 92, 107], recovery likelihood [24], and ethical concerns, including those related to censorship [131]. Across this body of research we see a commitment to supporting people with eating disorders and understanding the complexities of eating disorders in digitally-mediated spaces. Speaking to this complexity, Elizabeth V. Eikey, across a number of collaborations [43–45], describes how technologies, including social media platforms and weight loss applications, can be simultaneously beneficial and negative for individuals with eating disorders. Similarly, Pamara F. Chang and Natalya N. Bazarova’s examination of disclosure-response sequences on Pro-Ana Nation, an online forum, demonstrates how community-provided support within pro-anorexia spaces can be detrimental to health [29]. These examples highlight tensions within online spaces that individuals with eating disorders frequent. Specifically, as these spaces are not inherently or wholly positive or negative, benefits and consequences are entangled in the ways that people use them.

Given the complexities of technology use in this domain, computer-mediated support for individuals with eating disorders presents a challenging area for research. With respect to research focusing on pro-ED content and communities, recommendations tend to settle within a narrow window of approaches. These approaches may include novel forms of moderation [22, 34], such as automated systems to assist human moderators, and health interventions [23]. These design recommendations may benefit a number of people. For example, they aim to reduce the prevalence of triggering content online, and its potential for contagion [19], and provide recovery support to individuals posting about certain topics. However, little is known or understood about their potential for harm. Understanding conflicting and negative effects from well-intentioned and health-minded design can support academic and industry professionals in developing equitable approaches to eating disorders online that work to mitigate unintentional harm and oppression. Additionally, in considering prior works, many studies do not engage directly (i.e., through interviews) with the communities they observe and plan to serve. First-person accounts are vital to better understand the complexity of eating disorders online. With this paper, we build on these earlier studies with interviews and attention to the consequences of content moderation for individuals with eating disorders.

2.3 “On the Margins” of Social Media and Online Communities

The contemporary experience of living with an eating disorder cannot be understood without considering the historical context of mental illness. Historically, individuals living with mental illness have encountered stigma, social ostracization, and forms of oppression, including forced institutionalization [52, 89, 128]. Specific to eating disorders, research has found that anorexia and bulimia are significantly more stigmatized than depression [123], and that eating disorders are associated with a variety of stigma and negative stereotypes dependent on the specific diagnosis (e.g., anorexia, bulimia, and binge eating disorder) [119]. In this paper, we join with other mental health and social media researchers in considering the experiences of people with eating disorders—and mental illness, more generally—through a history of marginalization [47, 108, 112]. Situating the experiences of these individuals in the context of marginalization helps us to better attend to power dynamics and differentials, acknowledge labor practices, and contribute to a growing body of literature that examines the marginalization of groups and designs for more equitable online experiences [8, 64, 72, 120, 126].

People with eating disorders may seek online socialization and support for a number of reasons [12, 42, 80]. Online spaces can help reduce feelings of loneliness and isolation derived from stigma and also connect individuals with people and communities where experiences are shared and understood [29, 43, 80]. Similarly, beyond research examining eating disorders, there is a body of work that examines how individuals with other ‘non-normative’ identities and experiences engage and participate online [36, 41, 63, 96, 101]. Research within this corpus often addresses the everyday lives of people from marginalized and minoritized groups. For example, Bharat Mehra and colleagues examine internet usage by low-income families, sexual minorities, and African-American women [101], finding that the internet can effectively operate as an instrument of empowerment. Other research foregrounds the importance of having safe and supportive online spaces, particularly for opportunities related to self-disclosure and identity work [2, 3, 36]. Collectively, this work documents the benefits and detriments of participation online and discusses how to improve online spaces for individuals at society’s margins. Though there are benefits to online participation for members of marginalized groups, there is also an array of harms.
For example, women, people of color, members of the LGBTQ community, and individuals with mental illness all encounter disproportionate and targeted forms of harassment online [40, 47, 90]. Ongoing research aims to address problems with harassment, such as through work with social organizations, communities, and platforms, including Hollaback [37] and HeartMob [8]. Social media platforms are also invested in understanding and solving problems related to online harassment [87]. However, as Gillespie describes, platform efforts related to reporting and mitigating harassment can themselves contribute to the problem (e.g., such as when individuals organize to use reporting features to flag or report a specific user—or group of users—who they do not agree with or like). Here, we consider how features designed for good (i.e., moderation to support positive experiences and health) can work to exclude individuals with eating disorders and contribute to the oppression of a marginalized group online.

3 METHOD

A two-year digital ethnography, including online observation and interviews, grounded our understanding of the experience of content moderation for individuals with eating disorders. During this digital ethnography, we studied topics related to mental health and mental illness across several social media platforms and online communities. Most recently, we integrated our on-going digital ethnography, particularly those data collected through online observation, with 20 semi-structured interviews. In these interviews, we spoke with individuals who have or have had eating disorders and experienced content moderation online.

3.1 Digital Ethnography

Digital ethnography has a foothold in a number of disciplines, including HCI, sociology, and anthropology [117]. Our approach to digital ethnography involves a commitment to the ‘ethnographic turn’ in HCI [39], as well as works produced by ethnographer danah boyd [13, 14] and sociologist Dhiraj Murthy [104, 105]. Through boyd’s ethnography of the social lives of teenagers in a networked era we see a commitment to immersion, observation (e.g., digitally-mediated and physical), and in-person interviews [14]. Drawing on these practices, we also turn to Murthy, who writes that digital ethnography is “ethnography mediated by digital technologies. [...] As this definition suggests, digital ethnographies can be ethnographic accounts of both offline and online groups. The ‘digital’ in this mode of ethnography stems from the methods rather than merely the target ethnographic object” [105]. Our understanding of digital ethnography supports a range of practices and forms of engagement with research populations.

Unlike traditional ethnographies, our research does not include a conventional field site, such as a specific geographic location or collocated community. The posts, accounts, and communities we observed online, as well as the participants we interviewed, are geographically dispersed and, instead, better understood through the mediating technologies of a networked world [13]. As with many ethnographies, our digital ethnography is not apolitical or disengaged with ethical responsibility and considerations. We aim to use this ethnography to foreground the experiences and perspectives of people with eating disorders who have had content removed.
For us, digital ethnography provides a means to connect and empathize with marginalized populations. It also enables us to observe and understand changes in individuals, communities, and posting activity over time.

3.1.1 Online Observation.

During the third week of November 2017, we conducted online observation on Instagram that resulted in an initial corpus of 2,102 posts once duplicates were removed. Initially, our inquiry focused on understanding multimodal expressions of mental health and illness [91]. To build this corpus, we used five hashtags (i.e., #anorexia, #anxiety, #bipolar, #depression, #mentalillness) that had been validated in previous research [3, 23, 25]. We collected posts four times a day by manually saving (i.e., hand-scraping) nine Top Posts and nine Most Recent posts for a given hashtag. During this preliminary data collection period, in addition to saving posts, we also spent time memoing on the content we observed. This involved copying and pasting URLs and taking screenshots of images, videos, captions, and comments to incorporate in documents. Juxtaposed with this online content, we wrote extensively during our initial interpretive work.

Following this one-week data collection period, we used our initial corpus as a starting point for continued online observation. Between November 2017 and September 2018 we collected a total of 6,223 Instagram posts (n=2,188 unique users) by tracing through accounts of individuals who had posted, commented, and liked posts in our corpus. This traversal included full accounts, posts without hashtags, and posts without any text. Though we were interested in understanding how people posted about mental health and mental illness on Instagram, we noticed a number of instances related to content removal. Specifically, we observed a number of posts in which individuals who had content removed, including content related to self-starvation and self-harm (e.g., cutting), shared about their frustrations with content removal, reporting features, account terminations, and Instagram as a platform.
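
As a rough illustration of the bookkeeping such hand-scraped collection implies, the following minimal sketch deduplicates saved post records by URL, the step reflected in the corpus counts above; the record fields and file name are hypothetical, not the authors' actual instruments.

```python
# Minimal sketch (assumed record format, not the authors' actual tooling):
# posts saved by hand across repeated daily passes will contain repeats,
# so keep only the first record seen for each post URL.
import csv

def dedupe_posts(rows):
    """Keep the first record seen for each post URL."""
    seen, unique = set(), []
    for row in rows:
        if row["url"] not in seen:
            seen.add(row["url"])
            unique.append(row)
    return unique

# Hypothetical input: one row per saved post (url, hashtag, saved_at, notes).
with open("hand_scraped_posts.csv", newline="", encoding="utf-8") as f:
    corpus = dedupe_posts(csv.DictReader(f))

print(f"{len(corpus)} unique posts after removing duplicates")
```
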
In addition to Instagram, we conducted data collection on Reddit and Tumblr. We did this to expand our corpus beyond a single social media platform. Broadening data collection provided a more nuanced view of the ecosystem [17, 36] in which people with eating disorders interact, and how this ecosystem changes and is disrupted by platform moderation of content. This ecosystem view is important for a number of reasons. Namely, people who use social media or online communities often belong to a number of different online spaces, rather than just one [36]. By examining experiences across an ecosystem, rather than a specific platform, we can better understand what is common and what is extraordinary. Understanding the systemic reach of common practices, as we do here (i.e., platform content moderation), can support critical interrogation of activities or features that are concealed or taken for granted.

Our online observation of Reddit began during November 2018, when Reddit issued a series of bans to communities such as r/ProED, r/ProEDMemes, and r/ProEDAdults. Following this incident, we observed how a number of banned subreddit members joined other social media and online communities. At this time, and continuing throughout the following months, we collected public posts on Reddit discussing the platform’s decision to ban these subreddits. We also gathered relevant content from other online spaces, including online communities, individual blogs, and social media.

When we began interviewing for this study, several participants described content removal and account bans on Tumblr. As such, we decided to collect posts from Tumblr beginning in June 2019 to supplement our understanding of how people with eating disorders use the platform and respond to content moderation. We began our manual crawl through Tumblr using several eating disorder search terms, such as those used and validated in previous research (i.e., anorexia, proana, proED, bulimia, eating disorder) and others occurring alongside these hashtags on posts and within accounts [112]. We included posts and accounts in our analysis when we observed a mention of moderation. However, though many posts and accounts were not included in analysis, they did support our understanding of Tumblr as a platform, as well as the ways in which people with eating disorders use it.

We included an additional 208 threads started by 103 unique users from Reddit and 160 posts from 23 accounts on Tumblr in our analysis. These data were used to inform our line of questioning for interviews and supplement analysis. When presenting these data, as well as the content from Instagram and other online spaces (e.g., blogs, online communities) mentioned above, we alter the wording of posts so that they are not easily searchable or identifiable. Additionally, our online observation involves currently active and quarantined subreddits, as well as several smaller, online communities that are not housed on social media platforms. To preserve the privacy of these communities and their members, we do not name them. However, in this paper we do refer to banned eating disorder support communities on Reddit by their names. We do this as a form of activism to raise awareness about the termination of communities that provided support for a marginalized group.

3.1.2 Interviews.

We conducted semi-structured interviews with 20 adults (ages 18–57; M=29) with eating disorders who had content related to their disorders removed from online communities and social media platforms. Though eating disorders can impact anyone [114], regardless of any particular facet of identity, only three participants in our study identified as male (17 female). This is not to suggest that eating disorders are more prevalent or significant for women, only that our methods of recruiting did not adequately reach out to or engage with other individuals. With respect to race, eating disorders often run the risk of being associated predominantly with white women [84]. While the majority of our participants were white (n=12), six were African-American, one was Hispanic, and one identified as multi-ethnic.

Eligibility for this study was not contingent on a diagnosis. However, barring diagnosis, participants were required to identify as having an eating disorder. We invited individuals living with and in recovery from eating disorders to participate. As such, we have a broad spread of experiences represented by our participants.
For example, several of our participants described being in recovery, while others were relapsing at the time of the interview or had grown accustomed to living with their disorder. Many individuals in our study described specific categories of eating disorders (e.g., anorexia, binge eating disorder, bulimia, other specified feeding or eating disorder), even when they had not received a diagnosis. Others, however, identified through particular activities, such as experiences with self-starvation and binge eating. These various participant experiences are difficult to neatly categorize. Many participants self-described having multiple experiences, such as with diagnosis and with disordered eating practices. We interviewed individuals who were members of pro-ED communities, as well as individuals who were members of pro-recovery or diet communities. The thread connecting our participants was their experience, including past experience, with content moderation. The content removal experienced by our participants included posts, accounts, and communities.

We recruited participants from an online eating disorder support community (n=4), Reddit (n=3), Craigslist (n=12), and snowball sampling (n=1). We issued a pre-interview phone screener, where we called participants to verify their age, eating disorder status, and experience with content moderation. Interviews lasted an average of 45 minutes and were held over the phone (n=18) or in-person (n=2). During the interview, we discussed topics related to experiences with online eating disorder accounts and communities, content removal, reactions to content removal, support resources, and opportunities for platform redesign. Interviews were audio recorded and transcribed for data analysis. Participants received a $30 Amazon gift card or $30 in cash. When referencing our participants throughout the paper, we use pseudonyms.

3.2 Data Analysis

Our approach to data analysis follows a constructivist grounded theory process, where members of the research team developed themes through iterative coding, memo writing, and constant comparison of data to developed concepts [30]. The process of memoing, as it relates to online observation, involved manually saving links to spreadsheets and taking screenshots of images and text to move into digital documents (i.e., Microsoft Word, Microsoft OneNote). We then wrote our memos directly juxtaposed with the online content we were observing. With respect to videos, which were rare in our dataset, we watched the video online and took a screenshot of the thumbnail, as well as anything visually relevant to the current inquiry (i.e., content moderation). We did not discretely memo around online observations and interviews. Insights, quotes, and images from these various collection methods were entangled in our memos, where they co-informed interpretations of one another and of our thematic development.

Preliminary themes included types and motivations for posting content that was eventually moderated, receipt of news (i.e., how participants came to know their content had been moderated), sensemaking around moderation, consequences of moderation, workarounds and resistance, and tensions with coexistence (i.e., how individuals navigate eating disorder communities that may include triggering content).
Through our analysis, we began to understand the ways in which harm can be caused by good intentions (e.g., content moderation and support resources), as well as how individuals push back on oppressive practices and participate online in ways that support the diversity of experiences with eating disorders.

3.3 Ethical Responsibility

We received approval for this research from [institution]’s IRB. In working with the IRB for this approval, we included two safety provisions for participants: a mental health practitioner and a document of mental health support resources. The mental health practitioner was available as a resource to participants (i.e., contact information on the consent documentation). Additionally, participants were informed that, should they describe current experiences with psychological distress, suicidal ideation, or self-harm, the interview would immediately end and their contact information would be shared with the practitioner, who would then reach out. Another provision we included was a document of mental health support resources, including several phone and text-based helplines. This document was sent to individuals along with the consent documentation, prior to the interview taking place.

As part of our method, we are mindful of how the analytic frame of marginalization requires accounting for and reflecting on how our expectations, values, and norms as researchers, and as individuals within society, differ from those of our participants and online posters [16, 66]. Our work is shaped by our personal experiences, including our own individual experiences with eating disorders and our experiences alongside close family members and colleagues with eating disorders. Our personal experience inherently shapes our approach and perspective, including our interest in this line of inquiry, interpretation of the data, and conscious commitment to foregrounding an experience that is largely unrepresented in current CSCW and HCI research. It is through this analytic framing that we began to see the concept of conformity take central focus in our analysis and understanding of content moderation.

We view ourselves as having an obligation to the communities and people we research [104, 127]. This means foregrounding experiences without sensationalizing them. For this reason, we do not include any images or video screenshots in this paper. Visual modes of communication, in particular, when taken out of context, such as posts removed from the entirety of an individual’s account, can make certain topics or practices seem unfamiliar or other. Similarly, because we do not have the consent of the individuals we observed online, we do not share unmodified text excerpts [50].

4 FINDINGS

Through our analysis, we show how content moderation involves the interplay of social norms and technical features of a platform that work to silence individuals and remove support, create new labor by encouraging responses and resistance, and shape community-led practices of moderation. To set the scene for our findings, we first walk through a case with one of our participants that illustrates how platform content moderation works in this context.

Dani, now 20 years old, has participated on social media and online communities for nearly a decade. Though her personal experience with eating disorders was not the only content she shared online, it did specifically result in account bans on both Tumblr and Instagram.
With respect to Tumblr, prior to the termination of her account, Dani used a number of strategies to manage her public eating disorder blog and limit unwarranted attention. For example, she avoided using features that could establish links to other content or aid in platform search and providing tips or advice to other users (i.e., “telling people you should do this”). Despite these strategies, Dani felt like she was “walking on eggshells” whenever she posted. Her sensitivity to the workings of Tumblr (e.g., its capacities for linking and connecting content) was motivated by wanting to maintain a highly personal blog detailing her own sense of self and body image, while, at the same time, wanting to escape criticism and platform moderation. Specifically, she “didn’t want people to come crucify me because I was talking about, you know, the part of eating disorders that nobody wants to see. That nobody wants to hear.”

Despite Dani’s strategic use of Tumblr, her eating disorder blog attracted attention. A year into managing this blog, Dani received an “aggressive” anonymous message asking her to delete an unspecified post about body image. “I didn’t know exactly which post they were talking about,” she said. “[T]hat wasn’t the first time I posted about me not liking the way I looked... So, for a moment I sat and stared at the [message], and I was like, ‘What? Which one?’” Rather than remove any posts, Dani sent a message back to the anonymous user, telling them to “just block” her. Shortly after, Dani’s account was terminated by Tumblr. An email from Tumblr’s support team notified her the eating disorder blog had been deleted for “violating their terms” and, though it invited appeal, Dani’s efforts to receive an explanation and reinstate her content remain unanswered. Though what triggered the ban is unclear, Dani placed blame with the anonymous user who messaged her earlier in the day. However, it may have been another user, or even an automated content reporting system, that was ultimately responsible.

Despite being subjected to regulation, Dani resisted the ban on Tumblr by creating a new account and, ultimately, finding new online communities, including those off of social media, to join. Even with new accounts and online spaces, Dani’s experience of being banned shaped her future interactions online, including practices of participation. She explained: “I’m not as talkative anymore... I just kind of lurk... I know there’s still people posting about eating disorders on there, but, when I see a post from them, I immediately get nervous saying, you know, if I interact with this person...someone is going to find my account and find a reason to make me disappear.”

In Dani’s case, we see how a range of sociotechnical mechanisms and practices can work together to monitor and regulate content. Specifically, we see that moderation is made possible through the tight coupling of social interactions and the underlying technical structure of a platform (i.e., how the platform makes possible specific moderation practices, such as reporting and removal). These entangled relations—the interplay between the social and technical—do not only influence what and where some individuals post, but also shape appropriate or acceptable versions of having an eating disorder online.
For Dani, we see that content moderation has serious consequences, including reduced social engagement and online expression. Additionally, we see how moderation and its consequences, including the possibility of further sanction, serve to amplify Dani’s sense of being subject to control and surveillance. As this example begins to show, individuals experience a number of serious consequences following from moderation. These consequences may lead many to react against and resist platform moderation. However, as we will show, moderation is not simply an external force. It is also an interactive process that shapes how groups of individuals with diverse and varied experiences of eating disorders establish their own community-led moderation practices as part of engaging and participating within online spaces.

4.1 Experiencing Content Moderation as Loss

Throughout our data, and exemplified in Dani’s case, we learned of many unintended consequences of moderation, including reduced online engagement and loss of community. Marie, discussing an experience with account termination, addressed how, for her, moderation “was kind of embarrassing.” She “felt like I was being told I was wrong. Or getting punished when I hadn’t done anything. I felt like I hadn’t done anything wrong and I was angry about that, as I felt it was unfair.” The initial anger and confusion associated with moderation, as Marie and others in our dataset described, have been detailed in prior research [74]. These—often strong—emotions are entangled with the ways that individuals learn about and make sense of the experience of moderation, which can be confusing due to the lack of transparency and consistency. Marie’s comment, in addition to describing her embarrassment and anger, speaks to recent findings detailed by Shagun Jhaver and colleagues [74]: notably, that many individuals who have been moderated feel this was done unfairly. Here, rather than focus on perceptions of fairness or emotional responses to moderation, we attend to the various losses, including personal content for reflection and community support, that moderation entails.

Loss of content is central to the experience of being moderated. Platform moderation often involves unsolicited removal of personal posts and accounts, which are maintained by and for the individual. As most of our participants were not in the habit of saving content to multiple locations, their content was lost entirely. By removing or deleting this personal content, platforms effectively remove certain experiences and prevent opportunities for reflection and catharsis. Andrea and Dani both equated aspects of their online content with “diary” entries. This perspective shows how online content related to eating disorders is not merely a snippet of conversation or the representation of an experience. Rather, it plays into how people think of themselves and gives shape to an archive where posts can be revisited and reflected upon. Specifically, content in aggregate becomes a resource for reflection in the short and longer term [93]. While access to online content, particularly content functionally similar to a diary or journal, is valuable at any point during an individual’s experience with an eating disorder [95], Andrea talked about how rereading her earlier posts was beneficial during recovery.
She said: “I remember I used to post a lot of intrusive thoughts and then, going through recovery, I started having a lot fewer of those. And then there’s a lot of elements where you’re like, ‘Oh, am I in a really bad place?’ And then you go back and look at it and you’re like, ‘Oh, I’m not having 50 obsessive thoughts today about needing to weigh myself...’ I can actively see how it’s changed or even like at the time too, seeing how it got worse. That was really helpful to me right when I started recovery...”

Content removal as a practice of moderation can suggest that certain experiences with mental illness are unwelcome and unworthy [47]. Notably, this interpretation coexists alongside the view of content removal as beneficial due to, for example, the reduction of potentially triggering imagery and text. In the instances we articulate here, moderation can feel like a loss of personal voice or silencing of experience. While many of our participants shared content related to living with an eating disorder, Grace discussed how posts on her Instagram account centered on “trying to be healthy” and “trying to gain my weight back.” Despite this recovery context, Instagram removed a selfie that Grace shared because she looked too thin. The removal of her post from Instagram left Grace feeling sad, ashamed, and “unworthy to be seen.” This example demonstrates how various types of eating disorder content, such as recovery imagery and thinspiration, can share similarities. These similarities speak to the difficulties of classifying mental health and illness content on social media [46], as well as the ways platforms may inadvertently delegitimatize experiences while aiming to provide certain protections or support (e.g., helping people avoid triggering content).

Another form of loss that individuals experience as a result of moderation involves loss of community and social support. When platforms moderate content, they may “[take] away a support system,” Christy explained. Loss of community, such as through practices related to account and community bans, can lead to social isolation, particularly for individuals who “don’t have anywhere else to go,” one former member of the now banned r/ProED wrote. As another former member described, the subreddit ban was “extremely upsetting. So many people used this [subreddit] for help and support. We can’t always find that support offline.” Social isolation due to practices of moderation can affect health. For example, Dani had a few helpful “people [on Tumblr] that would tell me, you know, ‘You’re not alone. I’m here to talk,’ and stuff like that.” Following the ban of her Tumblr account, Dani lost these meaningful connections, which caused her to feel “depressed, ‘cause I didn’t have anyone to talk to.” In addition to depression, we observed instances in which the experience of moderation led to dangerous offline behaviors, including purging. A former member of r/ProED wrote, “I was really trying to recover... I don’t know what to do now. I really feel like purging everything. This is so stupid.” In attempting to remove content classed as non-normative and harmful, platforms can create a downward stream of negative consequences, including loss of social support that, at times, amplifies illness. Content removal is not the only practice of moderation that results in loss of community.
For example, on Reddit, the practice of quarantining effectively isolates certain communities and their members from the larger Reddit community. In particular, quarantine suggests that, while certain subreddits are “not prohibited,” they are, nevertheless, not normative or socially sanctioned. Quarantine on Reddit is established in several ways. Take, for example, the community that Morgan moderates. At the time of the interview, the subreddit had been under quarantine for several months. Functionally, this means that visitors receive a warning screen prior to viewing the subreddit. This warning screen includes the following message: “Are you sure you want to view this community? This community is quarantined. If you or someone you know is struggling with an eating disorder, there are resources that can help. Visit the National Eating Disorders Association...
