Does Feedback from This Device Change Unhealthy Habits? Lessons from my PhD project

HERMSEN Sander
Utrecht University of Applied Sciences
[email protected]
doi: 10.21606/drs.2018.306

Citation: Hermsen, S. (2018) Does Feedback from This Device Change Unhealthy Habits? Lessons from my PhD project, in Storni, C., Leahy, K., McMahon, M., Lloyd, P. and Bohemia, E. (eds.), Design as a Catalyst for Change - DRS International Conference 2018, 25-28 June, Limerick, Ireland. https://doi.org/10.21606/drs.2018.306

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. https://creativecommons.org/licenses/by-nc-sa/4.0/

Feedback from digital technology has often been used to support people in changing undesired, unhealthy habits. As yet, there has been little research into the efficacy of these designs. In my PhD project, I evaluated the acceptance, sustained use, and effect of four designs that provide feedback on undesired habitual behaviour through digital technology. The findings are that the disruptive effect of feedback on undesired habits has been proven, and there is some evidence that feedback may have a lasting effect on behavioural change. (Sustained) use of digital designs that provide feedback is moderated by motivation, age, goal-related aspects, and user experience. The necessity of high motivation to use a device poses challenges for the acceptance of and sustained engagement with designs for behaviour change that rely on feedback. Further challenges concern privacy and the quality of the evaluations of our designs.

Keywords: feedback; digital devices; behaviour change; health behaviour

1 Introduction

Undesirable habits can be very hard to change. In recent years, we have seen a growing number of digital designs that claim to provide a solution. Many of these designs (automatically) record our behaviour and give us feedback on our performance. Evidence of the efficacy of designs that provide feedback on behaviour is slowly accumulating, but remains limited to academic outlets that are historically less accessible to non-behavioural scientists, such as HCI researchers, designers, and design researchers (Hekler, Klasnja, Froehlich, & Buman, 2013). This paper aims to provide designers and design researchers with an accessible overview of my PhD project, which contributes to answering the question whether feedback through digital technology is effective in changing habitual behaviour. To do so, the paper provides a summary of a recent analysis of the current literature, and an evaluation of four existing designs for behaviour change that provide feedback on undesired habits.

In the literature, habits are commonly defined as "behaviour (...) prompted automatically by situational cues, as a result of learned cue-behaviour associations" (Wood & Neal, 2009, p. 580; Gardner, 2014, p. 1). They help us come to terms with the enormous complexity of everyday life by taking away the burden of conscious deliberation from many uncritical decisions. Unfortunately, many of our habits have adverse effects on our own health and that of the planet we live on. The rigid cue-response chain of a strong habit overrides contradictory behavioural intentions (Verplanken & Faes, 1999; Verplanken & Wood, 2006). This may lead to undesired results when habits have a satisfying short-term effect but damaging health consequences in the long run, as with snacking, a lack of physical activity, or alcohol abuse. Furthermore, since habits do not take the current context into account, changed circumstances may render habits unproductive for contemporary life, even though the behaviour may have led to rewards in the past.

The major benefit of habitual behaviour is that it circumvents active consideration of the current context, but this also makes it very hard to change habits using interventions aimed at controlled processing, e.g. through persuasive messages about the consequences of behaviour (Verplanken & Wood, 2006) or attempts to change behavioural intentions (Sheeran, 2002). A more successful way to disrupt undesired habits is to bring habitual behaviour and its context to (conscious) awareness. Self-monitoring, the procedure by which individuals record the occurrences of their own target behaviours (Nelson & Hayes, 1981), enables perception of our own behaviour and adaptation to the current context. This leads to a decrease in unwanted behaviour (Quinn, Pascoe, Wood, & Neal, 2010). Unfortunately, self-monitoring is difficult for even the most motivated individual (Wilson, 2002). There is often a discrepancy between self-reported and actual performance in health behaviours such as calorie intake and physical activity (Lichtman et al., 1992). Accurate self-monitoring is greatly improved by personalised information from external sources (Kim et al., 2013; Li, Dey, & Forlizzi, 2010).

The advent of mobile and interactive media has given us an unsurpassed opportunity to support people in self-monitoring, by providing them with tailored feedback. Feedback has been defined as "actions taken by (an) external agent(s) to provide information regarding some aspect(s) of one's task performance" (Kluger & DeNisi, 1996). Digital technology can offer constant, real-time updates, powered by sensitive measurement devices, often worn on the body. Besides data generation, digital technology can offer habit-disrupting cues such as light and sound signals, buzzes, and push messages. Digital technology is not only useful to present users with evaluations of past behaviour ("reflection-on-action"); because of the ubiquity of wearables and mobile devices, feedback from digital technology offers an unprecedented opportunity for "reflection-in-action" (Schön, 1984): the analysis of behaviour as it occurs. This could greatly increase people's efficacy in self-managing healthy behavioural change.

1.1 Solutionism or smart solutions?

The rapid rise in technological possibilities has been matched by a similar rise in the number of designs on the market that make use of them. Wearable activity trackers (cf. Kooiman et al., 2015) give us feedback on whether we walk enough; sleep monitors monitor our sleep (e.g. Ogihara & Eshita, 2016); smart devices track our eating habits (e.g. Zandian et al., 2009); an app can warn us about situations in which we are likely to smoke a cigarette (e.g. Naughton et al., 2016); and a growing number of devices tell us (and others) what emotions we experience in cases where we are unable to do so ourselves (e.g. Van Dijk, 2017). This increased attention in health design practice has been closely followed by a growing body of literature in design research and human-computer interaction research over the past decades (e.g. Darby, 2001; Fischer, 2008; Froehlich, Findlater, & Landay, 2010; Ludden, 2013; Hänsel et al., 2015; Gouveia et al., 2016). By far the biggest part of this literature investigates the different channels, modalities, and other properties of feedback through digital technology: how to optimally design the feedback technology.

Considering all this attention, it may come as a surprise that there has been relatively little research into whether all this feedback on health behaviour is as effective as we implicitly presume. After all, the rise in designs and in research based on these designs may very well be a case of technocratic solutionism (Morozov, 2013): we have sensors and actuators, especially in smartphones, and we have wearables. Now that we have been provided with these hammers, we suddenly see nails everywhere. But are these really nails?

1.2 When we build it, they will change?

In my PhD project, I investigated whether feedback through digital technology is an effective way to support people in changing their undesired, unhealthy habitual behaviour. Theory supports this hypothesis, with Control Theory (Carver & Scheier, 1985) delivering the best explanation: reflective behaviour change resembles a thermostat. When looking to change their behaviour, people compare their performance to a behavioural goal. When a discrepancy between goal and performance is noted – given enough motivation, opportunity, and the right abilities – people will attempt to reduce this discrepancy. This process depends on conscious scrutiny of behaviour and its effects. Knowing that habitual behaviours are mostly automatic, and thereby outside of conscious scrutiny, the strength of feedback lies in delivering exactly the cue that is needed to make automatic behaviour available for conscious deliberation. Feedback may also increase motivation to change the target behaviour (Northcraft, Schmidt, & Ashford, 2011): feedback places the target behaviour higher on a hypothetical list of priorities. When given feedback on the number of steps we take, we may prioritise walking over other modes of transportation or other physical activity choices, because the feedback diverts our attention towards this behaviour.
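This thermostat-like loop can be made concrete in a few lines of code. The sketch below is purely illustrative: the function, parameter names, and values are ours, not Carver and Scheier's, but it shows why feedback matters; without the comparison step, no discrepancy is ever noticed and the behaviour stays put.

```python
# Illustrative sketch of Control Theory's discrepancy-reduction loop
# (Carver & Scheier, 1985). All names and parameter values are ours,
# chosen for illustration only.

def feedback_loop(goal, performance, gain=0.3, motivation=1.0, steps=10):
    """Repeatedly compare performance to a behavioural goal and reduce
    the discrepancy, scaled by motivation (0 = none, 1 = high)."""
    trajectory = [performance]
    for _ in range(steps):
        # Feedback supplies this comparison; for habitual behaviour,
        # which runs outside conscious scrutiny, the discrepancy would
        # otherwise go unnoticed and no adjustment would take place.
        discrepancy = goal - performance
        adjustment = gain * motivation * discrepancy
        performance += adjustment
        trajectory.append(performance)
    return trajectory

# Example: a goal of 10,000 steps/day, starting from 4,000.
print(feedback_loop(goal=10_000, performance=4_000))
```

Setting motivation to zero leaves performance flat at its starting value, which mirrors why strong habits resist change: without a cue that triggers conscious comparison, the loop never runs.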
The question is, of course, whether practice follows theory. To find out, we¹ examined the available evidence from the literature, to evaluate whether the current literature provides an answer to the following questions:

• Is feedback through digital technology an effective way to change habitual behaviour?
• Is feedback through digital technology effective for each user in every context, or are there intrapersonal (e.g. character traits, psychological states such as motivation) or interpersonal (contextual or systemic) moderators? What feedback properties are most effective in different circumstances?

To provide further answers to these questions, we then evaluated four existing designs for behavioural change. Inclusion criteria for the designs were: a) the design addresses habitual behaviour, b) the design uses feedback on behavioural performance as its (primary) behaviour change technique, c) the design can be tested in real-life conditions (beyond the lab). To obtain valid results, we only included participants who could reasonably be expected to be motivated to change their behaviour, for instance because they chose to purchase or download the design of their own accord.

The first design we evaluate in this paper is a physical activity tracker that is currently available on the market; the second is a commercially available app that gives feedback on water drinking. The third is an online solution (web-based platform and app), currently available from a public institution, that gives feedback on the nutritional content of meals. The evaluations of these three designs help answer questions about which determinants and design properties enable a design to be effective for which audience. The fourth design is a 'smart' fork that registers eating rate and gives feedback when you eat too fast. This evaluation contributes to answering whether feedback is an effective way to durably change undesired behaviours.

¹ All research projects have been performed together with a range of partners from academia, hence the 'we'.

2 Literature review: The efficacy of feedback technology for habit change

To evaluate current practices and the state of the art, we reviewed the available scientific evidence for the effect of feedback through digital technologies on habitual behaviour. A combined search in a range of scientific and design- and HCI-oriented databases, and auxiliary ancestry searches, yielded a set of 69 original papers (with a total of 72 studies) that matched our inclusion criteria: digital technology that delivers tailored feedback by an external agent to provide information regarding task performance, aimed at automatic (habitual) behaviour, with an analysis of the design's efficacy.

The included studies covered a range of dependent variables, varying from energy consumption to motor skills and physical activity. We thematically classified the target behaviours of the interventions, feedback technology, feedback characteristics (content (feedback sign, comparison, and level of tailoring), timing, modality, frequency, duration, data source), and the availability of visual examples of the design and the feedback provided. For each intervention, the number of participants, independent and dependent variables, analysis method, results, and possible methodological concerns were assessed. A complete overview of the search terms and the analysis of the interventions is available in Hermsen, Frost, Renes, & Kerkhof (2016).

2.1 Feedback disrupts habitual behaviour

Our analysis showed strong evidence for the idea that feedback disrupts habitual behaviour, making it available for conscious scrutiny. 59 of 72 studies show a beneficial effect of feedback on disrupting habitual behaviour. Of the 13 studies that did not find this effect, 4 suffered from a lack of statistical power for the type of analysis performed. Their null finding may very well be due to small sample sizes, since descriptive results in all four studies did point towards a small positive effect of the reported interventions. Where feedback did not lead to disruption of current behaviour, this was sometimes due to misunderstanding of the design's purpose. Other studies showed contrary effects, such as a study on taking breaks at work, where participants used a design providing social activity feedback not to take part in social activities, but to avoid colleagues, or to find empty rooms for meetings (Kirkham et al., 2013).
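Setting these exceptions aside, the 59-of-72 tally is strong. As a rough illustration (our own back-of-the-envelope check, not an analysis from the review, and one that ignores publication bias, study quality, and dependence between studies), a simple binomial test shows how unlikely such a split would be if positive and negative findings were equally probable:

```python
# Back-of-the-envelope check (ours, not the review's analysis): if
# feedback had no disruptive effect and positive/negative findings were
# equally likely (p = 0.5), how surprising would 59 positive results
# out of 72 studies be? Illustrative only.
from scipy.stats import binomtest

result = binomtest(k=59, n=72, p=0.5, alternative="greater")
print(f"p = {result.pvalue:.2e}")  # vanishingly small under the null
```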
However, the current literature does not (yet) provide evidence for lasting effects of this disruption on behaviour. Two causes underlie this: as yet, there has hardly been any research into the lasting effects of this type of feedback on behaviour change; and the research that has been performed so far often suffers from methodological shortcomings. Either the research designs lack statistical power for the type of analysis performed in the study, which leads to a greater chance of false positives and inflated effect sizes, or the research designs had no strategies to deal with demand characteristics ("I have this beautiful design for you, and now I'm going to watch you use it. Does your behaviour change yet?").

2.2 Conclusions from our review

Our review enables us to at least partially answer our first question: yes, feedback from digital technology is able to disrupt undesired habits; but whether this leads to lasting behavioural change remains unclear. To test this, we need research with higher-quality research designs, data gathering, and analysis than is currently common; be it qualitative or quantitative, or action research (such as the different flavours of research-through-design), all of which have their relative merits to add to our knowledge. Furthermore, our review showed that there is hardly any evidence about what moderates sustained use of digital feedback technologies. The question of who uses these technologies, in which circumstances, and to which effect still needs to be answered.

Conclusion I: feedback from digital technology can disrupt habitual behaviour, but evidence for lasting behaviour change is lacking. Challenge: we are in need of better methods for evaluating our designs.

Interestingly, our review shows that the disruptive effect of feedback on undesired habits occurs independently of modality (e.g. visual, auditory, or tactile feedback), timing, frequency, and medium (e.g. mobile phone apps, websites, or wearable devices). This is probably the result of optimisation and iterative user testing in the design phase, which led to choices for feedback modality, timing, frequency, and media that fit the target behaviour and user needs. For instance, in the case of modality, the target behaviour often rules out specific feedback modalities. When driving a car, the visual channel is more often than not occupied by keeping track of traffic. Visual feedback on driving behaviour is therefore more often dangerous than supportive, as anyone who has ever attempted to text while driving will realise. At the dinner table, both the visual and the auditory channel are occupied, and a designed artefact that relies on visual or auditory feedback on eating behaviour needs to deal with the social practices of eating, which, for many people, have an important social function as well. Disrupting this social aspect with feedback messages on eating behaviour can be perceived as rude, and such designs are likely to be abandoned.

Conclusion II: there are no general guidelines for choosing optimal feedback properties. These depend on the design's context of use and the target behaviour.
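Returning to the evaluation challenge in Conclusion I: adequate statistical power is the easiest of these methodological shortcomings to address in advance. A minimal a-priori power check is sketched below; the effect size of d = 0.3 is our assumption for illustration, not a figure from the review.

```python
# Minimal a-priori power check for a two-group comparison, to illustrate
# the sample sizes that "better evaluation" implies. The effect size of
# d = 0.3 is an assumed small-to-medium Cohen's d, not a review result.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.3,  # assumed small-to-medium Cohen's d
    alpha=0.05,       # conventional significance level
    power=0.8,        # conventional target power
)
print(f"needed per group: {n_per_group:.0f} participants")  # ~175
```

With realistically small effects, dozens of participants per condition are simply not enough; underpowered studies are exactly those that produce the false positives and inflated effect sizes noted above.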
3 Design case I: Physical activity tracker

Evidence (Couper et al., 2010; Funk et al., 2010; Donkin et al., 2011; Perski et al., 2016) shows us that lasting engagement with a design is essential for behaviour change. Unfortunately, our literature review revealed that we hardly know anything about which factors (be they states, traits, or context) drive sustained use of our designs, and how this differs between individuals across different contexts. Even the expected uptake of designs for behavioural change is as yet under-researched, let alone how long we can expect people to keep using a design. The only evidence available so far comes from industry whitepapers (Fox & Duggan, 2013; Chen, 2015), which claim abandonment rates of 30–80% in the first weeks, depending on the technology.

3.1 711 Fitbit Zips moving about

To find out more about patterns in who will succeed in using designs that give feedback on behaviour long enough for behaviour change to occur, we performed an explorative study among 711 participants from four urban areas in France. They received an activity tracker (Fitbit Zip) and gave us permission to use their logged activity data for 320 days. They also filled out three web-based questionnaires (at the start, after 98 days, and after 232 days) to measure a range of potential determinants of sustained use: demographic and socioeconomic, psychological, health-related, goal-related, technological, user experience-related, and social predictors. We determined the relative importance of all included determinants for the duration of tracker use using machine learning analysis techniques. A detailed overview of the rationale, method, analysis, and results of this study would go beyond the scope of the current paper, but can be found in Hermsen, Moons, Kerkhof, Wiekens, & De Groot (2017).

3.2 Slower attrition than expected

The data showed a slow exponential decay in physical activity tracking, with 73.9% (526/711) of participants still tracking after 100 days and 16.0% (114/711) still tracking after 320 days. On average, participants used the tracker for 129 days. This decay is exponential, but slower than might be expected from what little literature exists on the topic. The most important reasons to quit tracking were technical issues such as empty batteries and broken or lost trackers (21.5% of all respondents to our third questionnaire, 130/601). Major determinants of tracking duration were age (those under 25 kept up tracking for less long than older participants) and user experience-related factors (those who liked the design and user interface of the Fitbit more and found it easier to use tracked longer than those who liked it less and found it more challenging). Other, smaller determinants were mobile phone type (iPhone users tracked less long than others), household type (single parents less than others), the perceived effect of the Fitbit tracker, and goal-related factors (having 'adjacent' goals such as healthy eating and quitting smoking decreased Fitbit use, compared to 'central' goals such as increasing activity). Interestingly, many determinants had a smaller contribution to sustained use than might be expected from the literature, or no effect at all. Perhaps this means that in real life, determinants such as education, character traits, income, and profession play a much smaller role than in isolated lab conditions.
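These relative-importance rankings were produced with the 'machine learning analysis techniques' mentioned in section 3.1. The sketch below shows one common way to produce such a ranking, a random forest with permutation importance; the study's actual procedure is documented in Hermsen, Moons, Kerkhof, Wiekens, & De Groot (2017), and the file and column names below are illustrative, not the study's variables.

```python
# Sketch of a relative-importance analysis for determinants of tracking
# duration. A random forest with permutation importance is one common
# choice; file and column names are ours, for illustration only.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

df = pd.read_csv("tracker_survey.csv")  # hypothetical questionnaire + log data
X = pd.get_dummies(                     # one-hot encode categorical predictors
    df[["age", "ux_rating", "ease_of_use", "household_type",
        "phone_type", "goal_centrality"]],
    drop_first=True,
)
y = df["days_of_use"]                   # tracking duration per participant

model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=20, random_state=0)
print(pd.Series(imp.importances_mean, index=X.columns)
        .sort_values(ascending=False))
```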
Conclusion III: user experience and the evaluation of the user interface are important determinants of engagement with and sustained use of a design; technical failures are the most important reason for abandonment.

Figure 1: Usage decline of the Fitbits. The horizontal axis shows the number of days since the first day of use. The percentage of participants who used the activity tracker for any number of days after a particular day is indicated with a solid line. The other lines indicate habitual use: the percentage of participants who used the tracker for at least 3, 5, and 7 days in the preceding 7 days.

It may not come as a surprise to designers and design researchers that user experience, aesthetic preferences, and ease of use matter, but many other stakeholders, such as commissioners and the scientific community, are relatively unaware of this importance. The latter tend to put more faith in underlying general working mechanisms and to neglect user experience design (Hermsen, Van der Lugt, Mulder, & Renes, 2016), which may lead to clunky designs.

4 Design case II: An app that gives feedback on water drinking

To shed further light on potential moderators of sustained use of and engagement with designs providing feedback, we performed two smaller studies with mobile apps. In the first study, we adapted an existing mobile app that gives users feedback on their water consumption, and attempted to influence sustained use by manipulating the kind of feedback participants received. In the second study, we interviewed long-term and novice users of a mobile app that gives feedback on the nutritional value of meals.

In the first study, we looked at moderators of sustained use of an app in which participants could register the amount of water they drank. The app then gives feedback on their water consumption. We recruited 538 participants through the online iOS App Store. After downloading and installing the app, all participants completed a questionnaire about their motivation to record and change their water drinking behaviour, their perceived self-efficacy in doing so, and the appropriateness of five potential goals for their app use (Rooksby, Rost, Morrison, & Chalmers, 2014): documentary ('how much water do I drink?'), diagnostic ('does my water drinking have an effect on fatigue?'), behaviour change oriented ('I want to drink more water'), reward-oriented (comparison to others, badges, etcetera), and 'fetishised' (interest in gadgets for their novelty value). We randomly assigned all participants to one of five conditions: the app as-is, negative feedback on behaviour, positive feedback on behaviour, feedback aimed at competition with other app users, and feedback aimed at cooperation with other app users (common goals). Participants recorded their water drinking behaviour using the smartphone app as they saw fit, with no requirements on duration or frequency of use.

The trial lasted for 68 days, but no participant made it that far; 23.8% (128 participants) downloaded the app but never used it. A further 23.6% (127 participants) used the app only a single time. Only 129 users (24%) made it past the first week. These findings are in line with what little evidence exists about the expected duration of use of mobile apps for health. An 80% attrition rate in the first week is quite normal (Chen, 2015), and people are likely to keep downloading different apps until they find one that fits their needs.
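Funnel figures like these are straightforward to compute from raw usage logs. A minimal sketch follows; the log format and the 'past the first week' criterion (logging activity spanning at least seven days) are our assumptions for illustration, not the study's operationalisation.

```python
# Sketch of the retention funnel reported above, computed from raw app
# logs. The log format (one row per logging event, with user_id and
# date) and the week-1 criterion are assumptions for illustration.
import pandas as pd

logs = pd.read_csv("water_app_logs.csv", parse_dates=["date"])  # hypothetical
n_installed = 538  # installs, from the study
per_user = logs.groupby("user_id")["date"].agg(["count", "min", "max"])

never_used = n_installed - len(per_user)                 # installed, no events
used_once = int((per_user["count"] == 1).sum())          # exactly one event
past_week_1 = int(((per_user["max"] - per_user["min"])   # activity span >= 7 days
                   .dt.days >= 7).sum())

print(f"never used:        {never_used / n_installed:.1%}")   # ~23.8%
print(f"used exactly once: {used_once / n_installed:.1%}")    # ~23.6%
print(f"past the 1st week: {past_week_1 / n_installed:.1%}")  # ~24%
```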
All participants were highly motivated to use the app in the first place (µ = 5.38, SD = 1.36 on a seven-point scale), but it took very high motivation, plus the goal of changing one's drinking behaviour, to actually start using the app. Once the app was in use, age (older more than younger), motivation (extremely high more than very high), and having the concurrent 'documentary' goal influenced sustained use. Interestingly, our experimental manipulations did not affect sustained use whatsoever (a full overview of the experimental methods and results is provided in Hermsen & Frost, 2018).

Conclusion IV: a digital feedback design must have a close fit with user needs and goals.

Figure 2: Screens from the water drinking app, and a graphical display of the percentage of users that stopped logging on a particular day.

5 Design case III: An app that gives feedback on the nutritional content of meals

In a second, qualitative study, we interviewed 20 long-term users and 8 novice users of Eetmeter, an app that provides feedback on the nutritional value of your meals. This app has been developed by the Netherlands Nutrition Centre and has a steady following of tens of thousands of active users. Once again, initial motivation to use the app, documenting the nutritional value of participants' current diet, and motivation to change eating behaviour were the main drivers of app use. Social influence and integration with other dietary interventions (diets, consultations with dietary professionals, etcetera) did not appear to have an effect on the use of the app at all (a full overview of the research method and results will be provided in Hermsen & Van Eijl, 2018).

Figure 3: Screens from Eetmeter, an app to determine the nutritional value of your meals.

5.1 Motivation is the key

Our studies indicate that when people are sufficiently motivated to change their behaviour, they need some kind of 'scaffolding' to support and shape their behaviour change attempts. For some people, this takes the form of a mobile health app, even if this involves the rather annoying self-reporting of eating and drinking behaviours that would drive most people away from the app. For others, digital technology that gives feedback on their behaviour is not the kind of motivational and practical support they need to reach their goals; they are better off using other interventions, or designs that combine feedback with other behaviour change techniques.

This finding, that highly motivated people use the designed intervention as a sort of 'scaffolding' to support and shape their behaviour change process, is in line with the literature that sees feedback technology as lived informatics, i.e. the idea that people will actively select those resources that best support the behavioural change they seek, rather than with the literature that follows the idea of 'persuasive technology', i.e. the idea that technology is capable of driving behavioural change by itself (Rooksby et al., 2014). The notion of lived informatics also encompasses the variety of uses and motivations that people have for a design. Some people will see tracking their behaviour as a social, collaborative process and will find more use for designs that encourage relatedness; others track to achieve autonomy and self-determination and will find use for designs that encourage those (Gouveia, Karapanos, & Hassenzahl, 2015; Karapanos, Gouveia, Hassenzahl, & Forlizzi, 2016).

Conclusion V: only those who display extreme motivation to persevere show lasting engagement. People with less than extreme motivation are likely to abandon the intervention before it has a chance to affect behaviour.
Design challenge: how do we engage people who are not already extremely motivated to change their behaviour? This is a severe problem that concerns all designs for behavioural change. When should we opt for 'broader' interventions, with different components for people with a need for relatedness or autonomy, and when should we restrict ourselves to designing for a target group whose goals and styles of use fit our design?

6 Design case IV: A fork that vibrates when you eat too fast

To contribute to the existing knowledge on whether feedback through digital technology can change undesired habits in the long run, we performed a range of studies evaluating the acceptance and efficacy of a design to slow down eating rate. Eating too fast is a deeply ingrained habitual behaviour, strongly associated with stomach disorders and overweight (Robinson et al., 2014), which in itself causes a range of debilitating health issues, such as type II diabetes and some forms of cancer (Berenson, 2011). Because of its deeply automatic nature, eating rate is near impossible to change by will alone. The solution may lie in using a 'smart' fork, equipped with sensors and actuators to provide real-time feedback on eating rate; in other words, the fork vibrates when you eat too fast.

6.1 User experience evaluation

To evaluate the usability and acceptance of this fork, which is available on the market under the name 10sFork, we asked 11 participants to eat a single meal with the fork in our laboratory, and then take the fork home for three days and use it as much as possible. After the laboratory meal and upon returning the fork, we interviewed the participants. The fork proved an acceptable tool: users reported enhanced awareness of their eating rate and felt comfortable using the fork in social settings. However, none of the participants felt the fork was 'for them', even though they did recognise the need to slow down their fast eating. This self-perceived target group membership, and the fork's inability to take meal characteristics into account, may be issues affecting acceptance of the fork as an intervention for healthy eating in real life (full report in Hermsen et al., 2016).

Figure 4: The 10sFork, produced by SlowControl, Paris, France.

6.2 Lab study on effect

To test the effect of the fork on eating rate, we invited 114 self-reported fast eaters to our lab. They were randomly assigned to a feedback condition, in which they received vibrotactile feedback from their fork when eating too fast (i.e., taking more than one bite per 10 seconds), or a non-feedback condition, in which they ate with the fork without feedback. To control for demand characteristics, we told all participants about the importance of eating slowly, and that the fork would record their eating speed. Participants in the feedback condition ate at a slower eating rate and took fewer bites per minute than those without feedback. A slower eating rate, however, did not lead to a significant reduction in the amount of food consumed or the level of satiation. This may have to do with the artificial setting of the meal; uncertainty about norms in a social setting is known to cause people to 'revert' to generally accepted ideas of portion size (Higgs, 2015). Alternatively, a slower eating rate may take more meals to start having an effect on the amount of food we eat (full report in Hermans et al., 2017).
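Both the fork's real-time feedback rule and the 'success ratio' reported in the field study below reduce to simple computations over time-stamped bites. A minimal sketch, with our own function names and example data; only the 10-second threshold comes from the study design:

```python
# Minimal sketch of the 10sFork's feedback rule and the field study's
# success ratio, both derived from time-stamped bites. Function names
# and example data are ours; the 10 s threshold is from the study.
THRESHOLD_S = 10.0

def should_vibrate(prev_bite_t: float, bite_t: float) -> bool:
    """Real-time rule: vibrate when a bite follows the previous one
    within 10 seconds (more than one bite per 10 seconds)."""
    return (bite_t - prev_bite_t) < THRESHOLD_S

def success_ratio(bite_times: list) -> float:
    """Retrospective metric: fraction of bites with at least 10 seconds
    since the preceding bite."""
    gaps = [b - a for a, b in zip(bite_times, bite_times[1:])]
    return sum(g >= THRESHOLD_S for g in gaps) / len(gaps)

meal = [0.0, 6.0, 14.0, 31.0, 38.0, 52.0]  # example bite timestamps (s)
print(should_vibrate(0.0, 6.0))  # True: two bites within 10 s
print(success_ratio(meal))       # 0.4: 2 of 5 inter-bite gaps are >= 10 s
```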
6.3 Field study on real-life use and effect

Finally, we performed a field study to learn more about the effects of using the fork in everyday life. We enlisted 150 participants, all self-reported fast eaters. To make sure all participants were well motivated to change their eating rate, we invited only participants currently under treatment by a dietician for complaints related to their eating rate, such as overweight and stomach complaints. All participants used the fork for one week without feedback, to establish their baseline eating rate. They were then randomly assigned to one of three conditions: eating as many meals as possible with the fork for one month, without feedback; the same, but with vibrotactile feedback; and the same, but with vibrotactile feedback plus access to an online dashboard providing retrospective feedback on eating rate. After this one-month training period, they once again ate with the fork without feedback for a week, to establish the effect of the training. This one-week measurement was repeated two months later.

The study revealed that people who received vibrotactile feedback managed to decelerate their eating, with a lower eating rate and a higher success ratio (the percentage of bites that have at least 10 seconds between them). This effect remained after two months. Even more surprisingly, people in the experimental conditions managed to lose a bit of weight from eating with the fork, whereas people in the control condition remained at the same weight. After two months, this weight loss persisted. This result shows that feedback from digital technology indeed has the potential to change undesired behaviours in the long run. However, the impact on BMI was small. For people to really lose weight, more 'holistic' approaches are needed, in which dietary interventions are combined with physical activity plans and eating behaviour interventions (full report in Hermsen et al., 2018).

Conclusion VI: our findings confirm the conclusions from our literature review, and also show that in certain cases, feedback from digital technology can lead to lasting behaviour change.

These effects, albeit small, give confidence in the potential effect of feedback from digital technology on undesired habits that until recently proved near impossible to change. But new challenges also emerged. Both in our user experience evaluation study and in our lab study, participants did not particularly feel the need to change their eating rate. Even after receiving information about the detrimental long-term effects of eating too fast on our health, they did not feel motivated to slow down. In general, it is very hard to get people to accept the gravity of a problem, and even harder to convince them to accept a solution as being 'for them'. How do we get people to start engaging with our feedback? We have seen previously that motivation to use a design needs to be very high, but people also need to recognise both the problematic behaviour and the severity of its consequences.

Design challenge: how can we design a product or service in such a way that people understand that they are the target group, and in a way that motivates uptake, without discouraging users by scaring them off or triggering cognitive dissonance reduction?

7 Further challenges: measurement and privacy

7.1 Where do the data for the feedback come from?
In the past years, we have seen a steep increase in designs that provide feedback on a range of behaviours. Many of these rely on machine learning principles to signal events that warrant feedback: there are designs that predict influenza (Barlacchi et al., 2017) and depression (Mehrotra, Hendley, & Musolesi, 2016) from human activity patterns, and it is now possible to reliably predict when people who have just quit smoking are in danger of starting again (Naughton et al., 2016). These developments broaden the scope of potential designed solutions that provide us with feedback on our behaviour.

However, many behaviours and human practices are (and will be for quite some time) too complicated to measure. For instance, the automatic analysis of nutrition is not yet feasible, even though the first products that claim to do so have already appeared online (e.g. Fitly, 2017). To obtain feedback on eating behaviour, the user still has to painstakingly provide their own data. This can be expected to have a detrimental effect on sustained use, and may form part of the explanation for why only a small segment of users of the water drinking app made it to the second week. Similarly, it is as yet very hard, if not practically impossible, to reliably detect human emotions. Yet there are many products that claim to do just that (e.g. Sensoree, 2015; Bonte, 2017). This practice of introducing designs to the marketplace before they are technologically feasible is questionable, because it will kindle hope in people in need of such solutions, which will then inevitably lead to disappointment.

Design challenge: automatic generation of data for feedback on behaviour can greatly increase engagement with a design by taking away the frustrating task of self-monitoring. However, this is at the moment only possible for a small range of behaviours. How can we develop ways to measure more, and more complicated, behaviours?

7.2 Where do the feedback data end up?

Machine learning techniques and other forms of automatic measurement of behavioural data have their advantages, but they also come at a cost. The literature has described these forms of self-tracking in Foucauldian terms, where subjects willingly regulate, govern, and optimise themselves (Whitson, 2014). There is indeed a fine line between beneficial self-regulation through feedback and the use of automatically generated behavioural data to subject people to standardisation and regulation. In order to give feedback, most products rely on data analysis that takes place on the vendor's servers, and on visualisation of feedback through online and mobile applications. This process gives rise to concerns about privacy. Who owns the data that is generated by measuring your behaviour? Who guarantees that this data remains within the closed loop of measurement - analysis - feedback - measurement, and does not get passed on to third parties? Is your data accessible to you?

The 10sFork in the vibrating fork project registers each and every bite with a unique time stamp. This data is then used for direct, vibrotactile feedback, and for retrospective visualisations of your eating rate patterns. However, this data set was until recently unavailable to users of the fork. Similarly, many activity trackers will tell you how many steps you have taken and will show you historical trends in your activity and how you compare to others, but the entire data set with every registered step remains unavailable, stored in the vendor's server park for who knows what use.
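One possible direction, sketched below under our own assumptions rather than any existing product's API: keep the raw time-stamped events on the device, give users a complete export of their own data in an open format, and share only the coarse aggregates the feedback service actually needs.

```python
# One possible 'privacy by design' direction, sketched under our own
# assumptions (no real product's API is used): raw time-stamped events
# stay on the device and are exportable by the user; only coarse
# aggregates leave the closed measurement-analysis-feedback loop.
import csv
import statistics

bites = [(1, 0.0), (1, 6.0), (1, 14.0)]  # (meal_id, timestamp_s), on-device

def export_my_data(path):
    """Give users their own raw data in an open, usable format."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["meal_id", "timestamp_s"])
        writer.writerows(bites)

def aggregate_for_feedback_service():
    """Share only per-meal summaries, never individual bites."""
    gaps = [b[1] - a[1] for a, b in zip(bites, bites[1:]) if a[0] == b[0]]
    return {"mean_gap_s": statistics.mean(gaps), "n_bites": len(bites)}

export_my_data("my_bites.csv")
print(aggregate_for_feedback_service())  # {'mean_gap_s': 7.0, 'n_bites': 3}
```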
Design challenge: as yet, there are no best practices for designs that use feedback for behavioural change and satisfactorily address privacy concerns. We need solutions that provide open, usable data to their users while remaining closed off to anybody else. In other words, we need to start designing for privacy, or at least for privacy awareness.

8 Conclusion

This paper aimed to answer two questions: does feedback through digital technology have an effect on undesired habitual behaviour, and which determinants and feedback properties enhance the efficacy of the feedback? We have seen that feedback from digital technology can disrupt the automatic cue-response pair of habitual behaviour, which makes that behaviour available for conscious scrutiny. All this does not necessarily mean that we can expect the technology and the feedback itself to lead to behaviour change. Current evidence allows us to see feedback through digital technology as a vehicle for behaviour change, but not (yet) as a driver (Patel, Asch, & Volpp, 2015). The vibrotactile fork project, however, shows that in specific cases, where people are adequately motivated to choose the design that provides the feedback as their vehicle, feedback on undesired habits can be an effective behaviour change technique.

Our second question, which determinants and feedback properties enhance the efficacy of feedback designs, proved harder to answer. Our research shows that user experience and engagement play an important role. Challenges lie in keeping users engaged long enough for behaviour change to occur, and in guaranteeing users' privacy.

Unfortunately, although the research presented in this paper shows the potential efficacy of designs that provide feedback, we are currently a long way from firmly establishing in which cases feedback through digital technology can sustainably change behaviour, and from finding out what works for whom in what contexts. This state of affairs is not helped by current methodological standards and reporting traditions in design research and HCI, which are insufficient to generate generalisable knowledge about the efficacy of our designs for behavioural change. To get there, we need to overcome one final challenge: improving the way we as design researchers report on our designs. We need to put more effort into evaluating our designs, be it through qualitative or quantitative measurement of their effects, or through more thorough reporting of the design process and its iterations. Only then can we draw more generalisable conclusions about what works, when, and for whom.

Acknowledgements: I would like to thank all the people who worked with me on the projects mentioned in this paper (in alphabetical order): Tim van Eijl, Jeana Frost, Martijn de Groot, Roel Hermans, Suzanne Higgs, Peter Kerkhof, Monica Mars, Jonas Moons, Reint Jan Renes, Eric Robinson, and Carina Wiekens.

9 References

Barlacchi, G., Perentis, C., Mehrotra, A., Musolesi, M., & Lepri, B. (2017). Are you getting sick? Predicting influenza-like symptoms using human mobility behaviors. EPJ Data Science, 6(1). doi:10.1140/epjds/s13688-017-0124-6.

Berenson, G. S. (2011). Health consequences of obesity. Pediatric Blood & Cancer, 58(1), 117–121. doi:10.1002/pbc.23373.

Bonte, S. (2017). Numinous. URL: https://bontestefanie.wixsite.com/numinous. Accessed: 2017-11-08. (Archived by WebCite at http://www.webcitation.org/6up3fDkk0).
Carver, C. S., & Scheier, M. F. (1985). A control-systems approach to the self-regulation of action. In: Kuhl, J. (Ed.), Action Control: From Cognition to Behavior, 237–265. doi:10.1007/978-3-642-69746-3_11.

Chen, A. (2015). New data shows losing 80% of mobile users is normal, and why the best apps do better. Retrieved May 17, 2016, from URL: http://andrewchen.co/new-data-shows-why-losing-80-of-your-mobile-users-is-normal-and-that-the-best-apps-do-much-better/ (Archived by WebCite at http://www.webcitation.org/6hjwoLkyM).

Couper, M. P., Alexander, G. L., Zhang, N., Little, R. J., Maddy, N., Nowak, M. A., … Cole Johnson, C. (2010). Engagement and retention: Measuring breadth and depth of participant use of an online intervention. Journal of Medical Internet Research, 12(4), e52. doi:10.2196/jmir.1430.

Darby, S. (2001). Making it obvious: Designing feedback into energy consumption. In: Energy Efficiency in Household Appliances and Lighting, 685–696. doi:10.1007/978-3-642-56531-1_73.

Donkin, L., Christensen, H., Naismith, S. L., Neal, B., Hickie, I. B., & Glozier, N. (2011). A systematic review of the impact of adherence on the effectiveness of e-therapies. Journal of Medical Internet Research, 13(3), e52. doi:10.2196/jmir.1772.

Fischer, C. (2008). Feedback on household electricity consumption: a tool for saving energy? Energy Efficiency, 1(1), 79–104. doi:10.1007/s12053-008-9009-7.

Fitly. (2017). SmartPlate: Eating just got smarter. URL: https://getsmartplate.com/. Accessed: 2017-11-08. (Archived by WebCite at http://www.webcitation.org/6up2rgYF3).

Fox, S., & Duggan, M. (2013). Tracking for health. URL: http://www.pewinternet.org/2013/01/28/tracking-for-health/. Accessed: 2016-12-15. (Archived by WebCite at http://www.webcitation.org/6mm2N2YSM).

Froehlich, J., Findlater, L., & Landay, J. (2010). The design of eco-feedback technology. Proceedings of the 28th International Conference on Human Factors in Computing Systems - CHI '10. doi:10.1145/1753326.1753629.

Funk, K. L., Stevens, V. J., Appel, L. J., Bauck, A., Brantley, P. J., Champagne, C. M., … Vollmer, W. M. (2010). Associations of internet website use with weight change in a long-term weight loss maintenance program. Journal of Medical Internet Research, 12(3), e29. doi:10.2196/jmir.1504.

Gardner, B. (2014). A review and analysis of the use of "habit" in understanding, predicting and influencing health-related behaviour. Health Psychology Review, 9(3), 277–295. doi:10.1080/17437199.2013.876238.

Gouveia, R., Karapanos, E., & Hassenzahl, M. (2015). How do we engage with activity trackers? Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing - UbiComp '15. doi:10.1145/2750858.2804290.

Gouveia, R., Pereira, F., Karapanos, E., Munson, S. A., & Hassenzahl, M. (2016). Exploring the design space of glanceable feedback for physical activity trackers. Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing - UbiComp '16. doi:10.1145/2971648.2971754.

Hänsel, K., Wilde, N., Haddadi, H., & Alomainy, A. (2015). Wearable computing for health and fitness: Exploring the relationship between data and human behaviour. arXiv preprint arXiv:1509.05238.

Hermans, R. C. J.*, Hermsen, S.*, Robinson, E., Higgs, S., Mars, M., & Frost, J. H. (2017). The effect of real-time vibrotactile feedback delivered through an augmented fork on eating rate, satiation, and food intake. Appetite, 113, 7–13. doi:10.1016/j.appet.2017.02.014. (* shared first authorship)

Hermsen, S., & Frost, J. (2018). Lessons from a failed attempt at increasing sustained use of a mobile app providing digital feedback on water drinking. Extended abstract presented at the Etmaal voor de Communicatiewetenschap 2018, Ghent, Belgium. Ghent, Belgium: NeFCA. doi:10.17605/OSF.IO/9VC42.

Hermsen, S., Frost, J., Renes, R. J., & Kerkhof, P. (2016). Using feedback through digital technology to disrupt and change habitual behavior: A critical review of current literature. Computers in Human Behavior, 57, 61–74. doi:10.1016/j.chb.2015.12.023.

Hermsen, S., Frost, J. H., Robinson, E., Higgs, S., Mars, M., & Hermans, R. C. J. (2016). Evaluation of a smart fork to decelerate eating rate. Journal of the Academy of Nutrition and Dietetics, 116(7), 1066–1068. doi:10.1016/j.jand.2015.11.004.

Hermsen, S., Mars, M., Higgs, S., Robinson, E., Frost, J. H., & Hermans, R. C. J. (2018). Effects of a technology-based intervention to decelerate eating rate on eating rate and BMI: a randomized controlled trial. Manuscript under review.

Hermsen, S., Moons, J., Kerkhof, P., Wiekens, C., & De Groot, M. (2017). Determinants for sustained use of an activity tracker: Observational study. JMIR mHealth and uHealth, 5(10), e164. doi:10.2196/mhealth.7311.

Hermsen, S., Van der Lugt, R., Mulder, S., & Renes, R. J. (2016). How I learned to appreciate our tame social scientist: experiences in integrating design research and the behavioural sciences. In: P. Lloyd & E. Bohemia (Eds.), 2016 Design Research Society 50th Anniversary Conference, 4, 1375–1389. doi:10.21606/drs.2016.17.

Hermsen, S., & Van Eijl, T. (2018). Determinants for sustained use of an app that provides feedback on the nutritional value of meals: Qualitative study. Unpublished manuscript.

Higgs, S. (2015). Social norms and their influence on eating behaviours. Appetite, 86, 38–44. doi:10.1016/j.appet.2014.10.021.

Karapanos, E., Gouveia, R., Hassenzahl, M., & Forlizzi, J. (2016). Wellbeing in the making: Peoples' experiences with wearable activity trackers. Psychology of Well-Being, 6(1). doi:10.1186/s13612-016-0042-6.

Kim, J. Y., Lee, K. H., Kim, S. H., Kim, K. H., Kim, J. H., Han, J. S., … Bae, W. K. (2013). Needs analysis and development of a tailored mobile message program linked with electronic health records for weight reduction. International Journal of Medical Informatics, 82(11), 1123–1132. doi:10.1016/j.ijmedinf.2013.08.004.

Kirkham, R., Ploetz, T., Mellor, S., Green, D., Lin, J.-S., Ladha, K., … Wright, P. (2013). The break-time barometer. Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing - UbiComp '13. doi:10.1145/2493432.2493468.

Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254–284. doi:10.1037/0033-2909.119.2.254.

Kooiman, T. J. M., Dontje, M. L., Sprenger, S. R., Krijnen, W. P., van der Schans, C. P., & de Groot, M. (2015). Reliability and validity of ten consumer activity trackers. BMC Sports Science, Medicine and Rehabilitation, 7(1). doi:10.1186/s13102-015-0018-5.

Li, I., Dey, A., & Forlizzi, J. (2010). A stage-based model of personal informatics systems. Proceedings of the 28th International Conference on Human Factors in Computing Systems - CHI '10. doi:10.1145/1753326.1753409.

Lichtman, S. W., Pisarska, K., Berman, E. R., Pestone, M., Dowling, H., Offenbacher, E., … Heymsfield, S. B. (1992). Discrepancy between self-reported and actual caloric intake and exercise in obese subjects. New England Journal of Medicine, 327(27), 1893–1898. doi:10.1056/nejm199212313272701.

Ludden, G. D. S. (2013). Designing feedback. Multimodality and specificity. Paper presented at IASDR 2013, Tokyo, August 25th–30th.

Mehrotra, A., Hendley, R., & Musolesi, M. (2016). Towards multi-mod...