Privacy and human–machine communication

Christoph Lutz

Introduction

In recent years, smart and interactive technologies such as chatbots, mobile virtual assistants, social robots and smart toys have become increasingly prevalent. These technologies are used not primarily as media through which communication takes place but as communicative agents themselves. Human–machine communication (HMC), as a field within media and communication research interested in the "creation of meaning among humans and machines" (Guzman, 2018, p. 1), has enhanced our understanding of such technologies (Fortunati & Edwards, 2020; Gambino et al., 2020; Guzman, 2020; Rodríguez-Hidalgo, 2020). A key concern in HMC is privacy. Privacy vulnerabilities of smart speakers, for example, have received substantial media attention (Lutz & Newlands, 2021) and have been investigated from a communication and media perspective (Brause & Blank, 2020; Liao et al., 2019). Similarly, the introduction of AI-based technology, such as facial recognition software, has sparked fears about the curtailment of civil liberties through increased dataveillance (Stark et al., 2020), especially in the extraordinary situation created by the Covid-19 pandemic (Hargittai et al., 2020; Newlands et al., 2020; Vitak & Zimmer, 2020).

In this contribution, I provide an overview of privacy as a key topic for HMC theory and research. The chapter is intended as an accessible introduction rather than a comprehensive summary. The goal is to give readers a grasp of theoretical debates and empirical findings, enabling them to develop their own research questions and study designs. Given that HMC research is still nascent, I will draw on scholarly literature beyond the confines of the field and consult scholarship from adjacent disciplines such as sociology, information systems, ethics, and law where appropriate.

The chapter is divided into four sections. After the introduction, I will define and discuss privacy in general.
This section contains an overview of key privacy theories that are fruitful in HMC research. In the third section, I will look at privacy and HMC more specifically. I will discuss three smart and interactive technologies that have received attention in HMC research: social robots, smart speakers, and chatbots. For each of these three technologies, I discuss the privacy implications, review HMC research and provide directions for future research. In the final section, I conclude the chapter by synthesizing the main points and showing some implications for privacy-oriented HMC scholarship.

Privacy in General

Privacy has become an important topic in communication research, especially with the spread of information and communication technologies (ICT) that harvest data on a large scale, such as e-commerce, social media, and smartphones. However, privacy scholarship draws on contributions that go further back and span fields such as law, psychology and sociology (Altman, 1975; Warren & Brandeis, 1890; Westin, 1967). Today, privacy is a highly interdisciplinary topic (Bräunlich et al., 2021). However, the multitude of perspectives complicates a common understanding. Privacy has been understood in many ways, including bodily autonomy, control over personal data, solitude at home, freedom of thought, and protection of one's reputation (Solove, 2008). Generally, privacy scholarship considers both "freedom from" aspects, which describe whether someone is let alone, and "freedom to" aspects, which center on self-development (Koops et al., 2017). Privacy also has to be understood in a dimensional, zonal and relational way (Aeschlimann et al., 2015; Koops et al., 2017; Marwick & boyd, 2014). However, in recent years, informational privacy has emerged as the key type across disciplines, not least because of the growing importance of ICTs and legal frameworks that situate privacy predominantly as a data protection issue (Smith et al., 2011).
This informational understanding is implicit in the widely used definition by Westin (1967, p. 7) of privacy as "the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others." Here, I adopt a broader perspective and draw on a multi-dimensional framework that goes beyond informational privacy. Such a multi-dimensional understanding is particularly pressing because the communicative affordances of smart and interactive technologies (e.g., portability, availability, locatability, multimediality; Schrock, 2015) call for the consideration of other privacy types.

Burgoon's (1982) differentiation of informational, social, psychological, and physical privacy proves useful (Lutz et al., 2019), not least because it originated in communication research. Physical privacy is understood as "the degree to which one is physically inaccessible to others" (Burgoon, 1982, p. 211). It includes spatial considerations. Social privacy describes the communicational dynamics of privacy and has a relational component tied to interpersonal boundary management, including aspects such as intimacy and protection. Psychological privacy is about cognitive and affective inputs and their control. This form of privacy thus falls within the "freedom to" types of privacy, rather than the "freedom from" (Koops et al., 2017), stressing its agentic role for personal growth. Finally, informational privacy describes the extent to which a person's information is protected. Thus, privacy should be understood in a physical, social, psychological and informational sense.

In addition to this multi-dimensional understanding, privacy as contextual integrity (Nissenbaum, 2010), privacy as communication and boundary management (Petronio, 2002), and networked privacy (Marwick & boyd, 2014) have emerged as fruitful and widely used theories in communication research, particularly when it comes to digital technologies.
Table 35.1 provides an overview of these three theories. All of them share a relational understanding of privacy that calls for overcoming individualistic and largely control-based views of privacy (Marwick & boyd, 2014).

Table 35.1. Overview of important privacy theories in communication research (selection)

Privacy as contextual integrity (Nissenbaum, 2010)
Summary and key concepts: Privacy is the preservation of context-specific information norms. Contexts are understood as institutional and social areas of life (e.g., family, healthcare, politics). Context-specific information norms are described by five parameters: sender, subject, recipient, information type, and transmission principle.
Potential application in HMC: Investigating how aspects of communication between humans and interactive machines (e.g., social presence, anthropomorphization) affect information norms in different contexts. Modulating the five parameters to study the acceptability of HMC scenarios.

Communication privacy management (Petronio, 2002)
Summary and key concepts: People balance disclosure and withdrawal in a dialectic process. They set up privacy boundaries that describe which information is considered private or public. Boundary management includes boundary rule formation, boundary coordination and boundary turbulence.
Potential application in HMC: Investigating privacy rules between humans and machines. Identifying potential differences in boundary management between HMC and interpersonal communication. Investigating privacy turbulence, for example through glitches and malfunctions.

Networked privacy (Marwick & boyd, 2014)
Summary and key concepts: Practice-based approach that looks at privacy practices in networked publics. Networked publics are characterized by audiences, technical mechanisms and social norms. Shift from an individualistic frame of privacy to a networked frame. Context collapse as an important privacy challenge. Agency entails not only control of information but also understanding and shaping of the context.
Potential application in HMC: Studying which new privacy strategies emerge when individuals are confronted with networked publics that increasingly include machines in addition to people. Privacy literacy and self-efficacy as emerging constructs.

Not only from a theoretical but also from an empirical angle, privacy has become a key topic in media and communication research. Zhang and Leung (2015), for example, identified privacy as an important topic in communication research about social network sites, with about 13 percent of the analyzed articles being about privacy. In terms of outlets, privacy research in communication and media studies is published in a wide range of journals, including the Journal of Communication (e.g., Baruh et al., 2017), Human Communication Research (e.g., Sundar & Marathe, 2010), Social Media + Society (e.g., Trepte et al., 2017), the Journal of Computer-Mediated Communication (e.g., Dienlin & Metzger, 2016) and Computers in Human Behavior (e.g., Maltseva & Lutz, 2018).

Privacy and HMC

HMC goes beyond privacy literature on more established digital technologies such as social media. Such privacy research has often adopted a computer-mediated communication (CMC) perspective, where privacy relations between users themselves or between users and services/platforms are investigated. Technology plays a mainly mediating role in such research by affecting, through affordances, how privacy unfolds (e.g., how privacy-invasive a given platform or context is perceived to be). At the same time, CMC privacy research tends to situate privacy online, focusing on the informational dimension but neglecting the physical and spatial embeddedness of technologies. By contrast, HMC considers emerging technologies as actants and interlocutors within privacy networks, rather than media (Lutz & Tamò, 2018; Peter & Kühne, 2018).
By foregrounding the (inter)active role of technologies and the relationships users develop with such technologies, HMC introduces novel privacy questions and addresses the four privacy dimensions by Burgoon (1982) more holistically. Particularly the embodiment, portability, social presence, interactivity, and tendency to anthropomorphize social robots, smart speakers, smart toys, and similar technologies have made privacy a key issue of HMC research (Lutz et al., 2019).

In the following, I will discuss research on privacy with smart and interactive technologies from an HMC perspective (see Table 35.1, last column, for how HMC research can adopt the three privacy theories introduced earlier). More specifically, I differentiate three technologies that have received increasing attention from HMC scholars who focus on privacy: social robots, smart speakers, and chatbots. While each of these three technologies is internally heterogeneous, the degree of social presence, mobility and situatedness in space generally decreases going from social robots, to smart speakers, to chatbots.

Social robots, privacy and HMC

Privacy implications

Calo (2010) differentiates three privacy implications of robots: increased access, direct surveillance and social meaning. Increased access describes the capacity of robots to enter protected areas such as bedrooms and bathrooms. Direct surveillance risks stem from the technological sophistication of robots, including processors and sensors that allow for in-depth monitoring, for example in law enforcement and military contexts. Finally, social meaning refers to the design of robots that triggers interaction and facilitates the establishment of relationships, leading to the disclosure of potentially compromising information.
The last privacy implication, social meaning, is particularly interesting for HMC scholarship since it has the strongest connection to the HMC agenda, where the emergence of meaning between humans and machines is of core interest.

Empirical research

While Calo (2010) provided an influential contribution on (social) robots and privacy, his work was not situated within HMC or communication more broadly. Since then, however, several contributions have furthered our understanding of privacy in the context of social robots, also from a communication perspective. Lutz et al. (2019) provide a systematic review of research on social robots and privacy, acknowledging HMC as a particularly fruitful perspective. Their scoping review draws on Burgoon's (1982) privacy typology and notes a general increase in research interest in recent years. However, empirical insights are still scarce. The research on privacy and social robots is highly interdisciplinary, with law being the discipline with the most contributions, followed by computer science and the medical sciences. Only one article from media and communication was found.1 A clear majority of the analyzed articles are conceptual, and relatively small sample sizes and specific interaction contexts make it difficult to draw generalizable conclusions. The review also identifies privacy as contextual integrity and fair information practices as important conceptual underpinnings. A dominance of informational privacy, rather than physical, social, and psychological privacy, was further noted.

As an example of an empirical study, Krupp and colleagues (2017) used focus groups to identify salient privacy concerns in the context of social robots. They found that informational concerns were discussed most intensively (106 code occurrences). However, physical concerns also received much attention (60 occurrences). Social and psychological privacy, by contrast, received far less attention (both 16 occurrences).
In addition, the study found novel categories that can be understood in privacy terms, for example marketing and theft. Lutz and Tamò-Larrieux (2020), situated within HMC research, investigated the prevalence of three privacy concern types in social robots. Informational privacy concerns about the robot manufacturer were most pronounced, followed by informational concerns about other users (e.g., stalking through the robot), and physical privacy concerns (e.g., the robot entering areas that it should not access). Thus, the robot as a social actor seems to be perceived as less risky for privacy than the robot as a medium and the actions of its creator. A structural equation model provided evidence for a robot privacy paradox, in analogy to the privacy paradox more generally (Kokolakis, 2017).

1 However, since the analysis was conducted (summer 2017), an uptick in interest in social robots in media and communication research can be noted, strongly driven by HMC scholars (e.g., A. Edwards et al., 2019; Lee & Liang, 2019; Peter & Kühne, 2018; Rodríguez-Hidalgo, 2020).

Future research directions

HMC research on social robots and privacy has strong connections to human–robot interaction (HRI) research, where privacy has become an increasingly important research theme in recent years (Horstmann et al., 2020; Rueben et al., 2017, 2018). The two fields are increasingly speaking to each other (C. Edwards et al., 2019). Future privacy research from an HMC angle could thus consolidate the findings from HRI and integrate the aforementioned theories (Table 35.1) into the study of social robots. The communicative affordances of social robots (Rodríguez-Hidalgo, 2020) and their interplay with privacy perceptions and behavior are also a promising topic for HMC.

Smart speakers, privacy, and HMC

Privacy implications

Compared to social robots, smart speakers such as Amazon Echo and Google Home devices have less mobility but still some physical presence.
These technologies have enjoyed great popularity and are distinctly conversational, as they function via speech recognition. Privacy implications include the connectedness of smart speakers to the cloud and their embeddedness in a platform ecosystem. Amazon and Google can use the data collected via smart speakers to further their core business activities (i.e., more sophisticated targeted advertising in the case of Google; integration with its shopping platform in the case of Amazon) and enhance their market position (Pridmore et al., 2019). Another issue is bystander privacy, where individuals in the vicinity of the smart speaker are inadvertently recorded (Ahmad et al., 2020). This is a particular issue for children, who have special data protection status.

Empirical research

A number of empirical studies have investigated privacy perceptions in the context of smart speakers (Apthorpe et al., 2018; Liao et al., 2019; Lutz & Newlands, 2021; Malkin et al., 2019; Pridmore et al., 2019; Zheng et al., 2018). The evidence from these studies suggests that users have relatively low privacy concerns and limited privacy protection behavior, but the concerns vary depending on the information recipient (Apthorpe et al., 2018; Lutz & Newlands, 2021) and the culture (US vs. Netherlands; Liao et al., 2019). Privacy acts as a barrier to adoption among non-users. Qualitative studies have focused on domestication and shown how users integrate smart speakers into their daily lives (Brause & Blank, 2020), where the technology "renegotiates the boundaries between the private home and outside world" (p. 758). Spatial affordances such as ubiquity, link-ability and control-ability emerged as salient themes, pointing to potential privacy implications where the home becomes an increasingly commodified sphere (rather than a sanctuary). Similar concerns were raised about home-cleaning robots such as the Roomba (Astor, 2017).
Future research directions

Research on smart speakers and privacy is still in its infancy, and many fruitful research questions for HMC research exist. The CASA paradigm (Gambino et al., 2020) can be used to study users' conversations with smart speakers in a controlled setting (e.g., a lab). Researchers could develop specific apps or skills for behavioral experiments that study self-disclosure and compare participants' engagement with a smart speaker to that with a human interlocutor. Longitudinal and ethnographic research could look into the lifecycle of smart speaker use in terms of privacy. Do concealment practices become less prevalent with the duration of use? Is the data systematically wiped when a smart speaker is abandoned? Finally, the interplay of overtrust and privacy is a topic that merits attention (Aroyo et al., 2021). Do users overestimate the intelligence of smart speakers, and does this affect their privacy concerns and behavior?

Chatbots, privacy, and HMC

Privacy implications

Compared to social robots and smart speakers, chatbots are disembodied. They have been increasingly used in customer service and are a key application of AI that many non-experts might actively interact with (Brandtzæg & Følstad, 2018). Compared with social robots and smart speakers, chatbots are used more sporadically and instrumentally. The privacy implications are therefore narrower and more focused on the informational dimension. For virtual mobile agents (e.g., Apple Siri) that rely on voice control rather than textual interaction, the privacy implications tend to be more similar to those of smart speakers. Since chatbots are often operated by humans and AI together and it is not always clear whether a human or an AI is on the other end, the ontological opacity of communication can present novel ethical challenges (Guzman, 2020; Guzman & Lewis, 2020), including privacy challenges.
Empirical research

HMC research on privacy and chatbots is even scarcer than HMC research on privacy and social robots or smart speakers. Ischen et al. (2019) provide one of the few studies on the topic. Drawing on HMC, they conducted an experiment on how the type of technology (human-like chatbot vs. machine-like chatbot vs. website) affects information disclosure, attitudes, and recommendation adherence. The study also considers the mediating role of privacy concerns and mindless anthropomorphism, finding some evidence for CASA, where the human-like chatbot – and the website – scored higher on anthropomorphism than the machine-like chatbot. Privacy concerns affected information disclosure and recommendation adherence negatively. Mediation effects were identified, such that "a human-like chatbot […] is higher in perceived anthropomorphism, leading to less privacy concerns and subsequently, more comfort with disclosure and information adherence." The analysis shows the relevance of including design-based characteristics and their perception, for example in the form of anthropomorphism. Another study, which is not inspired by HMC, explored the use of chatbots for communicating privacy policies and thus enabling more effective notice and consent (Harkous et al., 2016).

Future research directions

Contextual integrity theory can be fruitfully applied to the study of privacy and chatbots. Experiments could modify the five parameters mentioned above (sender, subject, recipient, information type, transmission principle) and assess participants' acceptability of the different configurations. In particular, the transmission principle should be investigated further. Moreover, chatbots could be compared with social robots on privacy-related aspects to test how the embodiment of the technology affects privacy concerns and behavior.
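As a minimal sketch of how such a contextual-integrity experiment could be set up, the script below generates one vignette text per combination of the five parameters, which could then be rated for acceptability by participants. All concrete parameter values (the specific senders, recipients, and so on) are hypothetical illustrations, not drawn from any of the cited studies; a real design would derive them from pilot work and likely sample a fraction of the full factorial.

```python
from itertools import product

# The five contextual-integrity parameters (Nissenbaum, 2010).
# All concrete values below are hypothetical examples for illustration.
parameters = {
    "sender": ["user", "bystander"],
    "subject": ["health information", "shopping habits"],
    "recipient": ["chatbot provider", "third-party advertiser"],
    "information_type": ["voice recording", "chat transcript"],
    "transmission_principle": ["with explicit consent", "without notice"],
}

def build_vignettes(params):
    """Generate one vignette sentence per combination of parameter values."""
    names = list(params)
    vignettes = []
    for combo in product(*params.values()):
        scenario = dict(zip(names, combo))
        vignettes.append(
            "A {sender} shares {subject} as a {information_type}, "
            "which is passed on to the {recipient} {transmission_principle}."
            .format(**scenario)
        )
    return vignettes

vignettes = build_vignettes(parameters)
print(len(vignettes))  # 2^5 = 32 scenario combinations
print(vignettes[0])
```

With two levels per parameter this yields a 2x2x2x2x2 factorial of 32 scenarios; holding four parameters constant and varying only the transmission principle would isolate the parameter that, as argued above, merits the closest investigation.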
Conclusion

The rise of smart and interactive technologies has been accompanied by the establishment of HMC as an independent discipline within media and communication research (see contributions in this volume). In this chapter, I have shown how HMC has a lot to say about privacy and how it enhances our understanding beyond CMC. Conceptually, the fact that humans engage with smart and interactive technologies as interlocutors, rather than media, opens up additional privacy implications, especially in physical and social terms. As for the latter, social meaning has been discussed as a privacy implication of robots (Calo, 2010), and the creation of meaning equally has a central role within HMC. I have also shown that privacy theories that were developed with human communication in mind (privacy as contextual integrity, communication privacy management, networked privacy) can be fruitfully applied to HMC.

Emerging empirical research on privacy in HMC has shown differentiated privacy attitudes and behaviors that are tied to interaction with smart and interactive technologies (e.g., anthropomorphism, social presence). Humans seem to have fewer privacy concerns about smart and emerging technologies as social actors themselves but worry more about their instrumental use by other actors (Lutz & Tamò-Larrieux, 2020; Lutz & Newlands, 2021). Given recent findings on the domestication of smart speakers (Brause & Blank, 2020) and the tendency to apply interaction scripts from interpersonal communication to smart and interactive technologies, privacy risks are certainly elevated. A mix of legal (regulation), technological (security, privacy-by-design) and social (user-friendly privacy controls) protection is needed to address these risks. HMC research should continue to study privacy in the context of smart and interactive technologies.
It can integrate widely used theories in HMC (e.g., CASA; Gambino et al., 2020) and their concepts (e.g., mindlessness, anthropomorphism, uncanniness) into established privacy frameworks such as the privacy calculus. Methodologically, observational approaches, where individuals interact with machines in natural settings (e.g., at home), are particularly fruitful, and qualitative and ethnographic research shows great potential. Such observational studies enhance our understanding of privacy practices and could be followed up with or paired with quantitative studies (e.g., surveys) that allow generalizations about privacy attitudes and behaviors. Experimental methods have a long tradition in HRI research and should be used by HMC scholars who study privacy. In the absence of specifically configured robots, smart speakers, or chatbots for lab experiments (e.g., due to budget constraints, social distancing requirements in the wake of Covid-19 or ethical concerns), experimental vignette studies can be used, where users are confronted with scenarios or videos of smart and interactive technologies (e.g., Lutz & Tamò-Larrieux, 2021). HMC privacy research could also adopt critical discourse analysis, case studies, algorithm audits and socio-technical walkthroughs. Such methods bring to light power-related aspects and allow for a more contextualized understanding of the societal embeddedness of smart and interactive technologies.

References

Aeschlimann, L., Harasgama, R., Kehr, F., Lutz, C., Milanova, V., Müller, S., ... & Tamò-Larrieux, A. (2015). Re-setting the stage for privacy: A multi-layered privacy interaction framework and its application. In S. Brändli, R. Harasgama, R. Schister, & A. Tamò (Eds.), Mensch und Maschine – Symbiose oder Parasitismus (pp. 1–41). Stämpfli.

Altman, I. (1975). The environment and social behavior: Privacy, personal space, territory, crowding. Wadsworth Publishing Company.
Apthorpe, N., Shvartzshnaider, Y., Mathur, A., Reisman, D., & Feamster, N. (2018). Discovering smart home internet of things privacy norms using contextual integrity. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2(2), 1–23.

Aroyo, A. M., De Bruyne, J., Dheu, O., Fosch-Villaronga, E., Gudkov, A., Hoch, H., ... & Tamò-Larrieux, A. (2021). Overtrusting robots: Setting a research agenda to mitigate overtrust in automation. Paladyn, Journal of Behavioral Robotics, 12(1), 423–436.

Astor, M. (2017). Your Roomba may be mapping your home, collecting data that could be shared. New York Times, July 25. www.nytimes.com/2017/07/25/technology/roomba-irobot-data-privacy.html

Baruh, L., Secinti, E., & Cemalcilar, Z. (2017). Online privacy concerns and privacy management: A meta-analytical review. Journal of Communication, 67(1), 26–53.

Brandtzaeg, P. B., & Følstad, A. (2018). Chatbots: Changing user needs and motivations. Interactions, 25(5), 38–43.

Brause, S. R., & Blank, G. (2020). Externalized domestication: Smart speaker assistants, networks and domestication theory. Information, Communication & Society, 23(5), 751–763.

Bräunlich, K., Dienlin, T., Eichenhofer, J., Helm, P., Trepte, S., Grimm, R., Seubert, S., & Gusy, C. (2021). Linking loose ends: An interdisciplinary privacy and communication model. New Media & Society, 23(6), 1443–1464.

Burgoon, J. K. (1982). Privacy and communication. Annals of the International Communication Association, 6(1), 206–249.

Calo, R. (2010). Robots and privacy. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics (pp. 187–201). MIT Press.

Dienlin, T., & Metzger, M. J. (2016). An extended privacy calculus model for SNSs: Analyzing self-disclosure and self-withdrawal in a representative US sample. Journal of Computer-Mediated Communication, 21(5), 368–383.

Edwards, C., Edwards, A., Kim, J., Spence, P. R., de Graaf, M., Nah, S., & Rosenthal-von der Pütten, A. (2019). Human-machine communication: What does/could communication science contribute to HRI? In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 673–674). IEEE.

Edwards, A., Edwards, C., Westerman, D., & Spence, P. R. (2019). Initial expectations, interactions, and beyond with social robots. Computers in Human Behavior, 90, 308–314.

Feiner, L. (2019). Apple's smart speaker is struggling against rivals from Amazon and Google. CNBC, February 5. www.cnbc.com/2019/02/05/apple-homepod-smart-speaker-market-share.html

Fortunati, L., & Edwards, A. P. (2020). Opening space for theoretical, methodological, and empirical issues in human-machine communication. Human-Machine Communication, 1(1), 7–18.

Gambino, A., Fox, J., & Ratan, R. A. (2020). Building a stronger CASA: Extending the computers are social actors paradigm. Human-Machine Communication, 1, 71–85.

Guzman, A. L. (2018). Introduction: What is human-machine communication, anyway? In A. Guzman (Ed.), Human-machine communication: Rethinking communication, technology, and ourselves (pp. 1–28). Peter Lang.

Guzman, A. L. (2020). Ontological boundaries between humans and computers and the implications for human-machine communication. Human-Machine Communication, 1, 37–54.

Guzman, A. L., & Lewis, S. C. (2020). Artificial intelligence and communication: A Human–Machine Communication research agenda. New Media & Society, 22(1), 70–86.

Hargittai, E., Redmiles, E. M., Vitak, J., & Zimmer, M. (2020). Americans' willingness to adopt a COVID-19 tracking app. First Monday, 25(11). https://doi.org/10.5210/fm.v25i11.11095

Harkous, H., Fawaz, K., Shin, K. G., & Aberer, K. (2016). PriBots: Conversational privacy with chatbots. In Twelfth Symposium on Usable Privacy and Security (SOUPS 2016).

Horstmann, B., Diekmann, N., Buschmeier, H., & Hassan, T. (2020). Towards designing privacy-compliant social robots for use in private households: A use case based identification of privacy implications and potential technical measures for mitigation. In 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) (pp. 869–876). IEEE.

Ischen, C., Araujo, T., Voorveld, H., van Noort, G., & Smit, E. (2019, November). Privacy concerns in chatbot interactions. In International Workshop on Chatbot Research and Design (pp. 34–48). Springer.

Kokolakis, S. (2017). Privacy attitudes and privacy behaviour: A review of current research on the privacy paradox phenomenon. Computers & Security, 64, 122–134.

Koops, B. J., Newell, B. C., Timan, T., Skorvanek, I., Chokrevski, T., & Galic, M. (2017). A typology of privacy. University of Pennsylvania Journal of International Law, 38, 483–575.

Krupp, M. M., Rueben, M., Grimm, C. M., & Smart, W. D. (2017, August). A focus group study of privacy concerns about telepresence robots. In 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 1451–1458). IEEE.

Lee, S. A., & Liang, Y. J. (2019). Robotic foot-in-the-door: Using sequential-request persuasive strategies in human-robot interaction. Computers in Human Behavior, 90, 351–356.

Liao, Y., Vitak, J., Kumar, P., Zimmer, M., & Kritikos, K. (2019). Understanding the role of privacy and trust in intelligent personal assistant adoption. In International Conference on Information (pp. 102–113). Springer.

Lutz, C., & Newlands, G. (2021). Privacy and smart speakers: A multi-dimensional approach. The Information Society, 37(3), 147–162.

Lutz, C., Schöttler, M., & Hoffmann, C. P. (2019). The privacy implications of social robots: Scoping review and expert interviews. Mobile Media & Communication, 7(3), 412–434.

Lutz, C., & Tamò, A. (2018). Communicating with robots: ANTalyzing the interaction between healthcare robots and humans with regards to privacy. In A. Guzman (Ed.), Human-machine communication: Rethinking communication, technology, and ourselves (pp. 145–165). Peter Lang.

Lutz, C., & Tamò-Larrieux, A. (2020). The robot privacy paradox: Understanding how privacy concerns shape intentions to use social robots. Human-Machine Communication, 1, 87–111.

Lutz, C., & Tamò-Larrieux, A. (2021). Do privacy concerns about social robots affect use intentions? Evidence from an experimental vignette study. Frontiers in Robotics and AI, 8, 63.

Malkin, N., Deatrick, J., Tong, A., Wijesekera, P., Egelman, S., & Wagner, D. (2019). Privacy attitudes of smart speaker users. Proceedings on Privacy Enhancing Technologies, 2019(4), 250–271.

Maltseva, K., & Lutz, C. (2018). A quantum of self: A study of self-quantification and self-disclosure. Computers in Human Behavior, 81, 102–114.

Marwick, A. E., & boyd, d. (2014). Networked privacy: How teenagers negotiate context in social media. New Media & Society, 16(7), 1051–1067.

Newlands, G., Lutz, C., Tamò-Larrieux, A., Villaronga, E. F., Harasgama, R., & Scheitlin, G. (2020). Innovation under pressure: Implications for data privacy during the Covid-19 pandemic. Big Data & Society, 7(2), 1–14.

Nissenbaum, H. (2010). Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.

Peter, J., & Kühne, R. (2018). The new frontier in communication research: Why we should study social robots. Media and Communication, 6(3), 73–76.

Petronio, S. (2002). Boundaries of privacy: Dialectics of disclosure. State University of New York Press.

Pridmore, J., & Mols, A. (2020). Personal choices and situated data: Privacy negotiations and the acceptance of household intelligent personal assistants. Big Data & Society, 7(1), 1–12.

Pridmore, J., Zimmer, M., Vitak, J., Mols, A., Trottier, D., Kumar, P. C., & Liao, Y. (2019). Intelligent personal assistants and the intercultural negotiations of dataveillance in platformed households. Surveillance & Society, 17(1/2), 125–131.
Rodríguez-Hidalgo, C. (2020). Me and my robot smiled at one another: The process of socially enacted communicative affordance in human-machine communication. Human-Machine Communication, 1, 55–69.

Rueben, M., Aroyo, A. M., Lutz, C., Schmölz, J., Van Cleynenbreugel, P., Corti, A., ... & Smart, W. D. (2018). Themes and research directions in privacy-sensitive robotics. In 2018 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO) (pp. 77–84). IEEE.

Rueben, M., Grimm, C. M., Bernieri, F. J., & Smart, W. D. (2017). A taxonomy of privacy constructs for privacy-sensitive robotics. arXiv preprint arXiv:1701.00841.

Schrock, A. R. (2015). Communicative affordances of mobile media: Portability, availability, locatability, and multimediality. International Journal of Communication, 9, 1229–1246.

Smith, H. J., Dinev, T., & Xu, H. (2011). Information privacy research: An interdisciplinary review. MIS Quarterly, 35(4), 989–1015.

Solove, D. J. (2008). Understanding privacy. Harvard University Press.

Stark, L., Stanhaus, A., & Anthony, D. L. (2020). "I don't want someone to watch me while I'm working": Gendered views of facial recognition technology in workplace surveillance. Journal of the Association for Information Science and Technology, 71(9), 1074–1088.

Sundar, S. S., & Marathe, S. S. (2010). Personalization versus customization: The importance of agency, privacy, and power usage. Human Communication Research, 36(3), 298–322.

Trepte, S., Reinecke, L., Ellison, N. B., Quiring, O., Yao, M. Z., & Ziegele, M. (2017). A cross-cultural perspective on the privacy calculus. Social Media + Society, 3(1), 1–13.

Vitak, J., & Zimmer, M. (2020). More than just privacy: Using contextual integrity to evaluate the long-term risks from COVID-19 surveillance technologies. Social Media + Society, 6(3), 1–4.

Warren, S. D., & Brandeis, L. D. (1890). The right to privacy. Harvard Law Review, 4(5), 193–220.

West, S. M. (2019). Data capitalism: Redefining the logics of surveillance and privacy. Business & Society, 58(1), 20–41.

Westin, A. (1967). Privacy and freedom. Atheneum Press.

Zhang, Y., & Leung, L. (2015). A review of social networking service (SNS) research in communication journals from 2006 to 2011. New Media & Society, 17(7), 1007–1024.

Zheng, S., Apthorpe, N., Chetty, M., & Feamster, N. (2018). User perceptions of smart home IoT privacy. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 1–20.

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.