The Trap of Customization: Capitalism Goes Benevolent

Document Type: Original article

Author

Department of Communication, Faculty of Social Sciences, University of Tehran, Tehran, Iran; Independent Researcher

10.22059/jcss.2024.95891

Abstract

Big-tech corporations such as Google, Meta, and Microsoft extensively utilize customization to collect and analyze user data, a practice integral to their business models. Google leverages user data to personalize services across its platforms, notably in its search engine and YouTube, to enhance user experience and bolster its targeted advertising strategies. Similarly, Meta uses algorithmic content curation on Facebook and Instagram, tailoring user feeds to individual preferences and behaviors, thereby generating detailed user profiles for marketing purposes. Microsoft's approach, particularly with Office 365 and LinkedIn, focuses on productivity enhancements while also gathering user data for feature refinement and targeted advertising. These practices, I argue, while improving user engagement, raise significant privacy concerns. The extensive data collection often occurs without full transparency or user consent, leading to debates about ethical implications, digital surveillance, and societal impacts. In response, there is a growing demand for stricter data governance and privacy regulations, as seen in initiatives like the GDPR and CCPA, aiming to balance the benefits of personalization with the rights and privacy of users.

Keywords


Good companies customize

From Microsoft, Apple, Amazon, Google (Alphabet), and Facebook (Meta Platforms) to Tencent, Alibaba, IBM, Intel, Samsung Electronics, and Cisco Systems, to Oracle, SAP, Adobe Systems, Salesforce, Broadcom, Qualcomm, Sony, Huawei, and Dell Technologies, all the big electronics and internet companies seem to be locked in a continuous contest to ‘customize’ every service for us. We, the consumers, have become the kings of consumerism at last. We can now be content that big manufacturers like Lada (and Ford before it) can no longer force us to buy whatever they decide we must.

The Lada, a renowned car brand from Russian manufacturer AvtoVAZ, traditionally featured limited options and amenities, particularly in its earlier models. This characteristic stemmed from a combination of factors rooted in the economic and manufacturing philosophies of the Soviet era. In the centrally planned economy of the Soviet Union, the overarching goal in automobile production, as with many industries, was to ensure utility and widespread accessibility. This approach was less about catering to a variety of consumer preferences and more about mass-producing vehicles that were affordable for the general population. The design and manufacturing philosophy of Lada cars, influenced by their collaboration with Fiat (notably the Fiat 124 model), emphasized simplicity and durability. These vehicles were engineered to be robust and easy to repair, often sacrificing luxury and advanced features to achieve these ends. Resource constraints also played a significant role. The Soviet era was marked by limited access to diverse materials and advanced automotive technologies. Efficient use of available resources was a priority, which often meant producing cars with fewer complexities and options. Additionally, the market dynamics within the Soviet Union differed significantly from those in capitalist economies. The lack of intense market competition reduced the need for manufacturers like AvtoVAZ to diversify their offerings with various features and options to attract customers.

Decades before that, Ford's Model T, introduced in 1908, was renowned for its lack of options, a decision deeply rooted in Henry Ford's commitment to efficiency, affordability, and simplicity in automobile production. One reason was that standardization was central to Ford's strategy. The uniform design of the Model T allowed for the effective use of assembly line production methods, which was revolutionary at the time. This standardization significantly reduced the complexity and duration of the manufacturing process, enabling quicker and more efficient production. The assembly line itself became a hallmark of industrial manufacturing, thanks largely to its implementation in producing the Model T. Cost reduction was another critical factor. By limiting the Model T to a single design, Ford was able to buy materials in bulk and streamline the manufacturing process, leading to substantial cost savings. These savings were then passed on to consumers, making the Model T one of the most affordable cars of its era. This affordability was a key factor in the Model T's widespread popularity and is a testament to Ford's vision of producing a vehicle that was accessible to the masses. The reliability and simplicity of the Model T were also significant advantages. The standardized design meant that parts were interchangeable, and the car's mechanics were straightforward. This simplicity made the Model T easier to repair and maintain, an important consideration at a time when professional automotive services were not as readily available as they are today. Finally, Ford's personal philosophy played a critical role. He famously stated that customers could have the Model T in any color "so long as it is black." This quote not only underscores his commitment to uniformity but also highlights a broader vision to produce a car that was suitable for a wide audience, without the need for customization.

Nevertheless, customization is not something big-tech corporations have ushered in. In the pre-industrial era, customization was predominantly artisan-based. Goods such as clothing, furniture, and tools were meticulously handcrafted to suit the individual tastes and requirements of customers. This form of personalization was a hallmark of luxury, primarily accessible to the affluent due to its labor-intensive nature and the skill required. Artisans, with their specialized knowledge and craftsmanship, were able to create unique, personalized products that catered to the specific desires of each patron.

The onset of the Industrial Revolution marked a significant shift in the landscape of customization. Mass production techniques, introduced during this period, revolutionized the way products were manufactured. The emphasis moved from individualized, artisanal production to efficiency and standardization in factories. This shift led to a dramatic reduction in the availability of customized products, as mass production focused on creating standardized items that catered to a broad market. The unique, tailored approach of the artisan was largely replaced by the uniformity and scale of factory production.

However, the post-World War II era saw a resurgence in the demand for customized products. Driven by a booming economy and the emergence of a consumer culture that prized individuality and personal expression, companies began to reintroduce customization into their offerings. This period saw a return to offering a variety of product options, albeit within the constraints and efficiencies of mass production. Consumers were presented with choices in colors, styles, and limited design variations, signaling the early stages of a new era of customization.

In the 1970s, the stability of the postwar era's mass production, distribution, and consumption began to falter, primarily due to the deterioration of the postwar economy. A key event that precipitated this shift was the 1973 oil crisis. During the crisis, oil-producing countries in the Middle East embargoed oil sales to the United States in response to its support of Israeli military forces in the Yom Kippur War. This event led to a significant spike in oil and gas prices in the U.S., pushing an already faltering American economy into a deep recession. The phenomenon of "stagflation," characterized by stagnant economic growth combined with inflation, emerged during this period, leading to interest rates more than doubling by 1974. These high interest rates posed challenges to the strategies of mass production as they made it riskier and more difficult to invest heavily in product development and production, with returns on these investments taking months or years (Havens & Lotz, 2017).

The unpredictability of interest rates, which remained erratic even as they dropped to near zero by the mid-2000s, made profits similarly unpredictable. This uncertainty led firms to be cautious about borrowing extensively for long-term periods. The response to these economic pressures was a gradual shift in corporate strategies, which over several years, percolated through the economy. By the 21st century, many of these changes had become commonplace, leading to a greater adoption of mass customization strategies (ibid). In the context of media industries, this economic shift had significant implications. The mid-20th century had been characterized by mass production, distribution, and consumption, especially in American culture, with media industries like television playing a central role. Large corporations invested heavily in producing goods and waited considerable time before seeing returns on these investments. This era saw the rise of mass entertainment, with standardized shows and movies aimed at common cultural tastes. However, by the 1960s, industries like magazines and radio had already begun adopting mass customization principles, targeting specific demographics and interests, partly due to competition from television (ibid).

The newspaper industry, however, largely remained organized around local and regional focuses rather than mass production. Although national newspapers like The New York Times did publish general-interest news, they also included significant local content and produced different editions for different parts of the country. The shift to mass customization in media industries was more pronounced in magazines, which began focusing on niche audiences and specific interests like fashion or gossip (ibid), but the real boom in this industry had to wait at least two decades, until the spread of fake news became a global problem (Sabzali et al., 2022; Sabbar & Hyun, 2016).

In any case, entering the 21st century, the concept of customization underwent further evolution with the advent of hyper-personalization. E-commerce platforms and advancements in big data analytics enabled an unprecedented level of personalized shopping experiences. Online retailers began to harness consumer data to provide tailored recommendations and targeted advertising, based on individual browsing and purchasing histories. The integration of social media into the mix provided deeper insights into consumer behaviors and preferences, further enhancing the capability for customization.

Technological innovations such as 3D printing and artificial intelligence have further expanded the scope of customization. 3D printing technology allowed for the creation of bespoke items in various sectors, from fashion to healthcare and from politics to arts (Aris et al., 2023), facilitating affordable and efficient production of customized products. Artificial intelligence and machine learning algorithms have enabled the analysis of vast amounts of data to predict consumer preferences, ushering in a new era of personalized products and services.

Despite the advancements and benefits, this era of hyper-personalization has also raised significant concerns, particularly regarding privacy and data security. The extensive collection and analysis of personal data for customization purposes have sparked debates about ethical use and data protection. Additionally, a growing awareness of environmental impacts has led to a trend towards sustainable customization, focusing on environmentally friendly materials and production methods. Yet big-tech corporations such as Google, Meta, and Microsoft miss no chance to take advantage of this customization fever.

How they turned it into a trophy

 

Since the internet has changed many dimensions of human and social life (see for example Shahghasemi, 2020a, 2020b; Sabbar & Matheson, 2019), all players in this realm have worked hard to come out winners in the new game for new sources of data. Big-tech corporations’ pursuit of outsize gains hinges on the prediction imperative, in which personalization is a method of individualizing supply operations to secure a continuous flow of behavioral surplus, a drive that feeds on our unrelenting hunger for recognition, appreciation, and support. Big-tech corporations charted this course by emphasizing personalization and customization as key to computer-mediated transactions. Google Now, Google’s first digital assistant, embodied this approach, needing extensive knowledge about the user to function effectively. Hal Varian, Google’s chief economist, equates sharing information with Google to confiding in doctors or lawyers, suggesting that the benefits of digital assistants outweigh privacy concerns. However, this comparison is flawed. Unlike relationships with doctors, accountants, and attorneys, which are governed by professional ethics and mutual dependencies, big-tech corporations operate without such constraints. Their pronouncements reveal how technology rhetoric often obscures the exploitation of social and economic inequality inherent in “surveillance capitalism”. (Surveillance capitalism, a term coined by Shoshana Zuboff, refers to the monetization of personal data collected through digital surveillance. The phenomenon has grown exponentially with the advent of digital technology, profoundly impacting individual privacy, autonomy, and even the fabric of democracy.) They suggest that what the rich have today, like personal assistants, will eventually be desired by all social classes, reinforcing the cycle in which luxuries become necessities (Zuboff, 2018).

Big-tech corporations cast personalization as one of the twenty-first century's new necessities for people struggling with stagnant wages, dual-career obligations, and hollowed-out public institutions. They claim digital assistants will become so essential that people will accept the substantial privacy forfeitures they entail. Big-tech corporations, seeing this as inevitable, predict that continuous monitoring will become the norm, although the wealthy may escape these impositions. Historically, lower-cost goods and services have led to economic expansions and improved standards of living. However, big-tech corporations' approach uses people's insecurities to further surveillance capitalism, not to reciprocate societal benefits. Google Now, a precursor to more advanced systems, was an early step in this direction. It combined Google's technologies to predict users' needs in real time, going beyond selling ads to providing information based on various personal data points (Bolton et al., 2021).

Facebook's "M", another example, aimed to capture user intent for transactions, learning from human behavior to eventually facilitate commerce. By 2017, Facebook had shifted M's focus more towards commerce, embedding commercial opportunities within Messenger interactions. As an AI-powered digital assistant, M raised significant privacy concerns by potentially infringing on user rights. By analyzing vast amounts of personal data, including private conversations and behavioral patterns, M operated in a realm where the boundaries of user consent are often blurred. This extensive data collection, aimed at enhancing user experience and targeted advertising, risked unauthorized surveillance, leading to the commodification of personal information without explicit, informed consent from users (Chowdhury et al., 2019).

In our digital era, personal digital assistants are market avatars, cleverly disguised as helpful tools while aggressively pursuing commercialization of every aspect of daily life. They may adapt to individual preferences, but they are fundamentally shaped by hidden market forces, turning everyday activities into opportunities for monetization. Tech companies, including Google, are focusing on making "conversation" the primary medium for human interaction with their technologies. The move towards voice recognition and voice-activated devices is driven by the desire for cost-effective, scalable service interactions. The dominant voice in this space will control a significant amount of behavioral surplus, with vast competitive advantages. The concept of "conversation" with digital devices blurs the line between technology and human interaction, encouraging people to view these devices as confidantes or assistants. This increases the amount of personal experience rendered to these devices, enriching their data collection capabilities. Conversational interfaces are attractive for their ease of use, triggering actions with simple voice commands, which also promotes more spontaneous and uninhibited consumer behavior (Zuboff, 2018).

Big-tech companies are transforming personal digital assistants into intermediaries between individuals' lives and the market. These devices not only respond to what is said but also how it is said, analyzing content and speech patterns. Google's Assistant, integrated across various devices and services, exemplifies this approach, offering personalized assistance based on extensive data analysis. The ultimate goal of these technologies is to render as much of an individual's life as possible, turning daily activities and interactions into opportunities for market transactions. This includes capturing not just the content of speech but also its structural aspects like vocabulary and intonation, which are valuable for refining voice recognition technologies. Big-tech companies globally are collecting vast amounts of spoken words to train their machines. This includes recording conversations in diverse settings and languages, aiming to understand and respond to commands and queries more effectively. Even though these recordings are supposed to be anonymous, there are concerns about privacy and the potential for personal identification (Ebbers et al., 2021).

Substantial investment is being directed towards developing technologies that use voice as a means of interaction. Samsung's Smart TV is an example, highlighting the growing market for internet-enabled appliances. These TVs were found to be recording everything said nearby and sending this data to Nuance Communications for transcription, raising privacy concerns. Samsung’s policy indicated that voice commands and surrounding conversations, which might contain sensitive information, are captured and transmitted to a third party, with the company disclaiming responsibility for third-party policies (Zuboff, 2018).

In response to such practices, California passed a law to regulate the collection of voice data by connected TVs. Despite this, companies like Samsung continued to develop their smart TV capabilities, integrating them into a broader smart-home ecosystem. Vizio, another major smart TV manufacturer, faced legal action for collecting detailed viewing data from its TVs and selling this information, including personal details like IP addresses, to advertisers. These developments illustrate a broader trend where everyday devices, from TVs to toys, are being used to collect behavioral data. Interactive dolls and toy robots are now designed to record children's conversations, process this data, and use it for various purposes, often without adequate data protection. Companies like Genesis Toys and Mattel are at the forefront of this, creating toys that not only interact with children, but also collect data from these interactions (Abdugani, 2020).

In a future dominated by technologies like "One Voice", children grow up in a world where boundaries between self and market are non-existent. The One Voice, represented by technologies like voice-activated devices, permeates every aspect of life, reshaping the concept of intimacy and solitude. This integration teaches children that their desires and commands are seamlessly catered to by technology, blurring the line between personal agency and market-driven suggestions (Zuboff, 2018).

The Cayla doll, an interactive toy, raised significant privacy concerns due to its capability to record and transmit children's conversations. Equipped with internet connectivity and voice recognition technology, Cayla could collect personal data without explicit consent or awareness of the users or their guardians. This surreptitious data collection and potential for unauthorized access to sensitive information posed a serious violation of privacy rights, particularly alarming given that its primary users were children. Germany's ban of the doll, which regulators deemed a surveillance device, highlights the growing concern over such technologies. Despite this, the trend towards connected environments continues, with companies like Mattel pushing towards connected rooms and homes, further normalizing the surveillance culture (ibid).

Big-tech companies like Google, Amazon, and Samsung are engaged in a competitive race to gather and leverage user data, a contest driven by the immense value data holds in today's economy. Google, with its vast array of services including search, email, and maps, amasses a wealth of information on user preferences, search histories, and geographical movements. This data is crucial for their advertising business model, as it enables highly targeted and effective advertising solutions. Amazon, primarily an e-commerce platform, collects data on purchasing habits, browsing history, and consumer preferences. This information is not only vital for personalizing the shopping experience, but also feeds into their growing advertising and cloud computing sectors. Amazon's use of data extends to its AI-powered voice assistant, Alexa, which gathers voice data to improve user experience and offer tailored services. Samsung, known for its electronics and smart appliances, integrates data collection across its product ecosystem. From smartphones to smart refrigerators, Samsung devices collect user data to enhance product functionality and offer customized user experiences. This data collection is also pivotal for Samsung's Bixby voice assistant, which competes in the same space as Google's Assistant and Amazon's Alexa (Aleksanjan, 2019). Amazon-owned Ring camera feeds were accessible to employees, revealing private footage from homes globally. Amazon also faced scrutiny when its employees listened to Alexa voice assistant recordings, with some reporting overhearing a sexual assault (Stanley, 2023).

Google's voice assistant recordings were audited by contractors, leading to privacy breaches where individuals were identifiable, and sensitive information like medical discussions was exposed. Microsoft contractors listened to personal conversations via the Skype translation app, encountering private interactions including phone sex. Apple's Siri voice assistant recordings were also subject to human review, with contractors hearing confidential medical information, drug deals, and intimate moments. Facebook engaged contractors to transcribe audio from its services, further illustrating the widespread practice of monitoring customer data (ibid).

In the race to establish the dominant voice-activated technology, known as the "One Voice", various companies are competing fiercely. Google with its Google Home, Samsung with its acquisition of Viv (created by Siri's original developers), and others are all vying for this position. These technologies aim to simplify life by responding to voice commands, but they also represent a method of controlling and commodifying human behavior by converting daily life into a source of behavioral data (Zuboff, 2018).

The term "personalization" is used to describe how these technologies adapt to individual users, but in reality, it often serves as a cover for more invasive forms of data collection and analysis. This process has evolved from simply crawling the web for information to a more invasive crawling of real-life behaviors, personal experiences, and even inner selves. Research conducted in 2010 on Facebook profiles by a team of German and US scholars revealed that these profiles reflect users' actual personalities rather than idealized self-portraits. This finding spurred further research, particularly at the University of Maryland, where researchers developed methods to predict a user's personality from their Facebook profile using sophisticated analytics and machine intelligence. They discovered that behavioral metrics, like the amount of information shared, were more predictive than the actual content shared (ibid). This research evolved into a tool for manipulation and behavioral modification. The team envisioned using these personality insights for tailoring social media, e-commerce, and advertising to individual users, enhancing the effectiveness of marketing and trust in product reviews (Lulandala, 2020).

Further studies, including those by Michal Kosinski and David Stillwell from Cambridge University, built upon this foundation. They utilized the myPersonality database, a massive collection of psychometric test results and Facebook profiles, to refine these predictive models. This database later inspired Cambridge Analytica's approach to behavioral micro-targeting for political purposes. Kosinski and Stillwell's research demonstrated that a wide range of personal attributes could be accurately estimated from public data, such as Facebook "likes". This raised concerns about privacy and the unintended sharing of personal information. They acknowledged the potential benefits of these predictive capabilities for improving products and services, including psychologically tailored marketing, but also warned of the risks. These include the potential misuse of data by companies, governments, or even Facebook itself to uncover sensitive information without individual consent or awareness, posing threats to personal well-being and freedom (Alegre, 2021).
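To make the mechanics concrete, the following is a minimal sketch in Python (with scikit-learn) of the general pipeline this line of research reports: a sparse user-by-like matrix is reduced with singular value decomposition, and the components feed a linear model that predicts a personal attribute. All data and dimensions below are synthetic stand-ins of my own; the published studies fitted comparable models to millions of real profiles.

    # Minimal, illustrative likes-to-traits pipeline (synthetic data only).
    import numpy as np
    from sklearn.decomposition import TruncatedSVD
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    n_users, n_likes = 2000, 500

    # Binary user-by-like matrix: entry (u, i) is 1 if user u "liked" page i.
    likes = (rng.random((n_users, n_likes)) < 0.05).astype(float)

    # A synthetic binary attribute loosely driven by a subset of likes,
    # standing in for a trait such as extraversion or a demographic category.
    signal = likes[:, :25].sum(axis=1)
    trait = (signal + rng.normal(0, 1, n_users) > signal.mean()).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(
        likes, trait, test_size=0.25, random_state=0)

    # Dimensionality reduction plus a linear classifier, mirroring the
    # SVD-and-regression design reported in this literature.
    model = make_pipeline(TruncatedSVD(n_components=50, random_state=0),
                          LogisticRegression(max_iter=1000))
    model.fit(X_tr, y_tr)
    print("held-out accuracy:", round(model.score(X_te, y_te), 2))

The point of the sketch is how little per-user input such a model needs: a bare matrix of clicks, with no content at all, which is precisely why behavioral surplus is so valuable.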

The evolution of Michal Kosinski's research, which moved from the University of Cambridge to Stanford University, continued to attract significant funding and interest from major corporations and organizations, including Microsoft, Boeing, Google, the National Science Foundation, and DARPA. This research refined the process of analyzing and predicting personality traits and other personal attributes from social media behavior and metadata, advancing the concept of behavioral surplus. A significant finding from this research was that computer-based predictions could match or exceed human judges in assessing personality traits from Facebook likes and predicting life outcomes. The researchers developed efficient tools for personality assessment that analyze not just the substance of what is shared on social media but the form and manner of sharing. For instance, how one writes, the choice of words, or even the use of filters in pictures can reveal personality traits. These insights went beyond traditional methods of personality evaluation, offering a more covert and comprehensive way to analyze individuals' behaviors (Zuboff, 2018). This is one example of how university discourse might play an important role in helping big-tech corporations plunder our data. In their otherwise controversial study, Sarfi et al. (2021: 181) argue:

 

Academia plays a significant role in fostering a sense of gratitude among users for Google's services, despite relinquishing their rights and privacies. Google's substantial financial support for academic conferences and grants to researchers fuels the narrative that its data usage is both legitimate and altruistic. However, [...] Google has exerted pressure on academics to produce favorable articles and penalized those who refused to comply, effectively influencing academic discourse to bolster its image as a benevolent corporation.

 

 

Despite recognizing the potential intrusiveness and ethical concerns of their work, the researchers noted the immense possibilities for commercial exploitation of these data. IBM's Watson Personality Service exemplified this commercial application. It offered detailed personality assessments based on social media behavior, promising various applications from marketing to personalized customer service. IBM's research showed that certain personality traits could predict consumer behaviors, such as response rates to marketing efforts (Solove, 2021).

This shift in data analysis represents a significant ethical dilemma. Qualities we value and teach, like trust and friendliness, are being exploited for commercial gain. In contrast, traits like paranoia and anxiety might offer some defense against such invasive analysis. The researchers questioned the societal implications of this new reality, where behavioral surveillance and analysis become so deeply ingrained that they fundamentally alter the nature of human interaction and privacy. There is a rapid institutionalization and normalization of personality analysis using behavioral surplus data, primarily from social media platforms like Facebook. This practice, initially seen in academic research, quickly found application in commercial enterprises, most notably by companies like Cambridge Analytica.

Cambridge Analytica, backed by Robert Mercer and involved in both the Brexit and Trump campaigns, boasted about its ability to perform micro-behavioral targeting based on personality. The firm claimed to have thousands of data points on every adult in the United States, which it used for political campaigning and planned to apply commercially in areas like car sales, using personality analysis to tailor sales tactics. A leaked Facebook document highlighted the company’s use of its extensive data for predictive purposes, showcasing a "loyalty prediction" service for advertisers. This service exemplifies how companies like Facebook use data to predict, intervene, and modify future consumer behavior. Facebook's "prediction engine", FBLearner Flow, is described as processing trillions of data points daily to create personalized experiences (Ward, 2022).

The controversy around Cambridge Analytica, particularly its methods of obtaining and using Facebook data for political purposes, brought attention to the broader practices of surveillance capitalism. Cambridge Analytica's strategy of behavioral micro-targeting was based on personality predictions using data obtained through questionable means, including a personality quiz app developed by Alexander Kogan. Kogan's app not only gathered data from its users but also from their Facebook friends without consent, resulting in profiles of millions of users (Hu, 2020).

These developments highlight the ethical and privacy concerns surrounding the use of social media data for personality analysis and behavioral prediction. These practices have far-reaching implications for individual autonomy and privacy, as they turn personal attributes and behaviors into commodities for manipulation and profit. The use of such data, whether for political campaigning or commercial purposes, raises critical questions about consent, data protection, and the responsibility of tech companies in handling user data.

This manipulation of personal data for behavioral influence is characterized as a form of information warfare, undermining the agency of individuals and posing a threat to democratic processes. It highlights the asymmetries of knowledge and power inherent in these practices, where personal data is used without consent or awareness for manipulation.

Affective computing is a field at the intersection of computer science, psychology, and cognitive science, dedicated to the design and development of systems capable of recognizing, interpreting, and responding to human emotions (Picard, 1997). Coined by Rosalind Picard, a professor at the Massachusetts Institute of Technology, the term encapsulates a range of computational technologies aimed at enhancing the emotional intelligence of machines. Central to affective computing is the ability of systems to detect and process emotional cues through advanced algorithms and sensor technologies. These systems analyze various human outputs, such as facial expressions, vocal nuances, body language, and physiological signals, to infer emotional states (Calvo et al., 2015). The data derived from these outputs are processed using machine learning techniques, enabling the nuanced interpretation of complex emotional patterns. We can now clearly see that our most passionate affections have become a potential ground for others' interests (Sarfi et al., 2023).
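As a minimal illustration of that pipeline, the Python sketch below maps a handful of numeric features of the kind such systems extract (facial action units, vocal pitch, skin conductance) to a discrete emotion label with a supervised classifier. The feature names, emotion labels, and data are illustrative assumptions of mine, not drawn from any published affective-computing system.

    # Toy affective-computing loop: numeric cues in, emotion label out.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    EMOTIONS = ["neutral", "happy", "angry"]

    # 100 synthetic samples per emotion, 6 features each (stand-ins for,
    # e.g., brow raise, lip-corner pull, mean pitch, pitch variance, speech
    # rate, skin conductance); one cluster per emotion keeps the toy
    # problem learnable.
    centers = rng.normal(0, 1, (len(EMOTIONS), 6))
    X = np.vstack([c + rng.normal(0, 0.4, (100, 6)) for c in centers])
    y = np.repeat(np.arange(len(EMOTIONS)), 100)

    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, random_state=1)

    # An RBF-kernel support vector machine stands in for the "machine
    # learning techniques" named above; real systems range from SVMs to
    # deep networks over raw video and audio.
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    print("held-out accuracy:", round(clf.score(X_te, y_te), 2))
    print("inferred emotion:", EMOTIONS[int(clf.predict(X_te[:1])[0])])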

The integration of affective computing in human-computer interaction (HCI) is a significant area of development. This integration aims to create more intuitive and empathetic interactions between users and digital systems, enhancing user experience and fostering more natural interactions (Scherer et al., 2010). Furthermore, affective computing extends to the simulation of emotions in digital agents, making them appear more life-like and relatable. This aspect is particularly relevant in customer service applications and therapeutic tools, where empathetic responses can be crucial (McDuff et al., 2018). The application of affective computing spans various sectors, including mental health, where it assists in therapy and emotional well-being assessments; education, where it enhances learning experiences; customer service, where it improves user engagement; and automotive safety, where it contributes to detecting driver fatigue or stress. One of the most staggering cases in this respect can be seen in the new online education industry, where the denaturalization of education has contributed to a decline in education at schools (Shahghasemi et al., 2023).

Despite Rosalind Picard’s initial vision of affective computing being used for beneficial or benign purposes, the commercial demand for these technologies has led to their application in ways that raise significant privacy concerns. Picard herself expressed apprehensions about the potential misuse of affective data by advertisers, employers, or even governments for manipulative purposes (Zuboff, 2018). The Facebook patent for emotion detection exemplifies the growing interest in leveraging emotional data for customizing user experiences, particularly in advertising. This development is part of a larger trend where the affective computing market is driven by the demand for mapping human emotions, particularly in the marketing and advertising sectors.

The transformation of Affectiva from a company with a focus on "do-good" applications to one that primarily serves the advertising and marketing industry illustrates the shift towards exploiting emotional data for profit. Affectiva's journey from helping autistic children to focusing on advertisement effectiveness marks a significant departure from its original mission (Zuboff, 2018). The commercialization of emotion analytics has led to the development of concepts like emotion as a service, where companies offer to analyze emotional data for clients. This raises the possibility of not just observing but also modifying emotions for commercial gain, like incentivizing positive moods in customers.

As surveillance capitalism delves deeper into personal lives, extracting behavioral surplus from the most private aspects of existence, it raises critical questions about the sanctity of the self and the right to personal autonomy. Surveillance capitalists, driven by the prediction imperative, do not just seek observable behaviors but are increasingly interested in the inner workings of the human mind and emotions. This drive for comprehensive understanding and prediction of human behavior poses a threat to the fundamental right to privacy and self-expression.

Joseph Weizenbaum, a computer scientist who warned about the unintended consequences of technology, was vocal about these dangers. Like Weizenbaum's, Picard's journey reflects the dilemma faced by many innovators: their creations, intended for good, can be co-opted into systems that challenge ethical boundaries and personal freedoms (Zuboff, 2018). In April 2023 it was revealed that Tesla employees had circulated videos captured by the company's vehicle cameras, including footage from car owners' private garages revealing personal and intimate activities. This incident highlights a significant privacy abuse carried out, in effect, in the name of customization, reflecting a recurring pattern in which companies' recording devices compromise user privacy (Stanley, 2023). While some companies have reviewed their practices or offered opt-out options in response to these privacy concerns, the extent of ongoing surveillance is unclear. Additionally, security breaches have revealed unauthorized access to surveillance cameras. For instance, hackers discovered a backdoor in Verkada's system, granting access to a vast network of cameras, and a similar vulnerability was found in Hikvision cameras, leading to their ban by the U.S. government (ibid).

These incidents underscore the challenges and risks associated with AI-driven products, where companies often access customer data to train algorithms. In Tesla's case, video collection primarily serves AI training, but employee misuse demonstrates the potential for abuse. This pattern of surveillance, often driven by curiosity or power, poses significant privacy concerns for consumers.

Conclusion

 

Customization in the digital world offers numerous benefits, significantly enhancing user experience and engagement across various platforms and services (see for example Nosrati et al., 2023). Personalization algorithms tailor content and services to individual preferences, creating a more relevant and satisfying user experience. For instance, streaming services like Netflix and Spotify use customization to recommend movies, shows, and music, improving content discovery and keeping users engaged for longer periods. In e-commerce, platforms like Amazon offer personalized shopping experiences, suggesting products based on past purchases and browsing habits, thereby streamlining the shopping process and increasing customer satisfaction.
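To give a concrete sense of the mechanism behind such recommendations, here is a minimal item-based collaborative-filtering sketch in Python. The four titles and the tiny rating matrix are invented for illustration; production recommenders at services like Netflix combine far richer signals and models.

    # Toy item-based collaborative filtering over a user-by-item matrix.
    import numpy as np

    titles = ["Drama A", "Comedy B", "Thriller C", "Documentary D"]
    # Rows are users, columns are titles; 0 means "not yet watched/rated".
    R = np.array([[5, 0, 4, 1],
                  [4, 1, 5, 0],
                  [1, 5, 0, 4],
                  [0, 4, 1, 5]], dtype=float)

    # Cosine similarity between item (column) rating vectors; the zero
    # "unrated" cells simply contribute nothing to the dot products.
    norms = np.linalg.norm(R, axis=0)
    sim = (R.T @ R) / np.outer(norms, norms)

    def recommend(user: int) -> str:
        """Score unseen titles by similarity-weighted ratings of seen ones."""
        seen = R[user] > 0
        scores = sim[:, seen] @ R[user, seen]
        scores[seen] = -np.inf  # never re-recommend an already seen title
        return titles[int(np.argmax(scores))]

    print(recommend(0))  # -> "Comedy B", the one title user 0 has not rated

The design choice worth noting is that nothing in this loop asks the user anything: preferences are inferred entirely from logged behavior, which is exactly the trade-off between convenience and data extraction discussed throughout this article.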

Customization also aids in information management, especially in digital applications like news aggregators and social media, where it filters and prioritizes content according to user interests. This not only saves time but also enhances the relevance of the information presented. In educational technology, customized learning platforms adapt to individual learning styles and progress, making education more effective and accessible. Furthermore, personalization in digital health apps provides tailored health and wellness advice, contributing to improved health outcomes (Nosrati et al., 2020). Customization in the digital world, therefore, brings convenience, efficiency, and a more personalized interaction with technology, aligning digital experiences more closely with individual needs and preferences, but this is only one part of the story.

Big-tech corporations such as Google, Meta (formerly Facebook), and Microsoft have strategically utilized customization as a conduit for extensive data collection and utilization. This approach, deeply integrated into their business models, serves dual purposes: enhancing user experience and facilitating comprehensive data gathering. Google, a pioneer in digital advertising, leverages user data to customize experiences across its various platforms, including Search, Gmail, and Google Maps. This data-driven personalization is central to Google’s business model, as it enables the company to offer targeted advertising, thereby maximizing its revenue potential. For instance, Google Search tailors results based on users' search histories, while YouTube uses viewing patterns to recommend videos, thereby increasing user engagement and gathering more detailed data profiles.

Similarly, Meta's social media platforms, particularly Facebook and Instagram, exemplify the use of algorithms for content curation. These platforms analyze user interactions, relationships, and preferences to personalize content feeds and advertisements. This level of customization not only ensures user retention but also allows Meta to construct detailed user profiles, which are invaluable for targeted marketing purposes. Microsoft's approach to customization, particularly within its Office 365 and LinkedIn platforms, focuses on enhancing productivity. Through data analysis, these platforms offer personalized experiences in document creation, team collaboration, and professional networking. While this customization improves efficiency, it also involves the collection of user data, utilized to refine product features and, in the case of LinkedIn, for targeted advertising.

The extensive data collection strategies employed by these tech giants, under the guise of customization, raise significant privacy concerns (Aeini et al., 2023). There is a growing discourse around the ethical implications of such data practices, particularly regarding transparency and user consent. The potential for manipulation and the influence these companies wield over user behavior and public opinion have also sparked debates about digital surveillance and its societal impacts. In response to these privacy concerns, there is a notable shift towards stricter data governance and privacy regulations. The European Union’s GDPR and the California CCPA represent legislative efforts to grant users more control over their personal data and demand greater transparency from tech companies. We think these are good steps, but much more action is needed if we are to be, and remain, in control of our own creations.

Ethical considerations

The author has completely considered ethical issues, including informed consent, plagiarism, data fabrication, misconduct, and/or falsification, double publication and/or redundancy, submission, etc.

Conflicts of interests

The author declares that there is no conflict of interests.

Data availability

The dataset generated and analyzed during the current study is available from the corresponding author on reasonable request.

References

Abdugani, A. (2020). Privacy Analysis of Smart TV Communication: A Case Study of Privacy Threats in Smart TVs. Master's thesis.
Aeini, B.; Zohouri, M. & Mousavand, M. (2023). “Iranians and privacy preservation on social media: A systematic review”. Positif Journal. 23(10): 88-100.
Alegre, S. (2021). “Protecting freedom of thought in the digital age”. Policy Brief. 165: 1-10.
Aleksanjan, A. (2019). Data Protection in the Age of Virtual Personal Assistants. Doctoral dissertation, Ghent University, Songdo, South Korea.
Aris, S.; Aeini, B. & Nosrati, S. (2023). “A digital aesthetics? Artificial intelligence and the future of the art”. Journal of Cyberspace Studies. 7(2): 219-236. doi: 10.22059/jcss.2023.366256.1097.
Bolton, T.; Dargahi, T.; Belguith, S.; Al-Rakhami, M.S. & Sodhro, A.H. (2021). “On the security and privacy challenges of virtual assistants”. Sensors. 21(7): 2312.
Calvo, R.A.; D'Mello, S.K.; Gratch, J. & Kappas, A. (Eds.) (2015). The Oxford Handbook of Affective Computing. Oxford University Press.
Chowdhury, N.; Chopra, S. & Arora, M. (2019). “Virtual personal assistant security: A retrospect”. Think India Journal. 22(3): 8222-8233.
Ebbers, F.; Zibuschka, J.; Zimmermann, C. & Hinz, O. (2021). “User preferences for privacy features in digital assistants”. Electronic Markets. 31: 411-426.
Havens, T. & Lotz, A.D. (2017). Understanding Media Industries. 2nd ed. Oxford University Press.
Hu, M. (2020). “Cambridge Analytica’s black box”. Big Data & Society. 7(2). doi: https://doi.org/10.1177/2053951720938091.
Lulandala, E.E. (2020). “Facebook data breach: A systematic review of its consequences on consumers’ behaviour towards advertising”. In Kapur, P.K.; Singh, O.; Kumar Khatri, S. & Kumar Verma, A. (Eds.). Strategic System Assurance and Business Analytics. 45-68.
McDuff, D.; Czerwinski, M. & Roseway, A. (2018). “Affective computing in HCI”. The 2018 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3173574.3174228.
Nosrati, S.; Sabzali, M.; Arsalani, A.; Darvishi, M. & Aris, S. (2023). “Partner choices in the age of social media: are there significant relationships between following influencers on Instagram and partner choice criteria?”. Revista De Gestão E Secretariado (Management and Administrative Professional Review). 14(10): 19191–19210. https://doi.org/10.7769/gesec.v14i10.3022.
Nosrati, S.; Sabzali, M.; Heidari, A.; Sarfi, T. & Sabbar, S. (2020). “Chatbots, counselling, and discontents of the digital life”. Journal of Cyberspace Studies. 4(2): 153-172. doi: 10.22059/JCSS.2020.93910.
Picard, R.W. (1997). Affective Computing. MIT Press.
Sabbar, S. & Hyun, D. (2016). “What do we trust? A study on credibility of new and old media and relations with medium, content and audience characteristics”. New Media Studies. 1(4): 205-245. https://doi.org/10.22054/cs.2016.5733.
Sabbar, S. & Matheson, D. (2019). “Mass media vs. the mass of media: a study on the human nodes in a social network and their chosen messages”. Journal of Cyberspace Studies. 3(1): 23-42. doi: 10.22059/jcss.2019.271467.1031.
Sabzali, M.; Sarfi, M.; Zohouri, M.; Sarfi, T. & Darvishi, M. (2022). “Fake news and freedom of expression: An Iranian perspective”. Journal of Cyberspace Studies. 6(2): 205-218. doi: 10.22059/JCSS.2023.356295.1087.
Sarfi, M.; Darvishi, M.; Zohouri, M.; Nosrati, S. & Zamani, M. (2021). “Google’s University? An exploration of academic influence on the Tech Giant's propaganda”. Journal of Cyberspace Studies. 5(2): 181-202. doi: https://doi.org/10.22059/jcss.2021.93901.
Sarfi, M.; Darvishi, M. & Zohouri, M. (2023). “Why people may view online crimes as less criminal: Exploring the perception of cybercrime”. International E-Journal of Criminal Sciences. 18(3): 1-17.
Scherer, K.R.; Bänziger, T. & Roesch, E.B. (Eds.) (2010). A Blueprint for Affective Computing: A Sourcebook and Manual. Oxford University Press.
Shahghasemi, E. (2020a). “Pornography of networked feminism: The case of Iranian ‘Feminist’ Instagramers”. The 2nd International Conference on Future of Social Sciences and Humanities. Prague. https://www.doi.org/10.33422/2nd.fshconf.2020.09.170.
Shahghasemi, E. (2020b). “Pornography of poverty: Celebrities’ sexual appeal at service to the poor?”. The 2nd International Conference on Future of Social Sciences and Humanities. Prague. https://www.doi.org/10.33422/2nd.fshconf.2020.09.172.
Shahghasemi, E.; Sabbar, S.; Zohouri, M. & Sabzali, M. (2023). “New communication technologies and the demise of ‘Natural’ education”. Digitalization and Society Symposium. Istanbul, Turkey.
Solove, D.J. (2021). “The myth of the privacy paradox”. George Washington Law Review. 89: 1.
Stanley, J. (2023, April 7). “Tesla camera scandal is the latest lesson in dangers of letting companies record you”. American Civil Liberties Union. Retrieved December 22, 2023 from https://www.aclu.org/news/privacy-technology/tesla-camera-scandal-is-the-latest-lesson-in-dangers-of-letting-companies-record-you.
Ward, A. (2022). “The oldest trick in the Facebook: Would the General Data Protection Regulation have stopped the Cambridge Analytica scandal?”. Trinity College Law Review. 25: 221.
Zuboff, S. (2018). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. First trade paperback ed. PublicAffairs.