Publications

Explore articles, jobs, talks, news, privacy resources, and more

Learn about dark patterns, fair patterns and much more

Want to dig further into dark patterns? Here are curated resources, including hundreds of publications analyzed in our R&D Lab, as well as conferences, webinars, and job opportunities to fight dark patterns.

Jeulin, Betty (2021)

In light of the regulation of manipulative interfaces in the United States, questions have been raised about the advisability of national or even European regulation of the exploitation of our cognitive biases by designers of digital interfaces. The article then looks at the extent to which existing legislation regulates abusive practices, i.e. dark patterns that exploit cognitive biases. Finally, the author proposes that consideration could be given to a purpose-limitation principle for attention capture, under which attention would be collected for specified, explicit and legitimate purposes and not further processed in a manner incompatible with those initial purposes.

Li, Danyang (2022)

This article analyzes the definition of dark patterns introduced by the California Privacy Rights Act (CPRA), the first legislation explicitly regulating dark patterns in the United States. The article discusses the factors that make defining and regulating privacy-focused dark patterns challenging, reviews current regulatory approaches, considers the challenges of measuring and evaluating dark patterns, and provides recommendations for policymakers. It argues that California’s model offers the state an opportunity to take a leadership role in regulating dark patterns generally and privacy dark patterns specifically, and that the CPRA’s definition of dark patterns, which relies on outcomes and avoids targeting designer intent, presents a potential model for others to follow.

Luguri, Jamie & Strahilevitz, Lior Jacob (2021)

This article discusses the results of the authors’ two large-scale experiments in which representative samples of American consumers were exposed to dark patterns. The research showed that certain groups, particularly less educated consumers, are more susceptible to dark patterns, and identified the dark patterns most likely to nudge consumers into decisions they are likely to regret or misunderstand. Hidden information, trick questions, and obstruction strategies proved particularly effective at manipulating users.

Roffarello, Alberto Monge & Lukoff, Kai, et al. (2023)

Many tech companies exploit psychological vulnerabilities to design digital interfaces that maximize the frequency and duration of user visits. Consequently, users often report feeling dissatisfied with time spent on such services. Prior work has developed typologies of damaging design patterns (or dark patterns) that contribute to financial and privacy harms, which has helped designers to resist these patterns and policymakers to regulate them. However, there is a missing collection of similar problematic patterns that lead to attentional harms. To close this gap, the authors conducted a systematic literature review for what they call ‘attention capture damaging patterns’ (ACDPs). They analyzed 43 papers to identify their characteristics, the psychological vulnerabilities they exploit, and their impact on digital wellbeing. They propose a definition of ACDPs and identify eleven common types, from Time Fog to Infinite Scroll. The typology offers technologists and policymakers a common reference to advocate, design, and regulate against attentional harms.

Maier, Maximilian & Harr, Rikard (2020)

“How does the end user perceive, experience, and respond to dark patterns?” This is the research question driving this inquiry. The paper contributes to an increased awareness of the phenomenon of dark patterns by exploring how users perceive and experience them. To do so, the authors chose a qualitative approach based on focus groups and interviews. Their analysis shows that participants were moderately aware of these deceptive techniques, several of which were perceived as sneaky and dishonest. Participants expressed a resigned attitude toward such techniques and primarily blamed businesses for their occurrence. They also pointed to their dependency on services employing these practices, which makes dark patterns difficult to avoid entirely.

Mansur, SM Hasan, et al. (2023)

In this paper, the authors examine the extent to which common UI dark patterns can be automatically recognized in modern software applications. They introduce AIDUI, a novel automated approach that uses computer vision and natural language processing techniques to recognize a set of visual and textual cues in application screenshots that signify the presence of ten unique UI dark patterns, allowing for their detection, classification, and localization. To evaluate this approach, they constructed CONTEXTDP, the current largest dataset of fully-localized UI dark patterns that spans 175 mobile and 83 web UI screenshots containing 301 dark pattern instances. Overall, this work demonstrates the plausibility of developing tools to aid developers in recognizing and appropriately rectifying deceptive UI patterns.
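The paragraph above describes AIDUI’s approach only at a high level. As a rough, minimal sketch of what the textual-cue side of such a pipeline might look like, the snippet below flags two well-known text-based patterns (confirmshaming and false urgency) in OCR-extracted screenshot text using simple keyword rules; the lexicons, pattern names, and the detect_text_cues function are illustrative assumptions, not AIDUI’s actual models, taxonomy, or code.

```python
import re

# Illustrative keyword lexicons (assumptions for this sketch, not AIDUI's trained models).
TEXT_CUE_LEXICONS = {
    "confirmshaming": [r"no thanks,? i (don'?t|do not) want", r"i prefer to pay full price"],
    "false urgency": [r"only \d+ left", r"offer ends in \d+", r"hurry"],
}

def detect_text_cues(ocr_text: str) -> dict[str, list[str]]:
    """Return dark-pattern categories whose textual cues appear in OCR'd screenshot text."""
    hits: dict[str, list[str]] = {}
    for pattern, cues in TEXT_CUE_LEXICONS.items():
        matched = [m.group(0) for cue in cues for m in re.finditer(cue, ocr_text, re.IGNORECASE)]
        if matched:
            hits[pattern] = matched
    return hits

if __name__ == "__main__":
    sample = "Hurry! Only 3 left in stock. [ Buy now ]  [ No thanks, I don't want free shipping ]"
    print(detect_text_cues(sample))
```

A detector along the lines the paper describes would pair textual cues like these with visual analysis of the screenshot in order to localize the offending UI element, rather than relying on keywords alone.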

Morozovaite, Viktorija (2023)

With the nascent rise of the voice intelligence industry, consumer engagement is evolving. The expected shift from navigating digital environments with a “click” of a mouse or a “touch” of a screen to “voice commands” has set off a race among digital platforms to become leaders in voice-based services. The European Commission's inquiry into the consumer IoT sector revealed that the development of the market for general-purpose voice assistants is spearheaded by a handful of big technology companies, raising concerns over the contestability of and growing concentration in these markets. This article posits that voice assistants are uniquely positioned to engage in dynamically personalized steering – hypernudging – of consumers toward market outcomes. It examines hypernudging by voice assistants through the lens of the abuse-of-dominance prohibition enshrined in Article 102 TFEU, showing that advanced user influencing, such as hypernudging, could become a vehicle for more subtle anticompetitive self-preferencing.

Mathur, Arunesh, et al. (2019)

The authors present automated techniques that enable experts to identify dark patterns on a large set of websites. Using these techniques, they study shopping websites, which often use dark patterns to influence users into making more purchases or disclosing more information than they would otherwise. They examine these dark patterns for deceptive practices and find 183 websites that engage in deception. They also uncover 22 third-party entities that offer dark patterns as a turnkey solution. Finally, they develop a taxonomy of dark pattern characteristics that describes their underlying influence and their potential harm to user decision-making. Based on these findings, they make recommendations for stakeholders, including researchers and regulators, to study, mitigate, and minimize the use of these patterns.
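The abstract above only outlines the crawling-and-analysis approach. As a minimal sketch of the kind of text-segment extraction step such a measurement pipeline might start from, the snippet below pulls short, user-visible text segments out of product-page HTML with BeautifulSoup and flags segments matching a simple scarcity/urgency regex; the example HTML, the regex, and the extract_segments helper are illustrative assumptions, not the authors’ actual crawler or classification method.

```python
import re
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

# A simple scarcity/urgency cue (an illustrative assumption, not the paper's classifier).
CUE = re.compile(r"(only \d+ left|\d+ (people|others) (are )?viewing|sale ends)", re.IGNORECASE)

def extract_segments(html: str) -> list[str]:
    """Collect short, user-visible text segments from a product page, roughly the
    unit of analysis a dark-pattern measurement pipeline might examine."""
    soup = BeautifulSoup(html, "html.parser")
    segments = [el.get_text(" ", strip=True) for el in soup.find_all(["p", "span", "button"])]
    return [s for s in segments if 0 < len(s) <= 120]

if __name__ == "__main__":
    page = """
    <div class="offer"><span>Only 2 left in stock!</span>
    <p>14 people are viewing this item right now.</p>
    <button>Add to cart</button></div>
    """
    for seg in extract_segments(page):
        flag = "POSSIBLE DARK PATTERN" if CUE.search(seg) else "ok"
        print(f"{flag:>22}: {seg}")
```

A study at the scale the authors describe would collect such segments across thousands of product and checkout pages and rely on more robust grouping and expert review rather than a single hand-written rule.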

Mathur, Arunesh, et al. (2021)

The article reviews recent work on dark patterns and demonstrates that the literature does not reflect a singular concern or consistent definition, but rather a set of thematically related considerations. Drawing from scholarship in psychology, economics, ethics, philosophy, and law, the authors articulate a set of normative perspectives for analyzing dark patterns and their effects on individuals and society, and show how future research on dark patterns can go beyond subjective criticism of user interface designs and apply empirical methods grounded in normative perspectives.
