Natalia Mikhaylovna Esmurzayeva
Independent Researcher, Ukraine
Correspondence to: Natalia Mikhaylovna Esmurzayeva, esmur89@gmail.com

Additional information
- Ethical approval: N/a
- Consent: N/a
- Funding: No industry funding
- Conflicts of interest: N/a
- Author contribution: Natalia Mikhaylovna Esmurzayeva – Conceptualization, Writing – original draft, review and editing.
- Guarantor: Natalia Mikhaylovna Esmurzayeva
- Provenance and peer-review: Unsolicited and externally peer-reviewed
- Data availability statement: N/a
Keywords: Linguistic choice architecture, Digital nudging, Cognitive bias framing, Hypernudging ethics, Online consumer decision-making.
Peer Review
Received: 11 December 2025
Last revised: 4 February 2026
Accepted: 6 February 2026
Version accepted: 8
Published: 20 February 2026
Plain Language Summary Infographic

Abstract
Behavioural economics has established that cognitive biases and framing effects systematically shape human decision-making. However, the linguistic component of framing and its interaction with digital choice environments remain under-theorised. This article is a structured conceptual/theoretical review (theory-building) designed for model development. Accordingly, the manuscript should not be interpreted as a systematic review or meta-analysis; the PRISMA-inspired flow diagram is used solely as an illustrative transparency overview for evidence mapping. It synthesises behavioural economics, cognitive psychology, linguistics, and HCI/UX evidence to conceptualise linguistic cues as structural components of digital choice architecture.
We show that linguistic cues – such as gain–loss wording, emotional valence, normative phrasing, grammatical tense, and ownership language – activate core behavioural mechanisms including loss aversion, affect heuristics, intertemporal preferences, and social proof. These mechanisms are further shaped by digital environments where defaults, microcopy, and interface cues reinforce or counteract linguistic framing. Building on interdisciplinary findings, the article introduces three author-developed models that extend behavioural economics theory. The Linguistic Choice Architecture (LCA) Matrix maps linguistic cues onto behavioural mechanisms and their digital expressions, positioning language as an integral component of choice architecture.
The Process Mechanism Model (PMeM) explains how linguistic triggers, cognitive evaluations, neural activation, and digital architectures sequentially transform stimuli into economic decisions. The Mind–Interface–Message (MIM) Alignment Model conceptualises economic choices as outcomes emerging from the alignment or misalignment of cognitive mechanisms (Mind), digital environments (Interface), and linguistic framing (Message). These models collectively demonstrate that linguistic framing is not merely communicative but constitutes behavioural infrastructure that shapes decision processes in contemporary digital markets.
Highlights
- This study conceptualises linguistic cues as structural components of choice architecture that systematically influence consumer decision processes in digital environments.
- The Linguistic Choice Architecture (LCA) Matrix shows how wording features activate behavioural mechanisms and how those mechanisms are reinforced through interface design.
- The Process Mechanism Model demonstrates how linguistic triggers, cognitive evaluations, and digital architectures interact sequentially to shape decision outcomes.
- The Mind–Interface–Message Alignment Model explains why specific combinations of cognitive, linguistic, and digital factors intensify or weaken behavioural effects.
- The review highlights the need for ethical, transparent communication practices that reduce manipulation risks and support autonomous consumer decision-making.
Introduction
Language has long been recognised as a powerful framing device in human decision-making, yet its role within behavioural economics remains comparatively under-theorised. Classical and contemporary findings show that subtle linguistic cues can shift risk preferences, alter emotional evaluations, and reshape perceived utility, especially in digital environments where decisions are increasingly mediated by interface design. Recent work in persuasive communication confirms that message framing systematically modulates cognitive and affective pathways involved in consumer choice.1 However, most behavioural research still prioritises cognitive biases and environmental nudges, paying limited attention to the linguistic mechanisms through which these biases are triggered.
At the same time, scholars caution that behavioural interventions must be understood within broader structural contexts. Chater and Loewenstein argue that excessive focus on individual-level nudges (“i-frame”) obscures systemic influences on behaviour (“s-frame”),2 while Sunstein highlights that nudges may fail when linguistic framing interacts unpredictably with cognitive heuristics.3 Ethical concerns further complicate this landscape: digital choice environments increasingly deploy personalised or opaque nudges, raising questions about fairness and autonomy.4 Yeung describes these dynamics as “hypernudging,” where big-data systems continuously adjust linguistic and interface cues,5 and audits of dark patterns reveal how such mechanisms can be used manipulatively.6
Building on this regulatory perspective, Shin conceptualises algorithmic infrastructures as key vectors of misinformation and bias that reshape how users perceive and trust digital communication, emphasising that opaque recommendation and curation systems structurally distort online choice environments.7,8 Against this backdrop, the present study examines how linguistic framing, cognitive mechanisms, and digital architectures jointly shape economic behaviour. This study advances a conceptual framework that integrates linguistic, cognitive, and digital factors in contemporary choice architecture. The research is guided by the following questions:
- How do linguistic cues activate behavioural mechanisms central to economic decision-making?
- How do digital environments reinforce or counteract these linguistic effects?
- How can linguistic and digital architectures be aligned to support informed, autonomous consumer choice?
This integrative framework positions language as a structural component of contemporary choice architecture and expands behavioural economics toward a more comprehensive account of digitally mediated decision-making.
Methodological Approach and Evidence Base
This manuscript is a structured conceptual/theoretical review (theory-building) rather than a PRISMA-compliant systematic review or meta-analysis. A staged screening process was used (including full-text eligibility assessment) to narrow the conceptual evidence base. The PRISMA-inspired figure is presented solely as an illustrative transparency overview; no protocol registration and no formal risk-of-bias assessment were conducted.
Search Strategy and Inclusion Criteria
The literature search covered the period 2006–2024, reflecting the expansion of behavioural research into digital environments. Searches were conducted in Scopus, Web of Science, Google Scholar, PubMed, and HCI/UX digital libraries (ACM Digital Library, IEEE Xplore). Keywords included: “linguistic framing,” “behavioural economics,” “choice architecture,” “digital nudging,” “interface design,” “cognitive biases,” “affect heuristic,” “loss aversion,” “dark patterns,” “hypernudging,” “foreign-language effect,” “intertemporal choice.”
Inclusion criteria comprised: peer-reviewed behavioural, linguistic, cognitive, and neuroeconomic studies; experimental research on framing effects and decision biases; HCI and UX studies on digital nudging; legal, policy, and ethics analyses of nudging and digital manipulation.
We excluded non-verified online content, opinion essays, and sources lacking empirical grounding. Sources were examined with attention to methodological transparency, replicability, sample robustness, cross-cultural reliability, and relevance to decision-architecture mechanisms.
Evidence Base and Analytical Procedures
In total, 186 records were identified through database searching. Of these, 91 underwent title/abstract screening, at which stage 26 were excluded. Full-text eligibility was assessed for the remaining 65 articles, of which 25 were excluded. This yielded 40 core studies included in the final synthesis and mapped to the LCA/PMeM/MIM frameworks. The most common full-text exclusion reasons were insufficient methodological transparency, lack of direct relevance to decision-architecture mechanisms (language/choice architecture/digital interfaces), and non-empirical or purely opinion-based claims (Figure 1).

Source: Developed by the author.
Table 1 summarises the categories of evidence and analytical procedures used in the study.
Table 1: Evidence base and analytical procedures.

| Category/Method | Description |
| --- | --- |
| Behavioural economics research | Experimental and theoretical work on framing, heuristics, nudges, intertemporal choice, bounded rationality, and decision biases. |
| Linguistic and psycholinguistic studies | Evidence on emotional valence, gain–loss wording, grammatical framing, social-norm activation, and the foreign-language effect. |
| HCI and digital-design research | Studies on digital nudging, interface-mediated choice architecture, attention-economy dynamics, and dark-pattern design strategies. |
| Ethical, regulatory, and theoretical frameworks | Analyses of autonomy, transparency, hypernudging, fairness, and critiques of behavioural public policy. |
| Conceptual content analysis | Categorisation of evidence into behavioural mechanisms (loss aversion, affect heuristic, intertemporal preferences), linguistic cues (emotional wording, grammatical tense, ownership framing), and digital architectures (defaults, scarcity cues, microcopy, algorithmic reinforcement). |
| Integrative modelling | Synthesis of cross-disciplinary insights to construct three author-developed models: the LCA Matrix, the PMeM, and the MIM Alignment Model. |
| Cross-disciplinary triangulation | Comparison of behavioural, linguistic, neural, and digital-interaction findings to identify convergent mechanisms shaping decision processes. |

Source: Developed by the author.
Screening rationale and common exclusion reasons. At title/abstract stage, records were most frequently excluded because they (a) did not examine decision-making or choice behaviour, (b) lacked a linguistic component (framing/microcopy/normative wording), (c) did not involve digital choice environments (UI/UX/platform architectures), or (d) were non-academic or non-verifiable sources. At full-text stage, the most common exclusion reasons were (a) insufficient methodological transparency to interpret claims, (b) indirect or tangential relevance to the paper’s mechanisms (e.g., generic persuasion without decision or interface linkage), and (c) purely opinion-based or commentary formats without empirical grounding.
Mapping and annotation procedure. Each retained source was annotated using a lightweight template capturing the study domain and design/type, sample/data (where applicable), the key contribution to decision-architecture mechanisms, and its relevance mapping to LCA, PMeM, and/or MIM. Where a source directly specified linguistic cues, behavioural mechanisms, or interface expressions, these were noted within the “Key contribution” field to support traceability. Mapping decisions are summarised in Appendix A. Although no formal risk-of-bias assessment was performed, retained empirical studies were prioritised when they reported clear experimental manipulations, transparent measures, interpretable samples/data, and direct relevance to decision-architecture mechanisms. Conceptual and policy sources were retained when they offered well-cited, field-recognised frameworks directly shaping the theoretical integration.
Ethical Considerations
Only publicly accessible academic sources were used. No personal or sensitive data were collected or processed.
Results
Behavioural Economics Mechanisms (Frames, Nudges, Cognitive Biases)
Behavioural economics reveals that individuals often deviate from “rational” decision-making due to cognitive biases and framing effects. Kahneman and Tversky famously demonstrated how the same choice presented in different linguistic frames can lead to opposite preferences, violating consistency axioms of rational choice. For example, when a disease outbreak scenario was framed in terms of lives “saved” vs “lost”, people switched from risk-averse to risk-seeking choices even though the outcomes were equivalent.9 Such experiments underline that humans use heuristics and are susceptible to biases in judgment under uncertainty.
Acknowledging these systematic biases, Thaler and Sunstein proposed the concept of nudges – subtle changes in choice architecture that leverage cognitive biases to steer people toward better decisions without restricting freedom. In their view, seemingly minor tweaks (e.g., setting a default option or changing wording) can significantly influence choices by exploiting tendencies like loss aversion or inertia. Thaler and Sunstein coined this approach libertarian paternalism, suggesting that well-designed nudges help individuals make “better” choices (e.g., enrolling in savings plans) while preserving choice autonomy.10 A classic nudge example is placing healthy foods at eye level to encourage nutritious choices – it doesn’t forbid junk food, but it changes the decision frame to favour the desired option.
Not all scholars agree on emphasising biases and nudges. Gigerenzer argues that behavioural economics has developed a “bias bias”, portraying people as irrational while neglecting their adaptive reasoning abilities.11 He points out that many so-called biases may not lead to large harms in real life and can reflect ecological rationality – efficient heuristics tuned to environments.12 Gigerenzer cautions against overreliance on paternalistic nudging based on a presumed latent irrationality. He contends that the evidence for our “irrationality” is often overstated or cherry-picked. For instance, rather than assuming individuals cannot learn, he emphasises educating people to improve decision-making (“boosting” their competencies) as a viable alternative to nudging.
Gigerenzer also notes that focusing blame on individuals’ biases ignores external forces – industries spend billions to sway consumer behaviour (e.g., toward unhealthy foods or unsafe loans) using similar psychological tricks. In summary, the behavioural mechanisms of framing, heuristics, and nudges highlight the malleability of choices; however, scholars debate whether to leverage these factors for benign guidance or to address the root causes (education and structural factors) of suboptimal decisions. This line of critique aligns with broader behavioural public policy debates, which argue that nudges should be complemented by “boosts” that strengthen statistical literacy, decision skills, and structural safeguards rather than relying solely on paternalistic interventions.2,11
Digital Decision-Making Environments (Platform Architectures, Attention Economy)
In the digital era, the context in which decisions are made has shifted to online platforms and apps, bringing new dimensions to choice architecture. Researchers note that “individuals make increasingly more decisions on screens”, where interface design and information overload strongly influence outcomes.13 The concept of digital nudging has emerged to describe how user interface (UI) elements guide behaviour in these environments.14 For example, Weinmann et al. define digital nudging as “the use of user-interface design elements to guide people’s behaviour in digital choice environments”.15 In practice, this means everything from the placement of a button, the default settings in an app, to the wording of an on-screen prompt can nudge users toward particular actions (such as adding an item to cart or sharing content). In online choice environments, there is essentially no neutral way to present options – the platform architecture inherently frames and filters decisions.
Critically, digital platforms operate within what has been called the attention economy, in which user attention is a scarce commodity being intensely competed for. Simon presciently observed as early as 1971 that “a wealth of information creates a poverty of attention”.16 In other words, when people are inundated with information online, their cognitive bandwidth becomes the limiting factor. Modern social media and e-commerce platforms are designed to capture and hold this limited attention through constant notifications, infinite scrolling feeds, personalised recommendations, and other architectural features. The platform architecture is often optimised to maximise engagement metrics, sometimes at the expense of user well-being. Indeed, techniques now employed by major websites explicitly leverage behavioural mechanisms to influence decisions.
Michel and Gandon17 note that social media companies have brought attention-capture to an “unprecedented scale” by using insights from psychology and neuroscience. They document how specific design features exploit cognitive biases: for instance, the “Like” button triggers social reward feedback loops, push notifications prey on our novelty-seeking instinct, and the pull-to-refresh mechanism on apps mimics the uncertain reward pattern of a slot machine. Likewise, infinite scroll feeds exploit fear of missing out (FOMO), making it hard for users to disengage. These interface designs are not accidental – they intentionally harness biases (e.g., our susceptibility to intermittent rewards or social approval) to keep users clicking and scrolling.
Importantly, the goals of digital nudges in platform architectures can diverge from the individual’s own interests. While Thaler and Sunstein’s nudges aimed to benefit the decision-maker (e.g., better health or savings), many attention-economy tactics serve the platform’s interest in prolonged usage or ad revenue. This reality confirms Gigerenzer’s warning that external agents are actively “nudging” people in directions that may harm them.11 For example, an autoplay feature that queues the next video can exploit our inertia to increase watch time, even if it leads to excessive screen time. Scholars are increasingly concerned with such “dark patterns”, defined as interface designs that subvert user autonomy or manipulate choices for corporate gain. The net effect is that digital decision environments can amplify cognitive biases: users may act more impulsively or narrowly than they would in a less distracting setting.
As Simon’s theory predicts, when attention is stretched thin, people rely even more on mental shortcuts. Thus, platform architecture and the attention economy have become central to understanding decision-making today – they can either mitigate cognitive limitations (e.g., supportive choice architecture that simplifies decisions) or exacerbate biases and exploitation. An emerging consensus is that digital choice environments must be designed ethically, balancing persuasive techniques with user welfare in mind.17 Recent work on human–algorithm interaction further argues that misinformation dynamics, biased recommendations, and algorithmic governance co-evolve with these architectures, making issues of trust, accountability, and epistemic quality central to any behavioural analysis of digital communication.7,8
The Linguistic Dimension in Behavioural Economics (Language as a Choice-Modifying Instrument)
Language itself can be a powerful instrument for shaping decisions. Behavioural economists recognize that the way choices are described – the framing, wording, and context – systematically influences preferences. As noted, Tversky and Kahneman’s studies on framing demonstrated that simple linguistic tweaks (e.g., “200 people will be saved” vs “400 people will die”) produce significant shifts in choice outcomes.9 Words carry connotations and emotional weight, which can trigger cognitive biases. A positively framed outcome highlights benefits and tends to encourage risk-aversion, whereas a negatively framed but equivalent outcome highlights losses and can induce risk-seeking behaviour. This framing effect underlines that language is not a neutral vehicle of information; it actively shapes perception and decision criteria.
Contemporary research deepens this understanding of the linguistic dimension. Capraro proposes a “LENS” model to explain how linguistic content alters decision-making.18 According to this model, the Linguistic framing of a decision activates certain Emotional responses and suggests social Norms, which together influence the individual’s ultimate Strategic choice. In other words, how a choice is worded can evoke emotions (like fear, empathy, excitement) and implicit norms (“what one is supposed to do in this situation”), thereby steering behaviour beyond what standard economic utility would predict. For example, labeling a tax as a “carbon offset” versus a “carbon tax” has been shown to affect people’s willingness to pay – the former phrase induces a positive normative connotation of doing good, whereas the word “tax” invokes negative emotions, even though the economic outcome is the same. Language can thus serve as a subtle form of nudging: by choosing certain words or frames, policymakers and marketers can non-coercively influence choices.
The influence of language on decision-making is also evident in cross-cultural and experimental findings. Keysar and colleagues discovered what is known as the foreign-language effect: when people make decisions in a non-native language, they tend to be less swayed by emotional biases and framing nuances. For instance, in experiments, bilingual participants presented with classic decision problems (like financial gambles or moral dilemmas) in their second language showed more consistent, less loss-averse choices compared to when they responded in their native tongue. The hypothesis is that a foreign language provides greater cognitive distance – words have less emotional resonance – allowing more analytical processing and reducing intuitive biases. This finding reinforces the idea that the emotional tone of language (which is dampened in a foreign tongue) is a key driver of biases. Likewise, subtle shifts in wording can alter mental accounting: saying a fee is a “small loss” versus a “small cost” might change willingness to pay due to loss aversion triggered by the word “loss.”19
Behavioural economist Chen even found that the grammar of one’s language correlates with economic behaviours: languages that grammatically separate the future (like English “will rain”) versus those that treat it more like the present (“rains tomorrow” in Chinese) were associated with differences in saving rates, presumably because speaking about the future as a separate category makes it feel more distant and less urgent.20 In sum, language shapes thought – by framing expectations, evoking emotions, and invoking norms – thus serving as a potent tool for modifying choices within the behavioural economics framework. At the same time, broader debates in behavioural public policy and economic psychology caution that such language–savings associations are likely to be context-sensitive and sensitive to measurement choices, and are therefore better understood as probabilistic tendencies rather than universal laws.12,21
To operationalise this conceptualisation of language as a choice-shaping instrument, this study introduces an authorial framework – the LCA Matrix. The framework systematises how specific linguistic cues activate behavioural mechanisms and how these mechanisms are subsequently reinforced or expressed within digital interface design. The matrix serves as an applied bridge between linguistic theory, behavioural economics, and digital choice architecture, illustrating how wording choices can directly shape consumer decision outcomes (Figure 2).

Source: Developed by the author.
To illustrate how to read the LCA Matrix, we provide brief examples for the ownership and scarcity rows. Scarcity-oriented microcopy (e.g., “Only 3 left”, “Offer ends tonight”), when combined with countdown timers or salient defaults, is expected to heighten perceived urgency and FOMO and thereby increase the likelihood of urgency-driven purchases under time pressure and low transparency. Ownership wording (e.g., “keep your plan”, “your subscription”) in the presence of preselected renewal options is expected to strengthen endowment and anticipated loss, increasing renewal or upgrade intentions. These effects are likely to attenuate or even elicit reactance when cues are overused or perceived as overtly manipulative. Accordingly, the LCA Matrix is intended as a hypothesis-generating framework that links linguistic triggers, behavioural mechanisms and digital expressions with expected effect directions and boundary conditions, rather than as a fixed taxonomic list.
Figure 2: A framework illustrating how linguistic cues activate behavioural mechanisms, how these mechanisms are expressed within digital interface design, and how they jointly shape expected consumer decision outcomes.
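The worked rows above can also be expressed as structured data. The following is a minimal illustrative sketch in Python: the row contents paraphrase the scarcity and ownership examples from the text, and the field names (`linguistic_cue`, `mechanism`, and so on) are our own labels, not terminology from the published matrix.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LCARow:
    """One illustrative row of the LCA Matrix."""
    linguistic_cue: str        # wording feature
    mechanism: str             # behavioural mechanism it activates
    digital_expression: str    # interface feature that reinforces it
    expected_effect: str       # hypothesised direction of effect
    boundary_condition: str    # when the effect may attenuate or reverse

# Two rows reconstructed from the worked examples in the text.
LCA_MATRIX = [
    LCARow(
        linguistic_cue='scarcity microcopy ("Only 3 left", "Offer ends tonight")',
        mechanism="urgency / FOMO heuristics",
        digital_expression="countdown timers, salient defaults",
        expected_effect="more urgency-driven purchases under time pressure",
        boundary_condition="reactance when cues are overused or seen as manipulative",
    ),
    LCARow(
        linguistic_cue='ownership wording ("keep your plan", "your subscription")',
        mechanism="endowment effect, anticipated loss",
        digital_expression="preselected renewal options",
        expected_effect="higher renewal or upgrade intentions",
        boundary_condition="attenuates when framing is perceived as coercive",
    ),
]

def rows_for_mechanism(keyword: str) -> list:
    """Look up matrix rows whose mechanism field mentions a keyword."""
    return [r for r in LCA_MATRIX if keyword.lower() in r.mechanism.lower()]

print([r.linguistic_cue for r in rows_for_mechanism("endowment")])
```

Representing the matrix this way makes each cue–mechanism–interface triple queryable, which is one practical route to the hypothesis-generating use the text describes.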
Empirical and Neural Evidence on Linguistic and Digital Framing
Empirical findings from neuroscience, linguistics and human–computer interaction further demonstrate how language and digital environments shape decision biases. Neuroeconomic studies show that framing identical outcomes as gains or losses produces distinct neural activation patterns: loss frames increase amygdala activity and correlate with risk-seeking,19 while individuals who resist linguistic framing biases exhibit greater frontal activation, associated with cognitive control.22 These results support the claim that framing taps into domain-specific emotional valuation mechanisms.
Linguistic and cross-cultural studies provide converging evidence that the emotional resonance of language modulates decision-making. The foreign-language effect demonstrates that people acting in a non-native language show reduced loss aversion and more consistent choices,19,23 although the effect can weaken under specific heuristics or task structures.23,24 This aligns with Capraro’s LENS model,18 which proposes that linguistic cues evoke emotional and normative responses that guide strategic behaviour. Similarly, grammar influences intertemporal preferences: speakers of languages with explicit future tense marking exhibit lower saving rates, presumably because the future feels psychologically distant.20
Digital communication introduces additional empirical layers. UX researchers have shown that microcopy – brief textual cues in buttons or forms – acts as a digital nudge, promoting action when framed in terms of benefits or positive outcomes, whereas anxiety-inducing or cautionary wording can backfire.25 At the same time, large-scale audits reveal the prevalence of dark patterns that exploit default effects, framing, and user inertia for commercial gain.26,27 These findings highlight that linguistic and interface-driven framing jointly shape real-time decisions in digital spaces. Building on these empirical and theoretical insights, we integrate linguistic, cognitive and digital determinants of decision-making into a unified processual perspective. The author-developed PMeM (Figure 3) illustrates how a stimulus is transformed into a consumer decision through sequential interactions between linguistic triggers, cognitive mechanisms, neurobiological responses and the architecture of digital environments.

Source: Developed by the author.
Figure 3 visualises the sequential pathway through which a stimulus is converted into a consumer decision. Linguistic cues initiate the process by shaping perception; cognitive mechanisms respond through heuristics, biases and emotional evaluations; and the digital environment further reinforces or redirects these tendencies through interface design, defaults and algorithmic cues. This model clarifies how these factors interact over time and why decisions in digital contexts emerge as the cumulative result of multiple layers of influence.
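The sequential logic of the PMeM can be sketched as a toy pipeline. The stage weights below are hypothetical placeholders chosen only to show the ordering and compounding of influences; they are not estimated parameters and do not come from the model itself.

```python
# Toy formalisation of the PMeM sequence: linguistic trigger ->
# cognitive evaluation -> digital reinforcement. All multipliers
# are illustrative assumptions, not empirical estimates.

def linguistic_stage(salience: float, loss_framed: bool) -> float:
    """Linguistic cues shape initial perception; loss frames weigh heavier."""
    return salience * (1.4 if loss_framed else 1.0)

def cognitive_stage(signal: float, analytical: float) -> float:
    """Heuristic/affective response, dampened by analytical processing (0..1)."""
    return signal * (1.0 - 0.5 * analytical)

def digital_stage(signal: float, default_preselected: bool) -> float:
    """Interface defaults reinforce or leave the tendency unchanged."""
    return signal * (1.2 if default_preselected else 1.0)

def decision_tendency(salience: float, loss_framed: bool,
                      analytical: float, default_preselected: bool) -> float:
    s = linguistic_stage(salience, loss_framed)
    s = cognitive_stage(s, analytical)
    return digital_stage(s, default_preselected)

# Same stimulus, two framings: a loss frame plus a preselected default
# yields a stronger decision tendency than neutral wording with no default.
strong = decision_tendency(1.0, loss_framed=True, analytical=0.2, default_preselected=True)
weak = decision_tendency(1.0, loss_framed=False, analytical=0.2, default_preselected=False)
assert strong > weak
```

The point of the sketch is the composition: each stage operates on the output of the previous one, which is why the model treats digital decisions as the cumulative result of layered influences rather than a single framing effect.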
Ethical and Structural Debates in Behavioural Digital Communication
The interpretation of linguistic and digital framing is contested. Gigerenzer11,12 argues that behavioural science overstates human irrationality and neglects ecological rationality – adaptive heuristics that function efficiently in real environments. Related critiques by Chater and Loewenstein2 and Andreas and Jabakhanji27 warn that an excessive “i-frame” focus on individual nudges obscures broader structural determinants of behaviour, proposing instead “s-frame” interventions targeting systemic conditions.
Ethical concerns further complicate the landscape. Scholars highlight that digital platforms increasingly employ algorithmic and big-data-driven “hypernudges”5 that continuously adjust linguistic and interface cues, raising questions about autonomy and manipulation. Others emphasise that framing is inherent to communication and cannot be eliminated, only governed ethically.3,4 Collective evidence suggests that linguistic and interface-level cues must be evaluated not only for effectiveness but also for fairness, transparency and respect for user autonomy. Shin’s work on debiasing AI extends this line of reasoning by arguing that governance must move beyond isolated nudges to the design of sustainable, value-sensitive algorithmic systems, foregrounding distributive justice, long-term societal trust, and the environmental footprint of AI-driven choice infrastructures.7,28
Taken together, these findings suggest that language functions not merely as a medium of communication but as a structural element shaping how choices are perceived, evaluated and enacted. Building on this synthesis, we introduce the concept of linguistic choice architecture – the idea that wording, framing and narrative cues operate as an infrastructural layer of decision-making, analogous to interface-based choice architecture in digital environments. To our knowledge, prior research has examined linguistic effects in isolation (framing, emotional valence, norms) but has not conceptualised language as a behavioural infrastructure that systematically organises the decision space. This article develops this perspective further and incorporates it into the broader theoretical framework proposed by the author.
Applied Implications: Linguistic Framing in Digital Markets
Digital markets demonstrate how linguistic and interface cues interact to steer consumer behaviour. Platforms frequently rely on defaults, social-proof cues, scarcity messages and FOMO-inducing microcopy to leverage cognitive biases such as inertia, loss aversion and herd behaviour.29 E-commerce environments depend heavily on textual persuasion: concrete, sensory product descriptions increase trust, and subtle changes in call-to-action phrasing (“Buy now” vs “Add to cart”) shift purchasing intentions. Applied A/B testing reports frequently favour gain-framed variants in some commercial contexts;30 however, effect direction and magnitude are highly context-dependent and sensitive to audience, domain, and implementation details.
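When applied A/B reports compare two CTA wordings, the underlying comparison is typically a test of two conversion proportions. The following is a minimal sketch using Python's standard library; the conversion counts are invented purely for illustration and do not come from any study cited here.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided two-proportion z-test comparing conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided p-value
    return z, p_value

# Hypothetical counts: a gain-framed CTA converts 260/2000 visitors,
# a loss-framed variant 210/2000.
z, p = two_proportion_ztest(260, 2000, 210, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")  # roughly z = 2.46, p = 0.014
```

Even with a nominally significant result, the section's caveat stands: effect direction and magnitude remain sensitive to audience, domain, and implementation, so a single test like this cannot establish a general superiority of gain framing.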
Worked example (audit vignette): subscription cancellation flow (LCA × MIM). Consider a subscription cancellation screen where users encounter loss-framed microcopy (“You will lose premium access today”), ownership wording (“Keep my plan”), and a countdown timer. From an LCA perspective, this stacks loss aversion, endowment effects, and urgency heuristics, reinforced by asymmetric button salience. While such design may increase short-term retention, it concentrates multiple behavioural mechanisms at a high-stakes billing decision. Applying the MIM model reveals a likely misalignment: the platform’s retention incentive (Interface) and pressure-laden wording (Message) conflict with the user’s immediate goal to stop charges (Mind). This misalignment predicts increased reactance, reduced trust, and post-decision dissatisfaction. An LCA × MIM-informed redesign would replace loss framing with neutral clarity (“Charges stop after [date]”), remove urgency cues, and restore symmetric opt-out salience. Outcomes to assess include cancellation completion, perceived autonomy and trust, and downstream churn reactivation, illustrating how the models guide ethical audit and design decisions beyond raw conversion metrics.
Yet these mechanisms can be used manipulatively. Misleading countdown timers, fabricated scarcity phrases (“Only 3 left!”), and confirmshaming wording exploit emotional leverage rather than aid decision-making.27 Regulatory bodies increasingly scrutinise dark patterns that obscure terms or induce compliance through guilt, as reflected for example in recent provisions of the EU Digital Services Act and in US Federal Trade Commission guidance on deceptive design practices. Best practices emphasise transparent, user-centric nudges – “nudges, not sludge” – that simplify choices without concealing intentions. These applied cases underscore why linguistic framing must be analysed jointly with digital choice architecture. These findings align with broader evidence that dark patterns can intentionally distort user choice, operating at the boundary of ethical and legal acceptability.6,25
In practical terms, this implies subjecting microcopy and interface flows to simple “nudges, not sludge” checks, including transparency of costs and conditions, reversibility and ease of opt-out, and multilingual fairness, whereby semantically equivalent, non-coercive wording is maintained across language versions.
Emerging Research Directions in AI-Mediated Choice Architecture
As digital ecosystems evolve, new research questions arise concerning the long-term effects of linguistic framing and digital nudging. Repeated exposure may attenuate framing impact or increase user scepticism, underscoring the need for longitudinal designs.29 The rise of algorithmic personalisation generates “hypernudges,” where AI tailors linguistic cues in real time based on user data.5,31 Such systems raise concerns about opacity, autonomy and potential coercion, prompting calls for transparent, explainable AI nudges and oversight structures such as ethics boards and regulatory sandboxes.17 Further directions include multimodal framing (language + visuals + voice agents), cross-cultural variability in digital persuasion, and models incorporating “framing utility,” where linguistic form itself becomes part of the decision function. These trends highlight the need for integrative frameworks capable of unifying cognitive, linguistic and interface-level mechanisms.
The Mind–Interface–Message Alignment Model
Drawing together these insights, we can conceptualise decision-making as the outcome of an interplay between cognitive mechanisms, environmental context, and linguistic framing. Behavioural mechanisms (like heuristics, biases, and preferences) form the psychological substrate that makes humans susceptible to certain influences. The decision environment, especially today’s digital platforms, acts as a choice architecture that can amplify or dampen those tendencies – a well-designed environment might guide users toward beneficial choices, while a manipulative one exploits biases for profit.
Meanwhile, the language and messaging surrounding choices provide the narrative that triggers emotional and normative cues, effectively tilting the decision one way or another. Each scholar’s viewpoint adds a piece to this puzzle. The digital-age researchers warn that when our environment is engineered to capture attention and exploit habit, our decisions can be subtly coerced without us realising – raising ethical questions. In response, the linguistic and cultural findings (Capraro, Keysar, Chen) suggest that reframing problems or adjusting our mode of thinking (even down to which language we use) can reclaim some agency by altering how we process information.
Building on the reviewed theoretical perspectives, we propose the Mind–Interface–Message (MIM) Alignment Model, an author-developed conceptual framework explaining how cognitive mechanisms (Mind), digital choice environments (Interface) and linguistic cues (Message) jointly shape consumer decision outcomes. The model conceptualises decision-making as the outcome of interaction between three forces:
- Behavioural mechanisms (biases, heuristics, nudges),
- Digital choice architecture (platform design, UI pathways, attention economy), and
- Linguistic framing (verbal cues, emotional tone, norm activation).
The model assumes that decision outcomes improve when cognitive mechanisms, digital environments, and linguistic framing are aligned, and weaken when misaligned. Linguistic cues shape how users perceive the digital environment, while interface structures amplify or counteract these cognitive tendencies. Figure 4 visualises this interplay.

Source: Developed by the authors.
Decision effects do not stem from any single factor. Instead, the influence of a digital nudge depends simultaneously on UI design, message wording, and the cognitive mechanisms it activates. To facilitate empirical testing and audit applications, the MIM Alignment Model can be operationalised using a minimal measurement plan. Mind-level outcomes may be assessed via validated self-report scales capturing perceived autonomy (e.g., autonomy-support measures grounded in Self-Determination Theory), trust (e.g., trust-in-technology or platform-trust scales), and psychological reactance.
Interface-level outcomes include observable behavioural indicators such as choice completion, opt-out rates, time-to-click, dwell time, and downstream retention or churn. Message-level effects can be examined through controlled microcopy manipulations (gain–loss framing, ownership wording, normative cues) combined with interface variations (defaults, salience, timing). A minimal reporting checklist should specify: (a) linguistic manipulation, (b) interface configuration, (c) hypothesised alignment or misalignment condition, (d) primary behavioural outcome, (e) mediators (autonomy, trust, reactance), and (f) boundary conditions (time pressure, literacy, transparency cues).
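The minimal reporting checklist above can be made concrete as a structured record. The following Python sketch (all field and class names are hypothetical illustrations, not part of any published instrument) encodes items (a)–(f) so that individual studies or audits can be logged and compared in a uniform format:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class MIMStudyRecord:
    """Illustrative minimal reporting record for a MIM-style study or audit."""
    linguistic_manipulation: str            # (a) e.g. gain-loss framing, ownership wording
    interface_configuration: str            # (b) defaults, salience, timing
    alignment_condition: str                # (c) hypothesised "aligned" or "misaligned"
    primary_outcome: str                    # (d) primary behavioural outcome
    mediators: list = field(                # (e) autonomy, trust, reactance
        default_factory=lambda: ["autonomy", "trust", "reactance"])
    boundary_conditions: list = field(default_factory=list)  # (f)

# Example: the subscription-cancellation vignette expressed as a record.
record = MIMStudyRecord(
    linguistic_manipulation="loss-framed vs neutral cancellation microcopy",
    interface_configuration="countdown timer present vs absent; asymmetric button salience",
    alignment_condition="misaligned",
    primary_outcome="cancellation completion rate",
    boundary_conditions=["time pressure", "disclosure salience"],
)
print(asdict(record)["alignment_condition"])  # -> misaligned
```

Serialising each study this way would let reviewers verify at a glance that all six checklist items were reported.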
Positioning the LCA, PMeM and MIM Models within Existing Frameworks
Existing behavioural and HCI frameworks offer important reference points for the three models proposed in this article. MINDSPACE32 and the EAST framework33,34 specify psychological levers and design principles for nudging, while HCI nudging taxonomies catalogue interface patterns such as default settings, reminders and salience cues in digital choice environments. Capraro’s LENS model brings language more explicitly into focus by showing how linguistic content activates emotions and social norms that guide strategic behaviour. However, these approaches typically treat wording, interface design and cognitive processing as separate layers. Related work on persuasive systems design35 and the Fogg Behaviour Model36 further demonstrates how digital architectures combine cues, triggers and perceived simplicity to steer user behaviour, but they do not systematically differentiate linguistic features within the choice-architecture layer.
By contrast, the Linguistic Choice Architecture (LCA) Matrix treats specific linguistic features (e.g., gain–loss framing, emotional adjectives, normative cues, grammatical tense, ownership wording and scarcity phrases) as structural elements of choice architecture that are directly linked to behavioural mechanisms and concrete interface expressions (discount displays, microcopy, pop-ups, “Pay now/Pay later” buttons, personalised notifications, countdown timers) and their expected decision outcomes. The Process Mechanism Model (PMeM) extends this by modelling the full sequence from stimulus through linguistic triggers, cognitive and neurobiological processes, and the digital environment to algorithmically modified decisions and ethical safeguards.
Finally, the Mind–Interface–Message (MIM) Alignment Model formalises how cognitive mechanisms (Mind), digital choice architecture and dark/transparent nudges (Interface), and framing, emotional valence and verbal triggers (Message) can align or misalign, thereby amplifying, neutralising or reversing behavioural effects. Table 2 summarises how these models complement and extend existing frameworks. A detailed side-by-side construct mapping is provided in Appendix D. Unique, falsifiable predictions enabled by LCA/PMeM/MIM (beyond MINDSPACE/EAST/LENS/Fogg/PSD):
- a. Interface × microcopy interactions: the same linguistic trigger changes sign under different default/opt-out architectures.
- b. Alignment vs misalignment predicts reactance under transparency cues (cooling-off, disclosure salience).
- c. Multilingual deployment shifts effect sizes via emotional resonance (foreign-language attenuation vs confusion).
- d. Algorithmic reinforcement amplifies or dampens linguistic effects over repeated exposures (habituation vs escalation).
In preregistered experimental designs, comparative support for the LCA, PMeM, and MIM frameworks over LENS, PSD, or Fogg-type models can be adjudicated using factorial designs manipulating microcopy × default architecture, combined with behavioral outcomes and mediator analyses of perceived autonomy, trust, and psychological reactance, as specified in the minimal measurement plan.
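As a sketch of how such a factorial adjudication could look in practice, the following stdlib-only Python simulation draws a 2×2 microcopy (gain vs loss) × default architecture (opt-in vs opt-out) design and computes the interface × microcopy interaction contrast that prediction (a) above turns on. The cell probabilities are assumed, illustrative values, not empirical estimates:

```python
import random
from statistics import mean

random.seed(42)

# Assumed purchase probabilities per cell (illustrative only): the loss-frame
# advantage is larger under an opt-out default, i.e. a non-zero interaction.
P = {("gain", "opt_in"): 0.30, ("gain", "opt_out"): 0.45,
     ("loss", "opt_in"): 0.35, ("loss", "opt_out"): 0.70}

def simulate_cell(frame, default, n=2000):
    """Draw n binary purchase decisions for one factorial cell."""
    p = P[(frame, default)]
    return [1 if random.random() < p else 0 for _ in range(n)]

# Observed purchase rate in each of the four cells.
cells = {key: mean(simulate_cell(*key)) for key in P}

# Interaction contrast: does the loss-frame effect depend on the default?
loss_effect_opt_out = cells[("loss", "opt_out")] - cells[("gain", "opt_out")]
loss_effect_opt_in = cells[("loss", "opt_in")] - cells[("gain", "opt_in")]
interaction = loss_effect_opt_out - loss_effect_opt_in
print(round(interaction, 3))
```

A non-zero interaction contrast of this kind is exactly what LCA/PMeM/MIM predict and what a purely additive reading of LENS-, PSD- or Fogg-type models would not; in a real preregistered study the contrast would be tested inferentially (e.g., logistic regression with a frame × default term) alongside the mediator analyses described above.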
| Table 2: Comparison of existing behavioural/HCI frameworks and the LCA, PMeM and MIM models. | |||
| Framework/Approach | Main Focus and Mechanisms | What It Explains Well | Added Value of LCA/PMeM/MIM |
| MINDSPACE/EAST | Policy-level nudges based on psychological levers (norms, salience, affect, incentives) and design principles. | High-level guidance for behaviour change interventions. | LCA specifies concrete linguistic cues as choice-architecture elements; PMeM and MIM show how these cues interact with digital defaults and heuristics over time. |
| HCI nudging taxonomies | Interface patterns in digital environments (defaults, reminders, salience, social cues). | Classification of digital nudges and UI patterns. | LCA links UI patterns to underlying linguistic triggers and behavioural mechanisms; PMeM and MIM integrate these patterns into a Mind–Interface–Message process. |
| Message-framing and meta-analyses | Effects of gain–loss framing, emotional and normative wording on attitudes and choices. | Strong empirical evidence for framing effects. | LCA embeds framing categories (gain–loss, emotional, normative, temporal, ownership, scarcity) in a matrix with mechanisms and expected outcomes; PMeM and MIM connect framing to digital architectures and alignment conditions. |
| Capraro’s LENS model | Linguistic content → emotions and norms → strategic choices. | Formalises how language triggers emotional and normative pathways. | LCA generalises the linguistic features, PMeM adds digital-environment and neural stages, and MIM situates these dynamics within Mind–Interface–Message alignment in digital markets. |
| Source: Developed by the authors. | |||
Discussion
The proposed Linguistic Choice Architecture Matrix, Process Mechanism Model, and Mind–Interface–Message alignment framework collectively show that specific linguistic cues activate measurable behavioural mechanisms which are subsequently reinforced by interface design. In contrast to established digital nudging taxonomies that focus primarily on interface elements or decision structures, the present conceptualisation positions language as an independent, systematic source of behavioural influence. These claims are consistent with communication-science research on framing and persuasive message design, including meta-analytic evidence that gain–loss wording and emotional tone reliably but context-sensitively shift attitudes and choices,37,38 and with HCI/UX work on digital nudging and interface-mediated persuasion that documents how interface structures and microcopy guide user choices in practice.1,39,40
Together, these strands of evidence suggest that linguistic and digital-choice architectures can reliably shape behaviour, but also that their effects are contingent on context, audience characteristics, and implementation details rather than mechanically reproducible across all settings.2,11,12 While prior work in human–computer interaction has catalogued interface-mediated nudges, and behavioural research has examined framing effects, these domains have rarely been theorised together. The findings therefore bridge a conceptual gap by illustrating how linguistic framing, affective valence, temporal wording, and social-norm cues interact with digital architectures to steer consumer decisions.
For clarity, all three models introduced in this study – the LCA Matrix, the PMeM, and the MIM Alignment Model – are original author-developed conceptual frameworks not present in prior behavioural, linguistic, or HCI literature. To support responsible translation of these testable propositions into applied design and audit practice, Appendix B provides a short glossary of key terms, a practitioner-oriented “nudges, not sludge” checklist, and a conceptual mapping of how the models can inform compliance reviews and interface audits.
Conclusion
This study conceptualises language as a core component of behavioural choice architecture, demonstrating that linguistic cues systematically activate behavioural mechanisms and interact with digital environments to shape economic decisions. Linguistic framing interacts dynamically with digital choice architecture, shaping not only how choices are perceived but also how they are enacted. These insights underscore the importance of aligning linguistic and interface-level strategies to preserve user autonomy.
Limitations and Future Research
The study’s primary limitation lies in its conceptual nature: the LCA Matrix, PMeM, and MIM remain untested theoretical frameworks that have not yet been empirically validated. Although grounded in peer-reviewed behavioural, linguistic, and neurocognitive evidence, the proposed models require empirical validation across culturally and technologically diverse decision contexts. Moreover, behavioural effects may vary depending on platform design constraints, user proficiency, and language background, particularly in multilingual environments. Future studies can translate the proposed models into testable hypotheses. Below we outline three illustrative hypotheses (H1–H3) that specify expected effects, moderators, mediators, boundary conditions and measurement strategies, thereby operationalising the LCA, PMeM and MIM as a roadmap for empirical validation rather than claiming that these effects have already been demonstrated.
- H1 (LCA × interface design). Loss-framed microcopy combined with countdown timers and “Pay now” defaults will produce higher rates of urgency-driven purchases than neutral wording with static pricing and “Decide later” options. This effect is expected to be stronger under high time pressure and low financial literacy, and weaker when interfaces provide salient cooling-off periods and clear opt-out options.
- H2 (LCA × ownership mechanisms). Ownership-oriented wording (e.g., “keep your plan”, “your subscription”) will increase renewal and upgrade intentions relative to neutral wording (“the plan”, “the subscription”), with perceived ownership and anticipated loss mediating this relationship and regulatory disclosures moderating its magnitude.
- H3 (MIM alignment). When linguistic framing, interface design and users’ pre-existing goals are aligned (e.g., gain-framed health messages + supportive defaults + health-oriented users), decisions will be perceived as more autonomous and trustworthy than under misaligned conditions (e.g., opaque dark patterns combined with guilt-inducing loss frames), with perceived autonomy, trust and reactance acting as mediators.
To facilitate empirical uptake, Appendix C provides a compact set of testable propositions operationalising key LCA cells. Each proposition specifies the linguistic feature, behavioural mechanism, interface expression, predicted direction, moderators/mediators, boundary conditions, and candidate outcome measures. These hypotheses illustrate how the LCA, PMeM and MIM can be operationalised using standard experimental manipulations (microcopy variants, default settings, time pressure), with explicit moderators, mediators and boundary conditions, and measured through behavioural outcomes (click-through, purchase, opt-out), self-reports (perceived ownership, urgency, autonomy, trust) and, where relevant, psychophysiological indices of arousal.
Future research should operationalise the LCA Matrix and MIM framework in experimental and field settings, testing their predictive power for actual consumer choice behaviour. Integrating linguistic variables into behavioural economics models may improve the precision of choice architecture interventions and provide new methodological tools for evaluating digital communication ethics, transparency, and user autonomy. Future research should validate the LCA, PMeM and MIM across at least two high-impact domains – such as e-commerce and digital health – and across culturally diverse user groups. These domains allow systematic testing of how linguistic framing interacts with platform architectures, regulatory constraints, and user characteristics in real decision environments.
References
- Gier NR, Krampe C, Kenning P. Why it is good to communicate the bad: understanding the influence of message framing in persuasive communication on consumer decision-making processes. Front Hum Neurosci. 2023;17:1085810. https://doi.org/10.3389/fnhum.2023.1085810
- Chater N, Loewenstein G. The i-frame versus the s-frame: how focusing on individual-level solutions has led behavioral public policy astray. Behav Brain Sci. 2022;45:e131. https://doi.org/10.1017/s0140525x22002023
- Sunstein CR. Nudges that fail. Behav Public Policy. 2017;1(1):4–25. https://doi.org/10.1017/bpp.2016.3
- Schmidt AT, Engelen B. The ethics of nudging: an overview. Philos Compass. 2020;15(4):e12658. https://doi.org/10.1111/phc3.12658
- Yeung K. ‘Hypernudge’: Big Data as a mode of regulation by design. Inf Commun Soc. 2017;20(1):118–36. https://doi.org/10.1080/1369118X.2016.1186713
- Luguri JB, Strahilevitz LJ. Shining a light on dark patterns. J Legal Anal. 2021;13:43–109. https://doi.org/10.1093/jla/laaa006
- Shin D. Artificial misinformation: exploring human–algorithm interaction online. Cham: Palgrave Macmillan; 2024. https://doi.org/10.1007/978-3-031-52569-8
- Shin D. Minds, machines, and misinformation: decoding bias, algorithms, and trust. Amsterdam: Elsevier; 2025.
- Tversky A, Kahneman D. The framing of decisions and the psychology of choice. Science. 1981;211(4481):453–8. https://doi.org/10.1126/science.7455683
- Thaler RH, Sunstein CR. Nudge: improving decisions about health, wealth, and happiness. New York: Penguin; 2009.
- Gigerenzer G. On the supposed evidence for libertarian paternalism. Rev Philos Psychol. 2015;6(3):361–83. https://doi.org/10.1007/s13164-015-0248-1
- Gigerenzer G. The bias bias in behavioral economics. Rev Behav Econ. 2018;5(3–4):303–36. https://doi.org/10.1561/105.00000092
- Mirsch T, Lehrer C, Jung R. Digital nudging: altering user behavior in digital environments. In: Leimeister JM, Brenner W, editors. Proceedings of the 13th international conference on Wirtschaftsinformatik (WI 2017), St. Gallen. St. Gallen: [Publisher if available]; 2017. p. 634–48.
- Caraban A, Karapanos E, Gonçalves D, Campos P. 23 ways to nudge: a review of technology-mediated nudging in human–computer interaction. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems; 2019. p. 1–15. https://doi.org/10.1145/3290605.3300733
- Weinmann M, Schneider C, vom Brocke J. Digital nudging: guiding online user choices through interface design. Bus Inf Syst Eng. 2016;58(6):433–6. https://doi.org/10.1007/s12599-016-0453-1
- Simon HA. Designing organizations for an information-rich world. In: Greenberger M, editor. Computers, communications, and the public interest. Baltimore, MD: Johns Hopkins Press; 1971. p. 37–72.
- Michel F, Gandon F. Pay Attention: a call to regulate the attention market and prevent algorithmic emotional governance. arXiv preprint arXiv:2402.16670. 2024. Available from: https://hal.science/hal-04758276v1
- Capraro V. Human behaviour through a LENS: how linguistic content triggers emotions and norms and determines strategy choices. Curr Opin Psychol. 2025;64:102024. https://doi.org/10.1016/j.copsyc.2025.102024
- Keysar B, Hayakawa SL, An SG. The foreign-language effect: thinking in a foreign tongue reduces decision biases. Psychol Sci. 2012;23(6):661–8. https://doi.org/10.1177/0956797611432178
- Chen MK. The effect of language on economic behavior: evidence from savings rates, health behaviors, and retirement assets. Am Econ Rev. 2013;103(2):690–731. https://doi.org/10.1257/aer.103.2.690
- Andreas M, Jabakhanji SB. The I-frame vs. S-frame: how neoliberalism has led behavioral sciences astray. Front Psychol. 2023;14:1247703. https://doi.org/10.3389/fpsyg.2023.1247703
- Sun S, Hu J, Yu R. Domain-specific neural substrates underlie the framing effect. Neuroimage Rep. 2022;2(4):100119. https://doi.org/10.1016/j.ynirp.2022.100119
- Vives ML, Aparici M, Costa A. The limits of the foreign language effect on decision-making: the case of the outcome bias and the representativeness heuristic. PLoS One. 2018;13(9):e0203528. https://doi.org/10.1371/journal.pone.0203528
- Wang Z, Yip MCW. The foreign language effects on strategic behavior games. PLoS One. 2022;17(11):e0277556. https://doi.org/10.1371/journal.pone.0277556
- Muto M, Yang W. The influence of microcopy on user decision-making. In: Proceedings of the AHFE international conference on interdisciplinary practice in industrial design. AHFE Open Access; 2024. https://doi.org/10.54941/ahfe1005131
- Gray CM, Kou Y, Battles B, Hoggatt J, Toombs AL. The dark (patterns) side of UX design. In: Proceedings of the 2018 CHI conference on human factors in computing systems; 2018. p. 1–14. https://doi.org/10.1145/3173574.3174108
- Mathur A, Acar G, Friedman MJ, Lucherini E, Mayer J, Chetty M, et al. Dark patterns at scale: findings from a crawl of 11K shopping websites. Proc ACM Hum-Comput Interact. 2019;3(CSCW):81:1–32. https://doi.org/10.1145/3359183
- Shin D. Debiasing AI: rethinking the intersection of innovation and sustainability. New York: Routledge; 2025. https://doi.org/10.1201/9781003530244
- Khound K, Mishra V. Nudging in digital environments: a review of behavioral economics interventions and consumer decision-making. Adv Consum Res. 2025;2(4):2810–22.
- Guo Z. The application of framing effect in marketing and advertising. Adv Econ Manag Political Sci. 2023;12(1):255–60. https://doi.org/10.54254/2754-1169/12/20230632
- Sætra HS. When nudge comes to shove: liberty and nudging in the era of big data. Technol Soc. 2019;59:101130. https://doi.org/10.1016/j.techsoc.2019.04.006
- Dolan P, Hallsworth M, Halpern D, King D, Metcalfe R, Vlaev I. MINDSPACE: influencing behaviour through public policy (report). London: Institute for Government; 2010.
- Behavioural Insights Team. EAST: four simple ways to apply behavioural insights. London: Behavioural Insights Team; 2014.
- Behavioural Insights Team. EAST: a framework for behaviour change. Updated edition. London: Behavioural Insights Team; 2019.
- Oinas-Kukkonen H, Harjumaa M. Persuasive systems design: key issues, process model, and system features. Commun Assoc Inf Syst. 2009;24(1):28. https://doi.org/10.17705/1CAIS.02428
- Fogg BJ. A behavior model for persuasive design. In: Proceedings of the 4th international conference on persuasive technology; 2009. p. 1–7. https://doi.org/10.1145/1541948.1541999
- O’Keefe DJ, Jensen JD. The relative persuasiveness of gain-framed and loss-framed messages for encouraging disease-prevention behaviors: a meta-analytic review. J Health Commun. 2007;12(7):623–44. https://doi.org/10.1080/10810730701615198
- Gallagher KM, Updegraff JA. Health message framing effects on attitudes, intentions, and behavior: a meta-analytic review. Ann Behav Med. 2012;43(1):101–16. https://doi.org/10.1007/s12160-011-9308-7
- De Martino B, Kumaran D, Seymour B, Dolan RJ. Frames, biases, and rational decision-making in the human brain. Science. 2006;313(5787):684–7. https://doi.org/10.1126/science.1128356
- Del Maschio N, Crespi F, Peressotti F, Abutalebi J, Sulpizio S. Decision-making depends on language: a meta-analysis of the foreign language effect. Bilingualism: Lang Cogn. 2022;25(4):617–30. https://doi.org/10.1017/S1366728921001012
Appendix
| Appendix A: Summary of the retained core sources (n = 40). | ||||||
| Ref | Core Source (Short) | Domain | Design/Type | Sample/Data | Key Contribution | Mapping |
| 1 | Gier et al. (2023) | Persuasive comms/neuro | Empirical (neuro/consumer decision) | Empirical dataset (see original) | How message framing shapes decision processes | LCA/PMeM |
| 2 | Chater and Loewenstein (2022) | Behavioural public policy | Conceptual/theoretical | N/A | i-frame vs s-frame critique; structural context | MIM |
| 3 | Sunstein (2017) | Nudging | Conceptual + evidence synthesis | N/A | When/why nudges fail; boundary conditions | MIM/PMeM |
| 4 | Schmidt and Engelen (2020) | Ethics of nudging | Narrative review | N/A | Ethical constraints: autonomy, transparency | MIM |
| 5 | Yeung (2017) | Regulation by design | Conceptual/legal-theoretical | N/A | “Hypernudge” via big data; continuous optimisation | MIM/PMeM |
| 6 | Luguri and Strahilevitz (2021) | Dark patterns/legal analysis | Empirical + legal analysis (review of practices) | Mixed evidence base | Taxonomy/impacts of dark patterns | LCA/MIM |
| 7 | Shin (2024) (book) | Human–algorithm interaction | Scholarly monograph | N/A | Trust/bias/misinformation as structural environment | MIM |
| 8 | Shin (2025) (book) | Algorithms and trust | Scholarly monograph | N/A | Bias/algorithms/trust; governance implications | MIM |
| 9 | Tversky and Kahneman (1981) | Framing | Empirical experiments | Human participants (see original) | Canonical gain/loss framing → preference reversal | LCA/PMeM |
| 10 | Thaler and Sunstein (2009) (book) | Nudging | Scholarly monograph | N/A | Choice architecture principles; defaults etc. | MIM/PMeM |
| 11 | Gigerenzer (2015) | Critique of LP | Conceptual + critique | N/A | Challenges “libertarian paternalism” evidence | MIM |
| 12 | Gigerenzer (2018) | Behavioural econ critique | Conceptual | N/A | “Bias bias” critique; ecological rationality | MIM |
| 13 | Mirsch et al. (2017) | Digital nudging | Conference paper/conceptual + examples | N/A | Digital nudging concept and behaviour change online | PMeM/MIM |
| 14 | Caraban et al. (2019) | HCI nudging | Systematic-style review/taxonomy | Multiple HCI studies | “23 ways to nudge” taxonomy | LCA/PMeM |
| 15 | Weinmann et al. (2016) | Digital nudging | Conceptual/position paper | N/A | Definition + mechanism of UI nudges | PMeM |
| 16 | Simon (1971) | Attention/cognition | Foundational theoretical chapter | N/A | Attention scarcity in info-rich world | PMeM |
| 17 | Michel and Gandon (2024) | Attention economy | Preprint (arXiv)/conceptual-regulatory | N/A | Attention market + algorithmic emotional governance | MIM/PMeM |
| 18 | Capraro (2025) | Psycholinguistics | Theoretical integration (LENS model) | N/A | Linguistic content → emotions/norms → strategy | LCA/MIM |
| 19 | Keysar et al. (2012) | Foreign-language effect | Experiments | Bilingual participants (see original) | Reduced biases when thinking in L2 | LCA/PMeM |
| 20 | Chen (2013) | Language and econ behaviour | Quantitative/cross-language analysis | Large-scale datasets (see original) | Future tense marking ↔ saving/health behaviours | LCA |
| 21 | Andreas and Jabakhanji (2023) | Behavioural science critique | Conceptual (policy critique) | N/A | Neoliberal framing; i-frame vs s-frame extension | MIM |
| 22 | Sun et al. (2022) | Neuroeconomics | Empirical neuroimaging | Neuroimaging dataset (see original) | Neural substrates of framing effect | PMeM |
| 23 | Vives et al. (2018) | Foreign-language effect | Experiments (boundary conditions) | Bilingual participants (see original) | Limits of FLE (heuristics/outcome bias) | LCA/PMeM |
| 24 | Wang and Yip (2022) | FLE in games | Experiments (strategic games) | Participants (see original) | FLE effects in strategic behaviour | LCA/PMeM |
| 25 | Muto and Yang (2024) | UX microcopy | Conference proceedings (empirical) | UX/user study (see original) | Microcopy influence on user choice | LCA |
| 26 | Gray et al. (2018) | Dark patterns | CHI paper (qual/empirical) | Design cases + analysis (see original) | Dark pattern repertoire; UX practice | LCA/MIM |
| 27 | Mathur et al. (2019) | Dark patterns at scale | Large-scale crawl + analysis | 11K shopping sites | Prevalence and typology of dark patterns | LCA/MIM |
| 28 | Shin (2025) (Routledge) | AI governance | Scholarly monograph | N/A | Debiasing AI; sustainability/justice framing | MIM |
| 29 | Khound and Mishra (2025) | Digital nudging review | Review article | Multiple studies | Consumer nudging interventions overview | PMeM |
| 30 | Guo (2023) | Framing in marketing | Review/illustrative applied paper | N/A | Marketing applications of framing | LCA (supporting) |
| 31 | Sætra (2019) | Big data and nudging | Conceptual/ethics-policy | N/A | Nudging under big data; liberty concerns | MIM |
| 32 | Dolan et al. (2023) (MINDSPACE) | Policy framework | Report/framework | N/A | Behavioural levers for intervention design | MIM (comparative baseline) |
| 33 | BIT (2014) (EAST) | Policy framework | Practitioner framework | N/A | Easy/Attractive/Social/Timely principles | MIM (comparative baseline) |
| 34 | BIT (2019) (EAST upd.) | Policy framework | Updated practitioner framework | N/A | Updated EAST guidance | MIM (comparative baseline) |
| 35 | Oinas-Kukkonen and Harjumaa (2009) | Persuasive systems design | Conceptual model/framework | N/A | PSD features; system persuasion components | PMeM/MIM |
| 36 | Fogg (2009) | Persuasion model | Conference model paper | N/A | Behaviour = motivation × ability × trigger | PMeM |
| 37 | O’Keefe and Jensen (2007) | Health framing | Meta-analysis | Multiple studies | Gain vs loss framing effects (prevention) | LCA |
| 38 | Gallagher and Updegraff (2012) | Health framing | Meta-analysis | Multiple studies | Framing → attitudes/intentions/behaviour | LCA |
| 39 | De Martino et al. (2006) | Neuro framing | Empirical neuroimaging | Neuroimaging dataset (see original) | Neural basis of framing and “rationality” | PMeM |
| 40 | Del Maschio et al. (2022) | Foreign-language effect | Meta-analysis | Multiple studies | Meta-analytic FLE evidence | LCA/PMeM |
Appendix B
Glossary, Practitioner Checklist, and Compliance-Oriented Mapping
Glossary of Key Terms
Microcopy
Short, task-oriented textual elements embedded within digital interfaces (e.g., button labels, prompts, tooltips, error messages) that guide user action at critical decision points. Within the Linguistic Choice Architecture (LCA), microcopy functions as a linguistic trigger capable of activating behavioural mechanisms such as loss aversion, urgency, affect heuristics, or perceived ownership.
Hypernudging
A form of adaptive, data-driven choice architecture in which linguistic cues and interface features are continuously personalised and modified in real time using algorithmic systems. Hypernudging extends traditional nudging by enabling persistent behavioural steering at scale, intensifying concerns related to transparency, autonomy, accountability, and long-term user welfare. In the present framework, hypernudging operates primarily at the Interface level of the MIM model and influences the processing stages described in the PMeM.
Confirmshaming
A manipulative linguistic practice in which refusal or opt-out options are framed using guilt-inducing, socially disapproving, or self-deprecating language (e.g., “No, I don’t care about saving money”). Confirmshaming exploits social norms and affective pressure, undermining autonomous choice and increasing the risk of reactance. Within the LCA Matrix, confirmshaming represents a misaligned linguistic trigger that can shift nudges toward dark-pattern design.
Practitioner Checklist: “Nudges, Not Sludge”
The following checklist provides a compact diagnostic tool for evaluating whether linguistic and interface-level nudges support informed decision-making or drift toward manipulative design. While the same questions apply across contexts, they map onto different analytical levels of the proposed models (LCA, PMeM, MIM).
Checklist Items
Transparency
- Is the persuasive intent of the wording clear and non-deceptive?
- Are key conditions, costs, and consequences communicated explicitly rather than obscured?
Proportionality
- Is the intensity of linguistic cues (urgency, scarcity, emotional tone) proportionate to the decision’s stakes?
- Are highly affective or loss-framed cues avoided in low-risk or routine decisions?
Reversibility
- Is opting out as easy, salient, and frictionless as opting in?
- Are defaults and preselected options clearly indicated and easily changeable?
Alignment (Mind–Interface–Message)
- Do linguistic cues, interface design, and likely user goals reinforce one another rather than exploit misalignment?
- Are transparency cues (e.g., cooling-off periods, disclosures) provided where cognitive overload or emotional pressure is high?
Avoidance of Manipulation
- Are confirmshaming, false scarcity, and misleading urgency cues avoided?
- Are behavioural mechanisms activated in a way that preserves deliberation rather than suppresses it?
Fairness and Multilingual Consistency
- Is semantically equivalent, non-coercive wording maintained across language versions?
- Are foreign-language users protected from increased confusion or unintended bias amplification?
This checklist is intended as a diagnostic aid rather than a compliance-certification tool; it supports the ethical, user-centred deployment of linguistic choice architecture.
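For audit teams who wish to apply the checklist programmatically (e.g., across many interface variants), the items can be encoded as machine-readable records. The sketch below is purely illustrative: the names `ChecklistItem`, `flagged_dimensions`, and the abbreviated question texts are hypothetical conveniences, not part of the LCA/PMeM/MIM frameworks themselves.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChecklistItem:
    dimension: str            # e.g. "Transparency", "Proportionality"
    question: str             # abridged wording of the checklist question
    passed: Optional[bool] = None  # None = not yet assessed by the auditor

# Abridged encoding of the "Nudges, Not Sludge" checklist
CHECKLIST = [
    ChecklistItem("Transparency", "Is the persuasive intent of the wording clear and non-deceptive?"),
    ChecklistItem("Transparency", "Are key conditions, costs, and consequences communicated explicitly?"),
    ChecklistItem("Proportionality", "Is cue intensity proportionate to the decision's stakes?"),
    ChecklistItem("Reversibility", "Is opting out as easy, salient, and frictionless as opting in?"),
    ChecklistItem("Alignment", "Do linguistic cues, interface design, and user goals reinforce one another?"),
    ChecklistItem("Avoidance of Manipulation", "Are confirmshaming, false scarcity, and misleading urgency avoided?"),
    ChecklistItem("Fairness and Multilingual Consistency", "Is equivalent, non-coercive wording maintained across languages?"),
]

def flagged_dimensions(items):
    """Return dimensions with at least one failed item (candidate sludge)."""
    return sorted({i.dimension for i in items if i.passed is False})

# Example audit: one item passes, one fails
CHECKLIST[0].passed = True
CHECKLIST[5].passed = False
print(flagged_dimensions(CHECKLIST))  # ['Avoidance of Manipulation']
```

In keeping with the checklist's diagnostic (not certifying) intent, the sketch reports flagged dimensions for human review rather than computing a pass/fail score.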
Regulatory and Compliance-Oriented Mapping
Although this study does not provide a legal analysis of specific regulatory provisions, the proposed models offer a structured lens for supporting compliance reviews and interface audits in regulated digital environments.
LCA (Linguistic Choice Architecture)
Supports the identification of high-risk linguistic features (e.g., loss framing, scarcity phrases, confirmshaming) that may undermine transparency or fairness in consumer communication and warrant closer review during copy and content audits.
PMeM (Process Mechanism Model)
Facilitates assessment of how linguistic triggers propagate through cognitive and interface-level processes, helping auditors identify stages at which behavioural influence becomes excessive, opaque, or difficult to reverse.
MIM (Mind–Interface–Message Alignment Model)
Provides a system-level framework for evaluating whether cognitive mechanisms (Mind), interface structures (Interface), and linguistic framing (Message) are aligned in a way that supports informed, autonomous choice rather than exploitative persuasion.
Across applied contexts, the three models can inform internal reviews by highlighting risks related to opacity, disproportionate pressure, limited reversibility, and misalignment between user goals and platform incentives. In this sense, the frameworks are compatible with emerging expectations for transparency, accountability, and user protection in digital communication without being tied to any single jurisdiction or regulatory regime.
| Appendix C: Testable propositions operationalising key Linguistic Choice Architecture (LCA) cells. | ||||||||
| Proposition | Linguistic Feature | Behavioural Mechanism | Interface Expression | Predicted Effect Direction | Key Moderators | Key Mediators | Boundary Conditions | Candidate Outcome Measures |
| P1 | Loss-framed wording | Loss aversion | Countdown timers, “Pay now” default | ↑ urgency-driven choice | Time pressure, literacy | Perceived urgency | Cooling-off periods | Purchase rate, time-to-click |
| P2 | Gain-framed wording | Risk aversion | Benefit-focused CTA, static pricing | ↑ deliberate choice | Decision stakes | Perceived benefit | Low involvement | Conversion, dwell time |
| P3 | Emotional adjectives | Affect heuristic | Product microcopy | ↑ impulsive engagement | Emotional susceptibility | Arousal | Skepticism, ad fatigue | Click-through, impulse buy |
| P4 | Normative cues (“Most users…”) | Social proof | Pop-up badges, counters | ↑ compliance | Group identification | Norm salience | Reactance | Acceptance rate |
| P5 | Ownership wording (“your plan”) | Endowment effect | Preselected renewal | ↑ retention/upgrade | Subscription tenure | Perceived ownership | Disclosure salience | Renewal rate |
| P6 | Scarcity phrases | FOMO | Stock alerts, timers | ↑ urgency behaviour | Trait anxiety | Anticipated regret | Repeated exposure | Add-to-cart rate |
| P7 | Future-oriented tense | Intertemporal discounting | “Pay later” options | ↑ present bias | Financial stress | Temporal distance | Budget constraints | Deferred payment uptake |
| P8 | Neutral wording | Reduced affect | Plain UI text | ↓ bias-driven shifts | Cognitive reflection | Reduced arousal | High task load | Choice consistency |
| P9 | Foreign-language interface | Emotional distancing | Multilingual UI | ↓ framing bias | Language proficiency | Cognitive distance | Task complexity | Risk preference stability |
| P10 | Personalized wording | Authority & relevance | Algorithmic recommendations | ↑ perceived relevance | Trust in platform | Perceived fit | Privacy concerns | Recommendation acceptance |
| P11 | Confirmshaming language | Guilt aversion | Forced-choice pop-ups | Short-term ↑ / long-term ↓ trust | Ethical sensitivity | Shame/reactance | Transparency cues | Opt-out rate, trust |
| P12 | Transparent explanatory text | Autonomy support | Just-in-time disclosures | ↑ perceived autonomy | Regulatory literacy | Perceived fairness | Information overload | Trust, satisfaction |
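Because each proposition in Appendix C is a structured tuple of feature, mechanism, moderators, and mediators, the table lends itself to encoding as records that experimenters can filter when planning or pre-registering studies. The sketch below is an assumption-laden illustration (the `Proposition` record, the abbreviated field values, and `by_mechanism` are hypothetical helpers, not part of the paper's framework), showing three of the twelve propositions.

```python
from collections import namedtuple

# Each record mirrors a subset of the Appendix C columns
Proposition = namedtuple(
    "Proposition",
    ["pid", "feature", "mechanism", "direction", "moderators", "mediators"],
)

PROPS = [
    Proposition("P1", "Loss-framed wording", "Loss aversion", "up",
                ["Time pressure", "Literacy"], ["Perceived urgency"]),
    Proposition("P4", "Normative cues", "Social proof", "up",
                ["Group identification"], ["Norm salience"]),
    Proposition("P9", "Foreign-language interface", "Emotional distancing", "down",
                ["Language proficiency"], ["Cognitive distance"]),
]

def by_mechanism(props, mechanism):
    """Select proposition IDs that operationalise a given behavioural mechanism."""
    return [p.pid for p in props if p.mechanism == mechanism]

print(by_mechanism(PROPS, "Social proof"))  # ['P4']
```

A structure of this kind also makes the moderator and mediator columns directly available as candidate covariates and measured process variables when deriving stage-specific hypotheses from the PMeM.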
| Appendix D: Side-by-side construct mapping: LENS vs PSD/Fogg vs LCA/PMeM/MIM. | |||||
| Framework | Primary Level of Analysis | Core Constructs | Causal Logic (Simplified) | What it Explains Best | Distinct Contribution vs Others |
| Overlap (common ground) | Message + interface + cognition | Triggers/cues, bounded rationality, behaviour change | Cues shape behaviour under cognitive constraints | Why small design/message changes can shift choices | Shared premise: choices are context-sensitive and cue-responsive |
| LENS (Capraro) | Message-level mechanisms | Linguistic content, emotions, social norms, strategic choice | Linguistic framing → emotions/norms → strategy selection | How wording activates affect and norm compliance in decisions | Strong linguistic mechanism focus; less explicit on UI defaults/architecture |
| PSD/Fogg | Interface/system-level mechanisms | Triggers, ability, motivation; persuasive system features | Trigger + ability + motivation → behaviour | How system features and UI affordances drive action | Strong system design lens; does not systematically unpack linguistic features as choice-architecture elements |
| LCA (this paper) | Message × interface (choice-architecture layer) | Linguistic features (gain/loss, norms, ownership, scarcity), behavioural mechanisms, digital expressions (microcopy × UI patterns) | Linguistic cue → behavioural mechanism ↔ digital expression → expected outcome | Where and how microcopy interacts with UI patterns (defaults, timers, counters) | Treats language as structural choice architecture, not just “message framing” |
| PMeM (this paper) | Processual (multi-stage pathway) | Triggers, cognitive evaluation, neural/affective response, interface reinforcement, decision | Trigger → cognition → affect/neural response → interface reinforcement → decision | Why effects vary by stage; where to measure and intervene | Enables stage-specific hypotheses, audit checkpoints, and mediator measurement |
| MIM (this paper) | System-level alignment model | Mind (mechanisms/goals), Interface (architecture/incentives), Message (framing/cues); alignment vs misalignment | Alignment amplifies beneficial effects; misalignment → reactance/trust shifts | Predicting trust/autonomy/reactance under transparency cues and platform incentives | Formalises alignment/misalignment as a driver of effect direction, reversals, and downstream trust outcomes |