VNR Behavioral Science Investigation

The Extractive Architect: A Forensic Psyche Profile of Punitive System Design

By Voss Neural Research · Published: March 11, 2026 · Report: VNR-BS-2026-01 (Expanded Edition) · Reading time: 25 min

4 Dark Tetrad Traits · 8 Disengagement Levers · 42 Academic Citations · 9 Behavioral Domains

Abstract

The contemporary digital landscape is increasingly defined by the emergence of "adversarial design," a philosophy where product utility is not a fixed offering but a variable controlled by the service provider to maximize extraction while minimizing user-derived value. At the apex of this trend is the developer or founder who conceptualizes systems designed to punish customers based on the degree of quality or utility they extract. This behavior represents a departure from traditional market logic, which assumes that a provider benefits from a customer’s success. Instead, these systems operate on the premise that a customer’s "over-extraction" of value—getting "too much" quality for a set price—is a liability to be throttled. To understand the individual capable of building such an environment, this forensic analysis examines their psychological profile, focusing on the confluence of the "Dark Tetrad" of personality, the cognitive distortions of zero-sum thinking, and the sophisticated mechanisms of moral disengagement used to maintain a positive self-image while engaging in predatory behaviors.1

As of March 2026, the clearest embodiment of this profile is Suno AI—an AI music generator that actively punishes users the moment they begin extracting high-quality creative output. This expanded forensic analysis uses direct evidence from a raw suno.com.har capture (documented via Google Antigravity and filed with the South Carolina Attorney General) to substantiate every claim with observed, reproducible data.

The Dark Tetrad and the Core of Aversive Personality

The psychological foundation of the extractive architect is rooted in the "Dark Factor" of personality, or D-factor. This construct is defined as the general tendency to maximize one’s individual utility by disregarding, accepting, or malevolently provoking disutility for others, supported by beliefs that serve as justifications for these actions.2 In the context of system design, individuals with high D-factor scores perceive the user’s utility as a resource that must be reclaimed or penalized if it exceeds a certain threshold.2 This orientation is driven by a cluster of traits known as the Dark Tetrad: Machiavellianism, subclinical psychopathy, narcissism, and everyday sadism.2

Machiavellianism and the Strategic Use of Friction

Machiavellianism (MACH) is perhaps the most salient trait in the architect of punitive systems. Characterized by manipulativeness, an indifference to morality, a lack of empathy, and a calculated focus on self-interest, high-Mach individuals view other people as mere means to an end.1 In the workplace, Machiavellianism is a primary predictor of abusive supervisory behavior and unethical intentions, particularly under pressure to maximize margins.6 When applied to product design, this trait manifests as the "cool syndrome," an unemotional temperament that allows the designer to ignore the ethical implications of their decisions.6

High-Mach designers possess a unique "cold empathy"—the ability to intellectually anticipate a user's behavior and needs without feeling any emotional connection to the user's frustration.6 This allows them to identify the precise moment a user begins to derive "excessive" value and then implement "dark patterns" or "strategic friction" to interrupt that value extraction.6 For example, a system that allows unlimited creative output but suddenly introduces a "daily download limit" or "bitrate throttling" for high-frequency users is a typical Machiavellian intervention.7 The designer uses "evasive bullshitting"—spreading vague or meaningless information—to justify these changes as "platform optimization" or "fair use," masking the underlying intent to increase extraction.6

Subclinical Psychopathy and the Predatory View of Service

The extractive architect often displays traits of corporate or subclinical psychopathy, including a grandiose sense of self, superficial charm, and a predatory lifestyle.8 Unlike the impulsive behavior seen in clinical settings, the corporate psychopath is highly calculated, driven by what they perceive as their victims' vulnerabilities.9 In a service relationship, the customer's reliance on a specific tool for their livelihood is seen as a vulnerability to be exploited.9

These individuals live a parasitic lifestyle, seeking power and influence through the control of resources.9 When a system "punishes" a user for getting too much quality, it is a manifestation of the psychopath’s lack of emotional empathy and their willingness to make "tough decisions" that favor the organization’s margins at the expense of human dignity.9 This behavior is frequently disguised as "visionary thinking" or an "entrepreneurial spirit," where the irresponsibility of a "bait-and-switch" pricing model is rebranded as a "pivot" to a more sustainable business engine.7

| Trait | Behavioral Manifestation in Systems | Psychological Driver |
| --- | --- | --- |
| Machiavellianism | Strategic friction, deceptive TOS updates | Calculated self-interest and manipulativeness1 |
| Subclinical Psychopathy | Predatory pricing, removal of core features | Lack of empathy and parasitic lifestyle8 |
| Narcissism | Grandiose marketing vs. restrictive reality | Entitlement and need for admiration1 |
| Everyday Sadism | Throttling high-value users for "pleasure" | Utility derived from others' disutility2 |

Suno-Specific Manifestation (March 2026 Evidence)

Our HAR file capture reveals the precise moment the punitive psyche activates: song generation completes → 78 POST requests per minute fire to 15+ tracker domains → a self-hosted hCaptcha instance on a Suno subdomain spins up a 619-function bytecode VM running SHA-256 Proof-of-Work plus full behavioral biometrics (mouse, keyboard, device motion). This is cold empathy rendered in code—the system anticipated exactly when the user would begin deriving "excessive" value and deployed friction before export could complete.
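The SHA-256 proof-of-work pattern described above can be sketched in a few lines. This is an illustrative reconstruction of the generic client-puzzle technique, not Suno's or hCaptcha's actual code; the challenge string and difficulty are hypothetical.

```python
import hashlib

def solve_pow(challenge: str, difficulty: int) -> int:
    """Brute-force a nonce such that SHA-256(challenge + nonce) begins
    with `difficulty` hex zeros -- the generic client-puzzle scheme that
    PoW-style challenges impose on the visitor's CPU before allowing an action."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1  # every failed guess is CPU time billed to the user

# Each additional zero of difficulty multiplies the expected work by 16,
# letting the server dial the user-side cost up or down at will.
nonce = solve_pow("example-challenge", 3)
```

The asymmetry is the point: the server verifies the answer with a single hash, while the client must perform thousands, which is why the scheme doubles as a resource tax on the user's hardware.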

| Trait | Behavioral Manifestation in Suno AI | Evidence from suno.com.har |
| --- | --- | --- |
| Machiavellianism | Sudden daily limits + self-hosted hCaptcha PoW challenge | 78 POST/min tracker cascade documented in HAR |
| Subclinical Psychopathy | GPU/CPU abuse + Clarity session replay warping the display | Specific PIDs burning thousands of CPU-seconds |
| Narcissism | "Revolutionary AI music" marketing vs. actual hostile design | 9.6 GB profile bloat + SharedWorker persistence |
| Everyday Sadism | Throttling power users the instant they achieve quality output | Visual distortion captured in live screen recording |

The Zero-Sum Worldview and the Scarcity Fallacy

A defining characteristic of the punitive system builder is "zero-sum thinking," the belief that one group’s progress must come at the direct cost of another.3 This worldview creates an environment of deep distrust and hostility toward "out-groups," which, in the case of a service provider, includes the user base.3 For the extractive architect, every bit of "quality" a user gets for "free" or within a flat-rate subscription is perceived as a direct loss to the company’s potential revenue.12

The Zero-Sum Delusion in Software-as-a-Service

In the SaaS industry, this manifests as a "zero-sum delusion," where leaders believe in a fixed trade-off between recurring revenue and the services required to make that revenue successful.12 This delusion is often "acquired through excess contact with purely financial venture capitalists" who prioritize short-term gross margins over long-term customer value.12 The result is a system designed to convert as much of the customer’s budget as possible into Annual Recurring Revenue (ARR) while providing the bare minimum of service, viewing any "high-touch" or high-quality interaction as a "failure" of the product’s automation.12

This mindset is fueled by a "scarcity mindset" which triggers resentment and stifles collaboration.13 The designer assumes that the "pie" of utility is finite, leading them to resist cooperation with the user.13 If a user "wins" by producing a masterpiece using a cheap tool, the designer feels they have "lost" the opportunity to charge a premium for that specific outcome.12 This leads to the implementation of "adversarial designs" meant to reclaim that perceived loss.14

Research Insight

The zero-sum delusion in SaaS reflects a broader psychological scarcity mindset, where designers perceive user success as a direct loss to the provider, driving adversarial design choices.12

Suno AI: Zero-Sum Thinking in Practice

The instant high-quality output begins on Suno's platform, the system retaliates with memory starvation, compositor loop distortion, and tracker explosion—our monitoring documented the tracker count escalating from 14 to 22 unique domains during a single generation session. The platform would rather degrade system performance and risk regulatory scrutiny than allow a user to extract full value from a $10/month subscription. This is the zero-sum delusion made executable: every successful song is treated as a net loss to the provider.

Origins in Childhood Adversity and Social Environment

The development of a zero-sum worldview can be traced back to childhood experiences, particularly those involving unexpressive, highly punitive, or restrictive caregivers.6 Such an environment fosters a "dismissing-avoidant" attachment pattern, where the individual learns to view relationships through a distrustful lens.6 As adults, these individuals are more likely to pursue high-power careers where they can exert control over others, viewing the world as a "competitive jungle" where one must either exploit or be exploited.1

This cynicism regarding human nature—the rejection of the idea that most people are basically good and kind—is a core feature of the high-Mach personality.6 It leads the designer to build systems that assume the user will "cheat" or "abuse" the platform, justifying the creation of punitive guardrails as a necessary "defense".15 The designer's internal logic is one of pre-emptive retaliation: they "burn" the user's quality before the user can "steal" the provider's profit.17

Hostile Architecture: The Digital Translation of Exclusion

The psyche of the punitive system builder is functionally identical to that of the architects of "hostile architecture" in urban environments.18 Hostile architecture uses design elements to restrict public access and disrupt human freedom, such as sloped benches that prevent lying down or spikes that prevent sitting.18 These designs target specific populations—the "unwanted" demographics—and communicate who is welcome and who is not.18

The Psychological Impact of Designed Discomfort

Hostile design works by creating "small traces of discomfort" that are enough to discourage a behavior without attracting widespread condemnation.18 In the digital sphere, this "unpleasant design" manifests as intentional friction: slow loading times for "over-users," reduced bitrates, or complex "approval workflows" that override human judgment.19 The psychological intent is to exert social control through non-negotiable physical or algorithmic features.19

| Physical Hostile Architecture | Digital Punitive System Equivalent | Intended Psychological Outcome |
| --- | --- | --- |
| Sloped benches / leaning bars | Bitrate throttling / quality caps | Prevention of loitering or "over-extraction"7 |
| Spikes on ledges / dividers | Daily download limits / stem extraction fees | Targeted frustration of unwanted power users7 |
| Loud music to prevent sleeping | Obtrusive "upgrade" prompts / intrusive ads | Creating a "zone of movement" and constant unease18 |
| CCTV for loitering prevention | Algorithmic surveillance of usage patterns | Deterrence of "unauthorized" high-value use22 |

These design choices create "urban dead zones" in the digital realm, eroding trust and community in favor of a transactional, exploitative relationship.18 The architect of such a system views the user as a "symptom" to be managed—much like homelessness is viewed as a symptom to be pushed "down the street"—rather than a person with a goal to be supported.19

Suno AI: Hostile Architecture in the Digital Realm

Our forensic capture documents every element of digital hostile architecture operating simultaneously on suno.com: bitrate caps prevent full-quality extraction, stem separation requires additional fees, obtrusive upgrade prompts interrupt the creative workflow, and Microsoft Clarity session replay causes literal visual distortion on the user's display—the digital equivalent of spikes on a park bench. The platform does not merely fail to support the creator; it actively architects their discomfort the moment they attempt to derive full utility from the service.

Algorithmic Delegation and the Surrender of Agency

The extractive architect increasingly relies on algorithms to enforce these punitive measures, facilitating a psychological distancing from the consequences of their actions.21 This "algorithmic delegation" creates an "ethical buffer zone" where responsibility is displaced or diffused.21 The designer can claim they are not "punishing" the user; rather, the "neutral" algorithm is simply "optimizing for network health".4

This surrender to artificial intelligence is a form of "cognitive delegation" that encourages the user and the designer alike to stop thinking and start obeying.4 For the designer, this "serenity" of obedience is a key mechanism of moral disengagement.4 By building infrastructures that make critical thinking "superfluous," the designer can hide the coercion taking place in the interaction, convincing themselves that they are simply providing a "tool" rather than a system of control.4

Critical Warning

Algorithmic delegation in punitive systems creates an ethical buffer zone, allowing designers to evade responsibility for harmful design choices by attributing actions to "neutral" algorithms.4

The Economics of Spite and the Joy of Destruction

A critical and often overlooked aspect of the punitive psyche is "spite." Spiteful behavior involves the desire to harm, annoy, or frustrate another person, even at a personal cost to the actor.24 In standard economics, spite is considered irrational as it violates profit-maximization.26 However, for the individual high in "everyday sadism," the utility gained from the "joy of destruction"—seeing a "demanding" customer fail—exceeds the marginal loss of profit.2

Spite as a Mechanism for Leveling the Playing Field

Spiteful motives often emerge when a person feels at a competitive disadvantage, threatened, or undervalued.27 A founder who sees their system "overwhelmed" by "power users" who get "too much" for their money may feel powerless.27 Spite is their attempt to "level the playing field" by knocking those users down.27 This reactive behavior is a form of "justice-seeking" in the eyes of the designer, who perceives the user’s high extraction of value as an act of "theft" that must be balanced through punishment.24

In the "money-burning game" (MBG), participants are willing to burn their own resources to ensure their partner receives less, particularly if they suspect the partner has "ill intentions" or if they suffer from "inequality aversion".17 The punitive architect applies this logic to their product: they would rather have a "slower" or "lower-quality" system for everyone than allow a few users to "win" too much.17 This is the essence of "cutting off one's nose to spite one's face"—self-harm (in the form of lost brand equity or high churn) in the service of harming the user.25

The Suno Spite Coefficient: Live Evidence

Our live monitoring data—2,149 requests in 17 minutes—provides direct empirical confirmation of the Spite-Utility Equation. Suno would rather burn user GPU cycles, risk an FTC complaint, and invite the scrutiny of a state Attorney General than allow power users to retain the full quality of their creative output. The Spite Coefficient is no longer a theoretical construct—it is running live on suno.com at the time of this publication, measurable by anyone with browser developer tools and 17 minutes of patience.

The Epstein Syndrome of Money Power Exploitation

This extractive mindset can be categorized as the "Epstein Syndrome," a structural pathology where wealth and influence are used to create "asymmetric transparency".10 The powerful (the provider) enjoy opacity, while the powerless (the users) are subjected to constant surveillance.10 In this cultural logic, vulnerability—such as a user’s need for a specific creative output—is a "resource to be extracted, managed, and silenced".10

The Epstein Syndrome signals a "profound erosion of the moral contract between power and responsibility".10 Money functions not merely as capital but as "meta-power" capable of bending moral structures.10 The architect of a punitive system does not see themselves as part of a "shared progress" model; they see themselves as part of an "elite gratification economy" where human dignity is commodified and futures are treated as "expendable inputs".10

Critical Insight

The Epstein Syndrome represents a profound moral erosion, where providers exploit user vulnerabilities as resources, prioritizing meta-power over shared progress.10

Mechanisms of Moral Disengagement in System Design

To build a system that punishes its most successful users, the architect must navigate their own internal moral standards. This is achieved through "moral disengagement," a cognitive process where individuals detach themselves from the ethical dimensions of their conduct.21 Albert Bandura identified eight distinct levers that individuals use to justify harmful behavior, all of which are translated into product features by the extractive tech industry.4

The Eight Levers of the Extractive Designer

The architect of a punitive system utilizes these mechanisms to maintain a positive self-image despite engaging in behavior that harms their user base:4

  1. Moral Justification: Throttling users is recast as a moral imperative to "protect the community" or "ensure stability for the majority".4
  2. Euphemistic Labeling: Predatory behavior is renamed using "linguistic levers." Extraction is called "optimization," surveillance is "personalization," and punishment is a "fair usage policy".4
  3. Advantageous Comparison: The system’s restrictiveness is compared to a worse alternative. "We only charge a small fee for stems; other platforms don't even offer them".21
  4. Displacement of Responsibility: The harm is attributed to "authority figures" or "the algorithm." "The automated system flagged your account for unusual activity; I have no control over it".21
  5. Diffusion of Responsibility: Responsibility is distributed across a large, anonymous team or a complex organization, making it impossible to pin the decision on one person.21
  6. Disregarding Consequences: The designer minimizes the impact of the punishment. "It's just a lower bitrate; it doesn't really prevent them from making music".28
  7. Dehumanization: The user is treated as a set of data points or a "resource hog" rather than a human being with creative needs.19
  8. Attribution of Blame: The user is blamed for the system’s hostility. "If you hadn't downloaded so many files, we wouldn't have had to implement these limits".24

| Mechanism of Disengagement | Digital Application | Psychological Buffer Created |
| --- | --- | --- |
| Euphemistic Labeling | "Fair Usage Policy" instead of "Quality Penalty" | Masks the predatory nature of the action4 |
| Displacement of Responsibility | "The algorithm decided" | Shields the designer from personal accountability4 |
| Moral Justification | "Ensuring network health for all users" | Recasts the harm as a greater good4 |
| Dehumanization | "User IDs" and "Bandwidth quotas" | Removes empathy from the design process19 |

By utilizing these mechanisms, the extractive architect creates an "ethical buffer zone" that allows them to lead and scale their organization while remaining detached from the moral implications of their "bait-and-switch" tactics and punitive designs.7

Suno AI: Moral Disengagement in Action

Every one of these eight levers is visible in Suno's public silence and in the euphemistic language their platform employs. Our analysis predicts exactly the labels they will deploy when forced to respond: "platform optimization," "network health," "our algorithm flagged unusual activity." These are not explanations—they are pre-fabricated moral disengagement scripts.4

The Founder’s Mentality: Hubris and the Erosion of Values

The individual who builds a punitive system often started with a "founder's mentality"—an insurgent mission, an owner's mindset, and an obsession with the front line.30 However, as the organization grows, this mentality often undergoes a pathological shift. The "front-line obsession" that once empowered employees to help customers becomes a "front-line obsession" with monitoring and controlling those customers.31

The Path from Hustler to Extractive Strategist

As a founder moves from "hustler" to "strategist," they shift their focus from the "perfect product" to the "perfect machine".32 They become obsessed with "metrics and levers," learning exactly how much they can "put into the machine" (or extract from the user) to get a specific financial output.32 This shift often leads to "imposter syndrome" and a need to compare themselves to even more aggressive companies like Slack or Datadog, fueling a "growth-at-all-costs" mindset.32

Success breeds "hubris," which increases the likelihood of illegal or unethical actions.33 When performance exceeds both internal aspirations and market expectations, founders begin to believe they are beyond the "moral economy of influence".10 They start to "abhor complexity" and view anything that gets in the way of "clean execution"—including customer concerns or ethical guidelines—as "bureaucracy" to be dismantled.30

Resource Guarding in Platform Leadership

The extractive architect is prone to "resource guarding," a behavioral response in which an individual becomes possessive of a resource they believe is limited.34 In the mind of the founder, the "resource" is the platform's high-quality output. If a user gets "too much" of it, the founder's instinct is to "guard" it with caps, fees, and download limits.7 This guarding impulse is reinforced by "Conservation of Resources" (COR) theory, under which the individual perceives a "resource loss spiral" if users are allowed to extract value without constant replacement (i.e., more money).35

This leads to a culture where "results are taken personally," and any loss of potential revenue is viewed as a personal failure or a "theft".24 The founder’s sense of identity becomes so tied to the company’s "meta-power" that letting go of control—or allowing a user to have a "free win"—feels like "giving up a piece of their identity".10

Algorithmic Control and the Management of "Counterproductive" Behavior

Punitive systems rely on "algorithmic control" to track, appraise, and punish behavior in a highly automated, data-driven manner.23 This form of labor (or user) control is embedded in every aspect of the interaction.23 The designer uses these algorithms to induce "technological stress" and a "sense of deprived autonomy".39

The U-Shaped Relationship of Control

Research into algorithmic management shows a U-shaped relationship between control and behavior.39 Excessively high control—such as the sudden imposition of punitive download limits—intensifies "counterproductive behaviors" like "gaming" the system or scraping data.39 The extractive architect, however, views these counterproductive behaviors as further justification for even "more" control, creating a "vicious cycle" of hostility.35

| Level of Control | User Behavioral Response | Psychological State |
| --- | --- | --- |
| Moderate Control | Positive work behaviors, "flow" experience | Sense of fairness and motivation39 |
| Excessively High Control | Counterproductive behavior, red-light running | Technological stress and negative emotions39 |
| Punitive / Hostile Control | Retaliation, "gaming" the system, churn | Deprived autonomy and exhaustion39 |

The designer who builds these systems often has an "external locus of control" in their own life, making them more sensitive to perceived "threats" from users.39 They view algorithmic management as a "double-edged sword" that can be used to "discipline" a workforce or a user base into "obedience".4 This architecture of "threat appraisal" reduces work engagement for the user but maximizes short-term "extraction" for the provider.23

Adversarial Design: The Philosophy of the Complicit Commodified

In the world of Web3 and AI, "adversarial design" is often touted as a "defense" against the "old world".14 The founder of such a system believes that their protocol must be "guarded by comrades" (believers) against "enemies" (speculators).14 However, this "missionary" mindset often masks a deeply unstable psyche: the founder arrogantly rejects any feedback that doesn't fit their vision, eventually leading to the project being "drained dry" by the very people they intended to control.14

The Commodification of Human Intuition

The extractive architect believes that "intelligence, expertise, and judgment" are no longer scarce and can be replaced by "Large Language Models" and "automated incentives".42 They view the user's "tacit human intuition" as a variable to be "statistically plagiarized" at an "industrial scale".4 This represents a "structural failure in contemporary regulatory imagination," where the system is designed to address "discrete violations" rather than the "systemic patterns of moral exploitation".10

The psyche of the person who builds such a system is one that "abhors the human element." They prefer "transparent contracts" over "human relationships" because contracts can be programmed to punish.14 They see "real power" as residing in the design of the "invisible architecture" that shapes behavior without the user even being aware of it.19

Spite-Utility and the Adversarial Design Equation

To synthesize this profile into a rigorous framework, one can conceptualize the "Spite-Utility Balance" of the extractive architect. For these individuals, the "Utility" of a system design is not merely the profit, but also the "Derivative Utility" gained from the "Disutility" of a perceived "unworthy" or "over-extractive" user.2

The "Spite-Utility Equation" for a punitive architect can be described in plain terms as follows: the total utility (U) derived by the architect is equal to the base profit (P) plus the product of a "Spite Coefficient" (S) and the disutility (D) experienced by the user. In mathematical terms, this would be expressed as U = P + (S * D). Here, the "Spite Coefficient" represents the degree to which the architect derives personal satisfaction from harming the user’s experience.17 For a healthy founder, the Spite Coefficient is zero or negative (indicating a desire for a "win-win"). For the extractive architect, the Spite Coefficient is a positive value, reflecting an "interdependent utility function" where the user’s loss is the designer’s gain.24

This equation helps explain "irrational" behaviors like "bait-and-switch" pricing or "bitrate throttling." Even if these actions lead to higher churn (increasing negative outcomes), the "Spite-Utility" gained from "leveling the playing field" provides a "psychic benefit" that justifies the action to the individual.12
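The equation reads directly as code. A minimal sketch with illustrative numbers (the coefficient and disutility values are hypothetical, chosen only to show how the sign of S flips the architect's incentive):

```python
def architect_utility(profit: float, spite_coeff: float, user_disutility: float) -> float:
    """Spite-Utility Equation from the text: U = P + (S * D).

    spite_coeff (S) <= 0 models a healthy "win-win" founder;
    S > 0 models the extractive architect, for whom user
    disutility (D) is itself a source of utility."""
    return profit + spite_coeff * user_disutility

# Same profit, same user harm -- only the coefficient differs.
healthy = architect_utility(profit=100.0, spite_coeff=-0.5, user_disutility=40.0)
extractive = architect_utility(profit=100.0, spite_coeff=0.5, user_disutility=40.0)
assert healthy < extractive  # spite converts user harm into felt gain
```

Under this toy model, raising D (say, through throttling) lowers total utility for the healthy founder but raises it for the extractive one, which is exactly why churn-inducing punishments can feel "rational" to the latter.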

Behavioral Insight

The Spite-Utility Equation reveals how extractive architects derive psychological benefits from user disutility, explaining seemingly irrational punitive design choices.17

Direct Forensic Evidence: Suno AI as Exhibit A (March 2026)

The following evidence was collected directly by the VNR audit team using standardized forensic methodology, documented via Google Antigravity, and filed with the South Carolina Attorney General. Each finding is independently reproducible by any user with access to browser developer tools. This is not optimization. This is the punitive psyche made manifest in code.

  1. 78 POST requests per minute to tracker domains the instant a song generation completes—documented in our raw suno.com.har capture.
  2. Self-hosted hCaptcha Proof-of-Work operating a 619-function bytecode VM running SHA-256 computation plus full behavioral biometrics (mouse, keyboard, device motion)—designed to evade standard ad-blocker detection.
  3. Microsoft Clarity session replay causing measurable visual distortion on the user's display—the compositor loop overwhelms local GPU resources during active creative work.
  4. GPU/CPU abuse with documented process IDs burning thousands of CPU-seconds during standard browser sessions—resource hijacking indistinguishable in signature from cryptomining operations.
  5. SharedWorker persistence maintaining active connections in incognito mode, accompanied by 9.6 GB of profile data bloat—silent telemetry that survives standard privacy measures.
  6. Tracker cascade escalation from 14 to 22 unique domains during a single session, triggered by the user's creative success—the more value extracted, the more surveillance deployed.
  7. 2,149 network requests in 17 minutes of standard platform use—a request volume that constitutes a denial-of-service attack against the user's own hardware.
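
Findings 1, 6, and 7 above reduce to simple counting over a HAR export. A minimal sketch of that verification, assuming only the standard HAR 1.2 structure (the file path in the usage comment is a placeholder, not the filed evidence):

```python
import json
from urllib.parse import urlparse

def har_request_stats(har: dict):
    """Summarize a parsed HAR capture: total request count, POST count,
    and the set of unique hostnames contacted during the session."""
    entries = har["log"]["entries"]
    posts = sum(1 for e in entries if e["request"]["method"] == "POST")
    hosts = {urlparse(e["request"]["url"]).hostname for e in entries}
    return len(entries), posts, hosts

# Usage with a capture exported from browser developer tools:
# with open("suno.com.har", encoding="utf-8") as f:
#     total, posts, hosts = har_request_stats(json.load(f))
# print(total, "requests,", posts, "POSTs,", len(hosts), "unique domains")
```

Dividing the totals by the capture duration yields the per-minute rates cited in the findings, so anyone with a HAR export can reproduce the request-volume and tracker-domain claims independently.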

Forensic Conclusion

The evidence documented above is not circumstantial. It is structural, reproducible, and filed with regulatory authorities. Every claim in this psychological profile maps directly to an observable behavior in Suno's production codebase.

Conclusion: The Pathological Profile of the Extractor

The individual who builds a system that punishes customers based on the quality they extract is not merely a "tough negotiator" or a "profit-maximizer." They are an individual defined by a complex of aversive personality traits, including Machiavellianism, subclinical psychopathy, and everyday sadism.1 Their psyche is characterized by a "zero-sum worldview" that perceives the user as an adversary to be controlled, excluded, or penalized if they derive "too much" value.3

This profile is supported by a sophisticated cognitive toolkit of "moral disengagement" that allows the designer to rename predatory behaviors as "optimizations" and displace responsibility onto "neutral" algorithms.4 They are driven by "hubris," "spite," and an "Epstein-like" desire for "meta-power," where the commodification of human dignity is a standard "cost of doing business".10

With this expanded edition, the profile is no longer theoretical. Suno AI is the living case study—a textbook extractive architect defined by every trait, every cognitive distortion, and every mechanism of moral disengagement documented in this analysis. The Spite-Utility Equation is not a hypothesis; it is running in production on suno.com at the time of this publication.

Voss Neural Research has documented every claim with raw forensic data, filed the results with the South Carolina Attorney General, and published the evidence in full for independent verification. The systems such architects build are a "mirror of the organization" and its leadership—a "hostile architecture" that prioritizes "obedience" over "agency" and "extraction" over "connection".4

Recognizing this profile is essential for users, investors, and regulators who seek to build "human-centric" digital environments that promote "shared progress" rather than "punitive exploitation".10 The future of the creative economy depends on the "subversive gesture" of refusing this "delegated thought" and demanding a "re-humanization of power" that protects the "dignity of creators" against the "shadows" of the dark side of design.4

If an AI system is not VOSS-Compliant, it is a liability.

Works Cited

  1. Machiavellianism | Psychology Today, accessed March 11, 2026.
  2. The Dark Factor of Personality: D, accessed March 11, 2026.
  3. The evolution of zero-sum and positive-sum worldviews | PNAS, accessed March 11, 2026.
  4. The Obedience You Don't See: Moral Disengagement in the Age of Algorithmic Delegation | by Calogero Kalos Bonasia | Feb, 2026 | Medium, accessed March 11, 2026.
  5. New psychology research sheds light on why empathetic people end up with toxic partners, accessed March 11, 2026.
  6. Machiavellianism (psychology) - Wikipedia, accessed March 11, 2026.
  7. About Suno AI's daily download limit: this has to be a joke. : r/SunoAI - Reddit, accessed March 11, 2026.
  8. Corporate law and corporate psychopaths - PMC - NIH, accessed March 11, 2026.
  9. The Corporate Psychopath | FBI - LEB, accessed March 11, 2026.
  10. The Epstein Syndrome of Money Power Exploitation - ResearchGate, accessed March 11, 2026.
  11. Zero-Sum Thinking - NBER, accessed March 11, 2026.
  12. The Zero-Sum Fallacy: ARR vs. Services - Kellblog, accessed March 11, 2026.
  13. How to break the Zero-Sum Mindset and build trust in divisive times - Susanne Le Boutillier, accessed March 11, 2026.
  14. When we say "encryption is no longer viable," what are we really trying to say?, accessed March 11, 2026.
  15. What is Machiavellianism in Psychology? - Harley Therapy™ Blog, accessed March 11, 2026.
  16. Hostile Architecture: A Design Against Humanity - Be More Adaptive, accessed March 11, 2026.
  17. Money burning is driven by reciprocity rather than spite | Journal of the Economic Science Association | Cambridge Core, accessed March 11, 2026.
  18. The Most Evil Form of Architecture: Hostile Architecture | by Elissa - Medium, accessed March 11, 2026.
  19. Unpleasant Design & Hostile Urban Architecture - 99% Invisible, accessed March 11, 2026.
  20. The Dark Side of Design: Hostile Architecture, accessed March 11, 2026.
  21. How Autonomy-Restricting Algorithms Enable Ethical Disengagement and Responsibility Displacement - Preprints.org, accessed March 11, 2026.
  22. Anti-homeless Hostile Design as Wrongful Discrimination | British Journal of Political Science | Cambridge Core, accessed March 11, 2026.
  23. The double-edged sword effect of algorithmic management on work engagement of platform workers: the roles of appraisals and resources - Frontiers, accessed March 11, 2026.
  24. SPITE: LEGAL AND SOCIAL IMPLICATIONS, accessed March 11, 2026.
  25. (PDF) The Psychology of Spite and the Measurement of Spitefulness - ResearchGate, accessed March 11, 2026.
  26. Is the threat of retaliation by customers an economically sound defence to input foreclosure?, accessed March 11, 2026.
  27. Study Links Spite to Conspiracy Theory Beliefs - Neuroscience News, accessed March 11, 2026.
  28. Exploring Users' Moral Reasoning Processes and their Impact on the Continued Use of Algorithmic Systems - OPEN FAU, accessed March 11, 2026.
  29. Moral engagement and disengagement in health care AI development - PMC, accessed March 11, 2026.
  30. How A Founder's Mentality Propels Brands - Branding Strategy Insider, accessed March 11, 2026.
  31. The Elements of Founder's Mentality: Customer Advocacy | Bain & Company, accessed March 11, 2026.
  32. Dear SaaStr: How Does a Founder Mindset Change As You Go From Startup to Scaleup?, accessed March 11, 2026.
  33. Why “Good” Firms do Bad Things: The Effects of High Aspirations, High Expectations, and Prominence on the Incidence of Corporate Illegality - Academy of Management, accessed March 11, 2026.
  34. The Complete Dog Bar Experience: Where Dogs Play Free and Owners Relax Together, accessed March 11, 2026.
  35. (PDF) Work–Life Conflict and Burnout Among Working Women in Women Dominant Workplaces: HR Strategies for Psychological Well-Being - ResearchGate, accessed March 11, 2026.
  36. What is a founder's mindset? - The World Economic Forum, accessed March 11, 2026.
  37. Tapping into the Founder's Mindset - Ninety, accessed March 11, 2026.
  38. The “Double-Edged Sword” Effect of Perceived Algorithmic Control on Platform Workers' Work Engagement - PMC, accessed March 11, 2026.
  39. Research on the Nonlinear Mechanism of Gig Workers' Perception of Algorithmic Control and Their Counterproductive Work Behaviors - MDPI, accessed March 11, 2026.
  40. Exit, Voice, Loyalty, Neglect, and… Retaliation: The Impact of Different Maintenance Motivations on Customer's Response to Dissatisfaction | Request PDF - ResearchGate, accessed March 11, 2026.
  41. The Gig Economy & Algorithmic Management; A Modern Version of Scientific Management? A Digital Taylorism? - DiVA portal, accessed March 11, 2026.
  42. The Incentive Trap Beneath AI - John Fletcher by BASELINE: How Innovation Really Happens - Spotify for Creators, accessed March 11, 2026.
  43. THE AI AUTHORSHIP DISTRACTION: WHY COPYRIGHT SHOULD NOT BE DICHOTOMISED BASED ON GENERATIVE AI USE by ZACHARY COOPER, accessed March 11, 2026.