Principles for Building Products That Are Good for the World

More and more businesses are realizing that consumers and employees want to engage with brands that do no harm. We’re all watching behemoth companies struggle with “enshittification”: prioritizing extraction over value. As product and service professionals, we need a common set of tools to align our work toward good. These principles function as heuristics for building, evaluating, and governing products and services, focused not on usability but on the moral, ethical, and social impact of what we build.

If you’re familiar with Jakob Nielsen’s 10 Usability Heuristics or Bruce Tognazzini’s First Principles of Interaction Design, you can use these the same way: as a design critique tool, a regular audit heuristic, or a governance standard.

My hope is that this furthers the conversation and prompts product and service people everywhere to consider the morality and ethics of what we release into the world, and the lasting impact and outcomes those products may have. Like a doctor taking the Hippocratic oath, we designers have a responsibility to uphold a certain level of character and to infuse our work with the values we want to see in the world.

Version 2.0 — March 2026 · 18 principles · Full reference document (Google Doc) · GitHub repository

Principles

1. Accessibility: A system is not finished until it can be used by people with a full range of abilities, devices, connection qualities, languages, and levels of technical familiarity.
2. Diversity: A system actively designs for, and is shaped by, users of diverse personal identities, backgrounds, languages, cultures, and perspectives; it does not merely invite them.
3. Equity: A system removes structural barriers that prevent users from different starting points from having a fair opportunity to participate in and benefit from the system.
4. Inclusion: The system’s structures, processes, defaults, and features are designed to enable all users to participate fully, not merely to gain entry.
5. Belonging: No user should be systematically made to feel unwelcome, othered, or dehumanized by the design, defaults, content, moderation, or community norms of the system.
6. Privacy: The system collects only what it needs, uses data only for consented purposes, actively protects users from downstream consequences, and treats user data as a trust obligation, not an asset.
7. Security: The system protects user data and system integrity from unauthorized access, exploitation, and breach; security is a prerequisite, not a post-launch feature.
8. Safety: The system designs to prevent and mitigate psychological, emotional, mental, and physical harm; proactively protects vulnerable populations; and provides accessible means for users to seek remedy.
9. Representation & Stigma: The system does not perpetuate, amplify, or introduce harmful representations of people; it actively examines its own language, imagery, defaults, and content for stigmatizing patterns.
10. Algorithmic Accountability: The system is transparent about how algorithmic decisions are made, enables users to understand and contest decisions that affect them, and proactively tests for disparate outcomes.
11. User Health: The system respects and supports the health of its users; it does not employ persuasive technology techniques to override user judgment or optimize engagement at the expense of wellbeing.
12. Autonomy & Agency: The system enables meaningful, informed choices; it does not use psychological manipulation to override user judgment; and it provides genuine, accessible control over data, settings, and participation.
13. Honesty & Truth: The system does not deceive users through design, language, or content; it discloses algorithmic curation and AI-generated content; and it respects users’ epistemic autonomy.
14. False Obsolescence: The system does not artificially shorten its own useful life, manufacture dependency that traps users, or design features or hardware to fail before their natural end of life.
15. Economic Justice: The system does not exploit information asymmetry, psychological vulnerability, or market power to extract disproportionate value from users; it does not engineer financial harm as a feature.
16. Environmental Sustainability: The system minimizes its ecological footprint across its full lifecycle and does not make false or misleading claims about its environmental impact.
17. Labor Ethics: The system’s ethical responsibility extends to the people who build, operate, and support it; the team behind the product has the same claim to dignity and fair conditions as the users in front of it.
18. Civic Responsibility: The system acknowledges and takes responsibility for its aggregate effects on democratic institutions, social cohesion, and public discourse; it does not corrode civic trust in pursuit of engagement.

For full definitions, intent, anti-patterns, real-world failure modes, and builder obligations — read the complete reference document.

Use It With AI Tools

If you’re working with AI, there’s no longer any excuse not to iterate on and apply ethical safeguards, healthy processes, and procedures.

If you’re using Claude Code or Cursor in your development workflow, you can install rules and skills that apply these principles automatically during feature development, code review, UI work, and spec writing. The Cursor rules fire by file type; the Claude Code skill loads the full framework as persistent project context. A sketch of what a rule can look like follows.
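
As a sketch of the general shape, not the repository’s actual contents: a Cursor rule lives under .cursor/rules/ as an .mdc file whose frontmatter controls when it fires. The description, glob, and rule text below are hypothetical examples of how these principles could be scoped to UI files:

```
---
description: Check UI work against the Good-for-the-World principles
globs: "**/*.tsx"
alwaysApply: false
---

When writing or reviewing UI components, evaluate the change against
the Good-for-the-World principles: in particular Accessibility
(keyboard support, contrast, screen-reader labels), Inclusion
(defaults that work for everyone), and Autonomy & Agency (no dark
patterns, no manipulative defaults). Flag violations rather than
silently working around them.
```

Scoping rules by glob keeps the guidance lightweight: only the principles relevant to the files being touched enter the model’s context, while the Claude Code skill trades that precision for having the whole framework always available.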

Get the rules and skills on GitHub

Usage

These are broad rules of thumb for creating products or running regular ethical audits of existing ones. Apply them in three modes: design mode (evaluating features before they ship), review mode (auditing existing products at each major release), and governance mode (establishing team norms and escalation standards). The issues that arise from our products are often unintentional; they show up over time, at scale, in the hands of users we didn’t design for. Review frequently.

If you have feedback, find me on LinkedIn.

Acknowledgements

I started building these principles in 2018. Version 1 launched in October 2020. It took six years to expand them meaningfully, informed by the work I was doing and the people I was doing it alongside.

This work was greatly shaped by my time at Benevity: co-founding our BJEDI team, our accessibility community of practice, and all of the like-minded people there. Benevity allowed me to experiment with early versions of these principles in several scenarios and directly influence our own product and design standards.

One person from my time at Acuity Insights, Rodica Ivan, greatly inspired this work. Her ethical AI framework was foundational and incredibly instructive. I have since had her speak to several teams I’ve led and plan to continue doing so.

Thank you also to Jen Reiher, Tara Scott, Skye White, Janelle S, Alan, Ed, Chance, the 2-Minute Tabletop crew, and my wife Ashley Lamantia, for being supports, inspirations, and for helping shape who I am in this journey.

Further Reading