More and more businesses are realizing that consumers and employees want to engage with, follow, and work for brands with positive social missions and with products and services that do no harm. At the same time, we're all watching behemoth companies struggle with "enshittification," where they prioritize extracting business value over providing value to actual customers. As product and service professionals, we need a common set of tools to help align our creations and our teams toward good. What follows is a set of principles that can be used as heuristics when building, evaluating, or governing products and services. These principles focus less on the functional or usability aspects designers are used to and more on the morality, ethics, and impact those products can have on the world and on the people who use them.
If you're familiar with Jakob Nielsen's 10 general principles for interaction design or Bruce Tognazzini's First Principles of Interaction Design (common heuristics that product designers use to evaluate products and services or to guide their own work), you can apply these principles the same way. Familiarity with those isn't a prerequisite, though.
My hope is that this furthers the conversation and prompts product and service people everywhere to consider the morality and ethics of the things we release into the world, and the lasting impact and outcomes those products may have. Just as a doctor takes the Hippocratic oath, we designers have a responsibility to uphold a certain level of character and to infuse our work with the values we want to see in the world.
The system should be usable by people with varied abilities, devices, or connections.
The system should invite diversity of personal identities, ideas, perspectives, and backgrounds.
The system should recognize and accommodate users who approach it with different levels of expertise and varied backgrounds, so that they have the same opportunity to grow, contribute, and develop the system as other users.
The system should ensure all users feel equally comfortable to connect and contribute.
The system should promote a culture of inclusion that embraces everyone’s differences and involves all voices.
The system should respect users' right to be forgotten, and should disclaim, warn, and ask for consent before collecting, using, or storing a user's information.
The system should promote, and never jeopardize, the user's physical, mental, spiritual, and financial well-being.
The system should ensure and promote the psychological, emotional, mental, and physical safety of its users whenever possible, and provide the means for them to seek aid, justice, or remedy when an incident occurs.
The system should not promote, and should instead seek to identify, combat, and rectify, incidents of discrimination, stigma, bias, misinformation, fear, and false narratives caused by, facilitated by, or promoted through the system.
The system should encourage healthy habits in its users and not seek to addict them to itself.
The system should not be planned or designed with an artificially limited useful life or a purposely frail design that causes it to become obsolete after a predetermined period of time.
The system should seek to promote objective truth and rectify instances when misinformation may be caused or spread by the system or its users.
These are broad rules of thumb that you can use during the creation of a product or as regular heuristics for reviewing existing products. I recommend doing so frequently: the issues that arise from our products are often unintentional and may only surface over time.
If you have feedback, feel free to hit me up on LinkedIn.
If this interested you, I'd also recommend checking out: