Mad Max is an Australian film franchise following a police officer in a world on the brink of societal collapse, where the government no longer has the capacity to protect its citizens in a dystopian wasteland. The main character, Max, is wary of others, struggling to decide whether to help or go his own way. While we are not living on the brink of civilizational collapse, a bad user experience can make you feel as though you are. Applying intentional limitations to the user experience can help reduce bad behavior.
Constraints in design are often technical, resource, legal, or time-based limitations, and they can play a big role in shaping products. Beyond maximizing profits, Corporate Social Responsibility (CSR) has been an integral part of company initiatives for a few decades now: businesses adopt strategies to make a positive impact on the world and take responsibility towards the society in which they operate.
The responsibilities are often categorized as environmental, ethical, philanthropic, and economic. CSR can be summarized as the three Ps: profit, people, and the planet. Product user responsibility refers to the duties of the person who uses the product, but what about product provider responsibility?
Technology companies are addressing cyberbullying and working to better protect their users with tools, guides, and reporting mechanisms, but more is still needed.
When a business is already forging meaningful relationships with customers, is aware of the constraints around design and development, complies with the law, has strong CSR initiatives, and designs with the user in mind, are there other duties and responsibilities the product team should consider? Employee well-being is often discussed, but what about user well-being?
UX designers walk in the shoes of the personas they build for, researching users' motivations and behavior, but are they also intentionally protecting and supporting users' best interests? Often yes, but beyond accessibility, problem-solving, ease of use, and enjoyability, there are more invisible factors that can shape the experience: the duty and responsibility to design and develop for the user's well-being.
While we can’t know with 100% certainty the real motivations behind someone’s actions, the provider can still strive to design in a way that protects the user against possible harm, whether physical, mental, financial, or otherwise.
These 4 different profile images were created with Adobe Firefly 3. Image by the author.
If we intentionally apply a constraint to the user experience in favor of the user, would this be perceived as negative or positive? A limitation tends to have a negative connotation, but that is not always the case. Adding a constraint on the kind of images a user can upload as their profile picture can sound limiting, but it only feels that way if we do not explain why the constraint is there.
AI has advanced rapidly in recent years, which is great, but it also draws attention to how we build for security, prevent fraud, and design the experience around the content we interact with. This is not only about detecting illicit content or flagging certain words to protect the community, but also about determining whether something was created by AI.
However, using AI-generated content as a profile picture is neither unethical nor illegal, and neither is detecting AI-generated content and blocking its use, so why would it matter?
Implementing such a limitation may not affect the typical Spotify user listening to music, but it can make a difference on a platform like LinkedIn, where strangers often interact with each other and exchange sensitive data, for example sharing a CV in the hope of employment. Context matters, especially when the user’s data is at stake.
A re-creation of a LinkedIn post/comment dialogue. The profile images were created with Adobe Firefly 3, except the author’s profile image. None of the content is real. Image by the author.
Using AI to detect AI becomes harder as the technology evolves. Online platforms do have trust and safety measures in place, such as verified identities, and these can make users feel more confident when interacting on the platform. However, such measures are also easy to bypass.
Fraud losses in the US topped $10 billion in 2023, and one of the most commonly reported categories was imposter scams. Digital tools and platforms are making fraud even easier. Many data protection acts help users keep their data private and prevent their activity from being tracked, but what about protection around the interactions within a product?
The business has a duty and an obligation to prevent wrongdoing as much as possible. One big challenge is how and when to approach these kinds of initiatives. Should one wait until there is a lot of negative feedback, or would this be similar to designing for edge cases? Could it simply be part of aiming to create the best possible experience?
A combination of manual and automated checks can help tackle AI misuse through AI authentication methods, such as those defined by the Information Technology Industry Council (ITIC) under labeling:
- Watermarks: embedding a visible or invisible signal in text or an image, carrying information that lets the user know the content was made with AI. Steganography, for example, is a technique that hides information inside the least significant bits of a media file (a minimal sketch of this idea follows the list).
- Provenance tracking and metadata authentication: tracing the history, modifications, and quality of a dataset. Content provenance, or Content Credentials, binds provenance information to the media at creation or when alterations are made.
- Human validation of content, to verify whether the content was created by AI or not.
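As a concrete illustration of the steganography technique mentioned above, the sketch below hides a short provenance label (such as "AI-generated") in the least significant bit of each byte of a raw pixel buffer, then reads it back. This is a minimal, simplified example in Python, not a production watermarking scheme; the flat byte array standing in for an image, the 4-byte length header, and the label text are all assumptions made for illustration.

```python
# Minimal LSB steganography sketch: hide a short text label in the
# least significant bit of each byte of a raw pixel buffer.
# Assumptions: the "image" is a flat bytearray of pixel values, and the
# message is prefixed with a 32-bit length so it can be read back.

def embed_label(pixels: bytearray, label: str) -> bytearray:
    data = label.encode("utf-8")
    payload = len(data).to_bytes(4, "big") + data  # 4-byte length header
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the label")
    stego = bytearray(pixels)
    for index, bit in enumerate(bits):
        stego[index] = (stego[index] & 0xFE) | bit  # overwrite the LSB only
    return stego

def extract_label(pixels: bytearray) -> str:
    def read_bytes(start: int, count: int) -> bytes:
        out = bytearray()
        for b in range(count):
            value = 0
            for i in range(8):
                value = (value << 1) | (pixels[start + b * 8 + i] & 1)
            out.append(value)
        return bytes(out)

    length = int.from_bytes(read_bytes(0, 4), "big")
    return read_bytes(32, length).decode("utf-8")  # data bits start after the header

# Usage: embed a provenance label in stand-in pixel data and read it back.
original = bytearray(range(256)) * 4            # stand-in for raw pixel bytes
marked = embed_label(original, "AI-generated")
print(extract_label(marked))                    # -> "AI-generated"
```

In practice, robust watermarking and Content Credentials rely on signed metadata and far more resilient encodings that survive compression and editing; the sketch only shows the basic least-significant-bit idea behind hiding a signal in media.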