Reflections on the role of algorithms in creative work.
Illustration by Kristina Gushcheva-Keippilä
In “Filterworld”, Kyle Chayka explores the way algorithms “flatten” culture. One specific culprit, according to Chayka, is the data-driven nature of many social media feeds. He suggests that algorithm-based curation leads us to consume content that is less diverse and more generic. It also shapes the content we create, whether we fully acknowledge that or not.
Chayka argues that culture has become increasingly homogeneous. In a loop of content consumption and creation, algorithms define engagement and trends, dictating what and how we create. This process packages culture into a mathematically compiled feed, presenting us with content it expects us to engage with — regardless of whether we actually find joy in it.
Of course, what is considered to be “good” content is highly subjective — it is a matter of taste. Chayka acknowledges that too, and merely points out how our taste might be shaped by such data-driven equations. How much has Spotify impacted your listening habits? How do you choose what to watch next? What do you base your design decisions on?
A dim light. I draft a sentence — no, this word won’t do — erase it. I pause, look at the steady flicker of the text caret. Another sentence — this one’s better. On second thought, maybe not.
I select the paragraph, and, alongside all the text formatting tools, the software offers to “improve my writing”. Yeah, why not. Improve it.
Another flicker. How am I feeling now?
There’s been a clear change in the way we talk about AI. I think a lot about “Dear Sydney”, a Google ad for Gemini AI broadcast during the 2024 Olympic season. The one-minute ad is narrated by a father who proudly shares his daughter’s aspirations and passions — in particular, her admiration for the US athlete Sydney McLaughlin-Levrone.
His daughter wants to show Sydney some love, but the message has to be “just right”, the father says. He continues: “So Gemini, help my daughter write a letter telling Sydney how inspiring she is”…
The ad suggests that Gemini AI is not a tool for fixing your typos, or for improving grammar. No. It derives your aspirations, emotions, and desires from a prompt. It calculates. Are you feeling excited, inspired? Or maybe you’re nervous, anxious? Sorrowful, calm? You don’t need to think about that anymore. Let Gemini AI handle that.
Creating something means exposing oneself to great levels of vulnerability. To think about the body of work, to find the pieces that match. Sincerity can be scary. It is hard to get things right, but even harder to admit that any creative process is fundamentally about directly facing the possibility of not getting things right, failing, and learning from it.
When I think of product design, I think about a discipline that explores how and why people might interact with the product in question. I think about the goals they’re trying to achieve. I think about the circumstances of potential customers, and whether the product can accommodate them.
There can be a sense of detachment when it comes to addressing the emotional component of developing and interacting with products. Emotions and feelings can be hard to interpret, and even harder (read: potentially impossible) to translate into universal solutions.
A few questions that I’ve been pondering:
How do we create work that is authentic and ethical in a continuously homogenised landscape?

What can we, as designers, do to make our work connect with people in a more meaningful way?
Back in June of 2024, Figma pulled its “Make Designs” feature due to the uncanny results that the tool produced. By September 2024, the feature was repackaged as “First Draft” — a noticeable attempt to change the narrative surrounding the purpose of the tool.
“There’s a lot of tedium along the journey of bringing great designs to life”, says Noah Levin in Figma’s blog post announcing the feature. Levin continues: “We see First Draft as just one more way to explore the option spaces and help bring the ideas in your head to life”.
Starting from scratch is intimidating. Looking at a blank page, a brightly lit empty screen, a liminal space. When I’m stuck, I do need help.
I press “Improve writing”. The software calculates, and then presents me with a result. Yes, maybe this paragraph makes more sense now.
However, I can’t help but feel like I’m giving up control and authenticity during this exchange.
Is this what my writing actually sounds like? What learning opportunities or chances to connect with my peers am I missing? Am I prioritising marginal productivity gains over the integrity of my work? By feeding the tool another set of data, am I willingly gentrifying my craft?
Is this how our product experiences become increasingly homogeneous?
There’s nothing tedious about finding joy in overcoming formidable challenges, in applying your knowledge and skill to craft something, and, yes, struggling along the way. Frankly, I find this to be the most rewarding and meaningful aspect of being a designer and a creator. For me, this is an integral part of learning and evolving as a professional, as an individual.
A few months ago, I shared my thoughts on Jakob Nielsen’s perspective regarding accessibility and AI. It was gratifying to see that the piece resonated with other peers. However, some responses were more critical.
The criticism specifically targeted the perceived rigidity of my stance on AI-powered personalisation and how, in my opinion, it might negatively impact the shared experiences that accessibility aims to provide at its core.
Shouldn’t designers embrace change and adapt? Isn’t AI the future? Am I too rigid? Perhaps. Yet, I don’t find my scepticism to be misplaced. In a reality where representatives of major AI companies are unsure about the training algorithms that their technology uses, a certain lack of confidence is in order.
I want to be better at my work. At times, I wish there was a shortcut. But these feelings prompt many questions: why? Why do I feel the pressure to deliver quickly, or to get better at doing something instantly? Why am I experiencing FOMO each time the industry unanimously claims that I’m missing out on an abstract — but definitely bright! — future of what my work could be instead?
My train of thought is often chaotic. As I’m writing this piece, I endlessly doubt my choice of words. The fear of being unclear or misinterpreted greatly informs each decision I make. I know that I tend to over-explain.
I experience similar feelings at work, too. I wonder if I’m proposing the right solutions to the right problems — even when I can clearly justify my decisions. I question whether my approach to work is reasonable, or even whether my values prevent me from seeing the bigger picture. Remember, the future is bright.
You might find yourself in a position where you’re asked not to take things so seriously. If you’re working in tech, the relentless push for increased productivity is most likely inescapable. Ultimately, as things stand, the AI discourse is just another symptom of this push, disguised as a promise to eliminate redundancy in your craft — only to reveal that the supposed redundancy is the craft itself.
Kristina (she/they) is a design leader based in Finland. Kristina writes about inclusive design, being autistic at work, and thinking in systems.
Acknowledgements and references
Kyle Chayka: Filterworld
https://www.penguinrandomhouse.com/books/695902/filterworld-by-kyle-chayka/

Google: Google + Team USA — Dear Sydney
https://youtu.be/NgtHJKn0Mck

Jay Peters, The Verge: Figma pulls AI tool after criticism that it ripped off Apple’s design
https://www.theverge.com/2024/7/2/24190823/figma-ai-tool-apple-weather-app-copy

Jay Peters, The Verge: Figma’s AI-powered app generator is back after it was pulled for copying Apple
https://www.theverge.com/2024/9/24/24253252/figma-ai-make-designs-first-draft-app-ui-generator

Noah Levin, Figma: Building a better First Draft for designers
https://www.figma.com/blog/figma-ai-first-draft/

Grace Eliza Goodwin, Business Insider: OpenAI’s CTO said she wasn’t sure if Sora was trained on YouTube videos. YouTube’s CEO says that would be a problem.
https://www.businessinsider.com/could-openai-be-violating-youtubes-terms-of-service-2024-4

Goldman Sachs: Gen AI: Too much spend, too little benefit?
https://www.goldmansachs.com/images/migrated/insights/pages/gs-research/gen-ai–too-much-spend%2C-too-little-benefit-/TOM_AI%202.0_ForRedaction.pdf?ref=wheresyoured.at

Edward Zitron: Pop Culture
https://www.wheresyoured.at/pop-culture/