Giving Users A Voice Through Virtual Personas
https://smashingmagazine.com/2025/12/giving-users-voice-virtual-personas/
Tue, 23 Dec 2025 10:00:00 GMT

In my previous article, I explored how AI can help us create functional personas more efficiently. We looked at building personas that focus on what users are trying to accomplish rather than demographic profiles that look good on posters but rarely change design decisions.

But creating personas is only half the battle. The bigger challenge is getting those insights into the hands of people who need them, at the moment they need them.

Every day, people across your organization make decisions that affect user experience. Product teams decide which features to prioritize. Marketing teams craft campaigns. Finance teams design invoicing processes. Customer support teams write response templates. All of these decisions shape how users experience your product or service.

And most of them happen without any input from actual users.

The Problem With How We Share User Research

You do the research. You create the personas. You write the reports. You give the presentations. You even make fancy infographics. And then what happens?

The research sits in a shared drive somewhere, slowly gathering digital dust. The personas get referenced in kickoff meetings and then forgotten. The reports get skimmed once and never opened again.

When a product manager is deciding whether to add a new feature, they probably do not dig through last year’s research repository. When the finance team is redesigning the invoice email, they almost certainly do not consult the user personas. They make their best guess and move on.

This is not a criticism of those teams. They are busy. They have deadlines. And honestly, even if they wanted to consult the research, they probably would not know where to find it or how to interpret it for their specific question.

The knowledge stays locked inside the heads of the UX team, who cannot possibly be present for every decision being made across the organization.

What If Users Could Actually Speak?
What if, instead of creating static documents that people need to find and interpret, we could give stakeholders a way to consult all of your user personas at once?

Imagine a marketing manager working on a new campaign. Instead of trying to remember what the personas said about messaging preferences, they could simply ask: “I’m thinking about leading with a discount offer in this email. What would our users think?”

And the AI, drawing on all your research data and personas, could respond with a consolidated view: how each persona would likely react, where they agree, where they differ, and a set of recommendations based on their collective perspectives. One question, synthesized insight across your entire user base.

This is not science fiction. With AI, we can build exactly this kind of system. We can take all of that scattered research (the surveys, the interviews, the support tickets, the analytics, the personas themselves) and turn it into an interactive resource that anyone can query for multi-perspective feedback.

Building the User Research Repository

The foundation of this approach is a centralized repository of everything you know about your users. Think of it as a single source of truth that AI can access and draw from.

If you have been doing user research for any length of time, you probably have more data than you realize. It is just scattered across different tools and formats:

  • Survey results sitting in your survey platform,
  • Interview transcripts in Google Docs,
  • Customer support tickets in your helpdesk system,
  • Analytics data in various dashboards,
  • Social media mentions and reviews,
  • Old personas from previous projects,
  • Usability test recordings and notes.

The first step is gathering all of this into one place. It does not need to be perfectly organized. AI is remarkably good at making sense of messy inputs.

If you are starting from scratch and do not have much existing research, you can use AI deep research tools to establish a baseline.

These tools can scan the web for discussions about your product category, competitor reviews, and common questions people ask. This gives you something to work with while you build out your primary research.

Creating Interactive Personas

Once you have your repository, the next step is creating personas that the AI can consult on behalf of stakeholders. This builds directly on the functional persona approach I outlined in my previous article, with one key difference: these personas become lenses through which the AI analyzes questions, not just reference documents.

The process works like this:

  1. Feed your research repository to an AI tool.
  2. Ask it to identify distinct user segments based on goals, tasks, and friction points.
  3. Have it generate detailed personas for each segment.
  4. Configure the AI to consult all personas when stakeholders ask questions, providing consolidated feedback.

Here is where this approach diverges significantly from traditional personas. Because the AI is the primary consumer of these persona documents, they do not need to be scannable or fit on a single page. Traditional personas are constrained by human readability: you have to distill everything down to bullet points and key quotes that someone can absorb at a glance. But AI has no such limitation.

This means your personas can be considerably more detailed. You can include lengthy behavioral observations, contradictory data points, and nuanced context that would never survive the editing process for a traditional persona poster. The AI can hold all of this complexity and draw on it when answering questions.

You can also create different lenses or perspectives within each persona, tailored to specific business functions. Your “Weekend Warrior” persona might have a marketing lens (messaging preferences, channel habits, campaign responses), a product lens (feature priorities, usability patterns, upgrade triggers), and a support lens (common questions, frustration points, resolution preferences). When a marketing manager asks a question, the AI draws on the marketing-relevant information. When a product manager asks, it pulls from the product lens. Same persona, different depth depending on who is asking.

The personas should still include all the functional elements we discussed before: goals and tasks, questions and objections, pain points, touchpoints, and service gaps. But now these elements become the basis for how the AI evaluates questions from each persona’s perspective, synthesizing their views into actionable recommendations.

Implementation Options

You can set this up with varying levels of sophistication depending on your resources and needs.

The Simple Approach

Most AI platforms now offer project or workspace features that let you upload reference documents. In ChatGPT, these are called Projects. Claude has a similar feature. Copilot and Gemini call them Spaces or Gems.

To get started, create a dedicated project and upload your key research documents and personas. Then write clear instructions telling the AI to consult all personas when responding to questions. Something like:

You are helping stakeholders understand our users. When asked questions, consult all of the user personas in this project and provide: (1) a brief summary of how each persona would likely respond, (2) an overview highlighting where they agree and where they differ, and (3) recommendations based on their collective perspectives. Draw on all the research documents to inform your analysis. If the research does not fully cover a topic, search social platforms like Reddit, Twitter, and relevant forums to see how people matching these personas discuss similar issues. If you are still unsure about something, say so honestly and suggest what additional research might help.

This approach has some limitations. There are caps on how many files you can upload, so you might need to prioritize your most important research or consolidate your personas into a single comprehensive document.

The More Sophisticated Approach

For larger organizations or more ongoing use, a tool like Notion offers advantages because it can hold your entire research repository and has AI capabilities built in. You can create databases for different types of research, link them together, and then use the AI to query across everything.

The benefit here is that the AI has access to much more context. When a stakeholder asks a question, it can draw on surveys, support tickets, interview transcripts, and analytics data all at once. This makes for richer, more nuanced responses.

What This Does Not Replace

I should be clear about the limitations.

Virtual personas are not a substitute for talking to real users. They are a way to make existing research more accessible and actionable.

There are several scenarios where you still need primary research:

  • When launching something genuinely new that your existing research does not cover;
  • When you need to validate specific designs or prototypes;
  • When your repository data is getting stale;
  • When stakeholders need to hear directly from real humans to build empathy.

In fact, you can configure the AI to recognize these situations. When someone asks a question that goes beyond what the research can answer, the AI can respond with something like: “I do not have enough information to answer that confidently. This might be a good question for a quick user interview or survey.”

And when you do conduct new research, that data feeds back into the repository. The personas evolve over time as your understanding deepens. This is much better than the traditional approach, where personas get created once and then slowly drift out of date.

The Organizational Shift

If this approach catches on in your organization, something interesting happens.

The UX team’s role shifts from being the gatekeepers of user knowledge to being the curators and maintainers of the repository.

Instead of spending time creating reports that may or may not get read, you spend time ensuring the repository stays current and that the AI is configured to give helpful responses.

Research communication changes from push (presentations, reports, emails) to pull (stakeholders asking questions when they need answers). User-centered thinking becomes distributed across the organization rather than concentrated in one team.

This does not make UX researchers less valuable. If anything, it makes them more valuable because their work now has a wider reach and greater impact. But it does change the nature of the work.

Getting Started

If you want to try this approach, start small. If you need a primer on functional personas before diving in, I have written a detailed guide to creating them. Pick one project or team and set up a simple implementation using ChatGPT Projects or a similar tool. Gather whatever research you have (even if it feels incomplete), create one or two personas, and see how stakeholders respond.

Pay attention to what questions they ask. These will tell you where your research has gaps and what additional data would be most valuable.

As you refine the approach, you can expand to more teams and more sophisticated tooling. But the core principle stays the same: take all that scattered user knowledge and give it a voice that anyone in your organization can hear.

In my previous article, I argued that we should move from demographic personas to functional personas that focus on what users are trying to do. Now I am suggesting we take the next step: from static personas to interactive ones that can actually participate in the conversations where decisions get made.

Because every day, across your organization, people are making decisions that affect your users. And your users deserve a seat at the table, even if it is a virtual one.

— Paul Boag
How To Measure The Impact Of Features
https://smashingmagazine.com/2025/12/how-measure-impact-features-tars/
Fri, 19 Dec 2025 10:00:00 GMT

So we design and ship a shiny new feature. How do we know if it’s working? How do we measure and track its impact? There is no shortage of UX metrics, but what if we wanted to establish a simple, repeatable, meaningful UX metric — specifically for our features? Well, let’s see how to do just that.

I first heard about the TARS framework from Adrian H. Raudaschl’s wonderful article on “How To Measure Impact of Features”. Here, Adrian highlighted how his team tracks and decides which features to focus on — and then maps them against each other in a 2×2 matrix.

It turned out to be a very useful framework to visualize the impact of UX work through the lens of business metrics.

Let’s see how it works.

1. T = Target Audience (%)

We start by quantifying the target audience by exploring what percentage of a product’s users have the specific problem that a feature aims to solve. We can study existing or similar features that try to solve similar problems, and how many users engage with them.

Target audience isn’t the same as feature usage though. As Adrian noted, if we know that an existing Export Button feature is used by 5% of all users, it doesn’t mean that the target audience is 5%. More users might have the problem that the export feature is trying to solve, but they can’t find it.

Question we ask: “What percentage of all our product’s users have that specific problem that a new feature aims to solve?”

2. A = Adoption (%)

Next, we measure how well we are “acquiring” our target audience. For that, we track how many users actually engage successfully with that feature over a specific period of time.

We don’t focus on CTRs or session duration here, but rather on whether users meaningfully engage with the feature — for example, anything that signals they found it valuable, such as sharing the export URL, the number of exported files, or the usage of filters and settings.

High feature adoption (>60%) suggests that the problem was impactful. Low adoption (<20%) might imply that the problem has simple workarounds that people have relied upon. Changing habits takes time, too, and so low adoption in the beginning is expected.

Sometimes, low feature adoption has nothing to do with the feature itself, but rather where it sits in the UI. Users might never discover it if it’s hidden or if it has a confusing label. It must be obvious enough for people to stumble upon it.

Low adoption doesn’t always equal failure. If a problem only affects 10% of users, hitting 50–75% adoption within that specific niche means the feature is a success.

Question we ask: “What percentage of active target users actually use the feature to solve that problem?”

3. R = Retention (%)

Next, we study whether a feature is actually used repeatedly. We measure the frequency of use, or specifically, how many users who engaged with the feature actually keep using it over time. Typically, it’s a strong signal for meaningful impact.

If a feature has a >50% retention rate (avg.), we can be quite confident that it has high strategic importance. A 25–35% retention rate signals medium strategic significance, and 10–20% retention signals low strategic importance.

Question we ask: “Of all the users who meaningfully adopted a feature, how many came back to use it again?”

4. S = Satisfaction Score (CES)

Finally, we measure the level of satisfaction that users have with that feature that we’ve shipped. We don’t ask everyone — we ask only “retained” users. It helps us spot hidden troubles that might not be reflected in the retention score.

Once users have actually used a feature multiple times, we ask them how easy it was to solve their problem with that feature — on a scale from “much more difficult” to “much easier than expected”. That rating becomes the feature’s satisfaction score.

Using TARS For Feature Strategy

Once we start measuring with TARS, we can calculate an S÷T score — the percentage of Satisfied Users ÷ Target Users. It gives us a sense of how well a feature is performing for our intended target audience. Once we do that for every feature, we can map all features across 4 quadrants in a 2×2 matrix.
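
For example, with purely hypothetical numbers: if 40% of all users fall into a feature’s target audience (T), and the satisfied, retained users of that feature amount to 10% of all users (S), the S÷T score is 10 ÷ 40 = 25% — the feature currently serves about a quarter of its intended audience well.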

Overperforming features are worth paying attention to: they have low retention but high satisfaction. These might simply be features that users don’t have to use frequently, but when they do, they are extremely effective.

Liability features have high retention but low satisfaction, so they likely need improvement. And then we can also identify core features and project features — and have a conversation with designers, PMs, and engineers on what we should work on next.

Conversion Rate Is Not a UX Metric

TARS doesn’t cover conversion rate, and for a good reason. As Fabian Lenz noted, conversion is often considered to be the ultimate indicator of success — yet in practice it’s always very difficult to present a clear connection between smaller design initiatives and big conversion goals.

The truth is that almost everybody on the team is working towards better conversion. An uptick might be connected to many different initiatives — from sales and marketing to web performance boost to seasonal effects to UX initiatives.

UX can, of course, improve conversion, but it’s not really a UX metric. Often, people simply can’t choose the product they are using. And often a desired business outcome comes out of necessity and struggle, rather than trust and appreciation.

High Conversion Despite Bad UX

As Fabian writes, high conversion rate can happen despite poor UX, because:

  • Strong brand power pulls people in,
  • Urgency tactics are aggressive but effective,
  • Prices are extremely attractive,
  • Marketing performs brilliantly,
  • Customers are historically loyal,
  • Users simply have no alternative.

Low Conversion Despite Great UX

At the same time, a low conversion rate can occur despite great UX, because:

  • Offers aren’t relevant to the audience,
  • Users don’t trust the brand,
  • The business model is poor or the risk of failure is high,
  • Marketing doesn’t reach the right audience,
  • External factors interfere (price, timing, competition).

An improved conversion is the positive outcome of UX initiatives. But good UX work typically improves task completion, reduces time on task, minimizes errors, and avoids decision paralysis. And there are plenty of actionable design metrics we could use to track UX and drive sustainable success.

Wrapping Up

Product metrics alone don’t always provide an accurate view of how well a product performs. Sales might perform well, but users might be extremely inefficient and frustrated. Yet the churn is low because users can’t choose the tool they are using.

We need UX metrics to understand and improve user experience. What I love most about TARS is that it’s a neat way to connect customers’ usage and customers’ experience with relevant product metrics. Personally, I would extend TARS with UX-focused metrics and KPIs as well — depending on the needs of the project.

Huge thanks to Adrian H. Raudaschl for putting it together. And if you are interested in metrics, I highly recommend you follow him for practical and useful guides all around just that!

Meet “How To Measure UX And Design Impact”

You can find more details on UX Strategy in 🪴 Measure UX & Design Impact (8h), a practical guide for designers and UX leads to measure and show your UX impact on business. Use the code 🎟 IMPACT to save 20% off today. Jump to the details.

Video + UX Training

$495.00 (regular price $799.00)

25 video lessons (8h) + Live UX Training.
100-day money-back guarantee.

Video only

$250.00 (regular price $395.00)

25 video lessons (8h). Updated yearly.
Also available as a UX Bundle with 3 video courses.

— Vitaly Friedman
Smashing Animations Part 7: Recreating Toon Text With CSS And SVG
https://smashingmagazine.com/2025/12/smashing-animations-part-7-recreating-toon-text-css-svg/
Wed, 17 Dec 2025 10:00:00 GMT

After finishing a project that required me to learn everything I could about CSS and SVG animations, I started writing this series about Smashing Animations and “How Classic Cartoons Inspire Modern CSS.” To round off this year, I want to show you how to use modern CSS to create that element that makes Toon Titles so impactful: their typography.

Title Artwork Design

In the silent era of the 1920s and early ’30s, the typography of a film’s title card created a mood, set the scene, and reminded an audience of the type of film they’d paid to see.

Cartoon title cards were also branding, mood, and scene-setting, all rolled into one. In the early years, when major studio budgets were bigger, these title cards were often illustrative and painterly.

But when television boomed during the 1950s, budgets dropped, and cards designed by artists like Lawrence “Art” Goble adopted a new visual language, becoming more graphic, stylised, and less intricate.

Note: Lawrence “Art” Goble is one of the often overlooked heroes of mid-century American animation. He primarily worked for Hanna-Barbera during its most influential years of the 1950s and 1960s.

Goble wasn’t a character animator. His role was to create atmosphere, so he designed environments for The Flintstones, Huckleberry Hound, Quick Draw McGraw, and Yogi Bear, as well as the opening title cards that set the tone. His title cards, featuring paintings with a logo overlaid, helped define the iconic look of Hanna-Barbera.

Goble’s artwork for characters such as Quick Draw McGraw and Yogi Bear was effective on smaller TV screens. Rather than reproducing a still from the cartoon, he focused on presenting a single, strong idea — often in silhouette — that captured its essence. In “The Buzzin’ Bear,” Yogi buzzes by in a helicopter. He bounces away, pic-a-nic basket in hand, in “Bear on a Picnic,” and for his “Prize Fight Fright,” Yogi boxes the title text.

With little or no motion to rely on, Goble’s single frames had to create a mood, set the scene, and describe a story. They did this using flat colours, graphic shapes, and typography that was frequently integrated into the artwork.

As designers who work on the web, toon titles can teach us plenty about how to convey a brand’s personality, make a first impression, and set expectations for someone’s experience using a product or website. We can learn from the artists’ techniques to create effective banners, landing-page headers, and even good ol’ fashioned splash screens.

Toon Title Typography

Cartoon title cards show how merging type with imagery delivers the punch a header or hero needs. With a handful of text-shadow, text-stroke, and transform tricks, modern CSS lets you tap into that same energy.

The Toon Text Title Generator

Partway through writing this article, I realised it would be useful to have a tool for generating text styled like the cartoon titles I love so much. So I made one.

My Toon Text Title Generator lets you experiment with colours, strokes, and multiple text shadows. You can adjust paint order, apply letter spacing, preview your text in a selection of sample fonts, and then copy the generated CSS straight to your clipboard to use in a project.

Toon Title CSS

You can simply copy-paste the CSS that the Toon Text Title Generator provides you. But let’s look closer at what it does.

Text shadow

Look at the type in this title from Augie Doggie’s episode “Yuk-Yuk Duck,” with its pale yellow letters and dark, hard, offset shadow that lifts it off the background and creates the illusion of depth.

You probably already know that text-shadow accepts four values: (1) horizontal and (2) vertical offsets, (3) blur, and (4) a colour which can be solid or semi-transparent. Those offset values can be positive or negative, so I can replicate “Yuk-Yuk Duck” using a hard shadow pulled down and to the right:

color: #f7f76d;
text-shadow: 5px 5px 0 #1e1904;

On the other hand, this “Pint Giant” title has a different feel with its negative semi-soft shadow:

color: #c2a872;
text-shadow:
  -7px 5px 0 #0b100e,
  0 -5px 10px #546c6f;

To add extra depth and create more interesting effects, I can layer multiple shadows. For “Let’s Duck Out,” I combine four shadows: the first a solid shadow with a negative horizontal offset to lift the text off the background, followed by progressively softer shadows to create a blur around it:

color: #6F4D80;
text-shadow:
  -5px 5px 0 #260e1e, /* Shadow 1 */
  0 0 15px #e9ce96,   /* Shadow 2 */
  0 0 30px #e9ce96,   /* Shadow 3 */
  0 0 30px #e9ce96;   /* Shadow 4 */

These shadows show that using text-shadow isn’t just about creating lighting effects, as they can also be decorative and add personality.

Text Stroke

Many cartoon title cards feature letters with a bold outline that makes them stand out from the background. I can recreate this effect using text-stroke. For a long time, this property was only available in WebKit browsers, but the -webkit-prefixed version is now supported across all modern browsers.

text-stroke is a shorthand for two properties. The first, text-stroke-width, draws a contour around individual letters, while the second, text-stroke-color, controls its colour. For “Whatever Goes Pup,” I added a 4px blue stroke to the yellow text:

color: #eff0cd;
-webkit-text-stroke: 4px #7890b5;
text-stroke: 4px #7890b5;

Strokes can be especially useful when they’re combined with shadows, so for “Growing, Growing, Gone,” I added a thin 3px stroke to a barely blurred 1px shadow to create this three-dimensional text effect:

color: #fbb999;
text-shadow: 3px 5px 1px #5160b1;
-webkit-text-stroke: 3px #984336;
text-stroke: 3px #984336;

Paint Order

Using text-stroke doesn’t always produce the expected result, especially with thinner letters and thicker strokes, because by default the browser draws a stroke over the fill. Sadly, CSS still does not permit me to adjust stroke placement as I often do in Sketch. However, the paint-order property has values that allow me to place the stroke behind, rather than in front of, the fill.

paint-order: stroke paints the stroke first, then the fill, whereas paint-order: fill does the opposite:

color: #fbb999;
paint-order: fill;
text-shadow: 3px 5px 1px #5160b1;
-webkit-text-stroke-color: #984336;
-webkit-text-stroke-width: 3px;
text-stroke-color: #984336;
text-stroke-width: 3px;

An effective stroke keeps letters readable, adds weight, and — when combined with shadows and paint order — gives flat text real presence.

Backgrounds Inside Text

Many cartoon title cards go beyond flat colour by adding texture, gradients, or illustrated detail to the lettering. Sometimes that’s a texture, other times it might be a gradient with a subtle tonal shift. On the web, I can recreate this effect by using a background image or gradient behind the text, and then clipping it to the shape of the letters. This relies on two properties working together: background-clip: text and text-fill-color: transparent.

First, I apply a background behind the text. This can be a bitmap or vector image or a CSS gradient. For this example from the Quick Draw McGraw episode “Baba Bait,” the title text includes a subtle top–bottom gradient from dark to light:

background: linear-gradient(0deg, #667b6a, #1d271a);

Next, I clip that background to the glyphs and make the text transparent so the background shows through:

-webkit-background-clip: text;
-webkit-text-fill-color: transparent;

With just those two lines, the background is no longer painted behind the text; instead, it’s painted within it. This technique works especially well when combined with strokes and shadows. A clipped gradient provides the lettering with colour and texture, a stroke keeps its edges sharp, and a shadow elevates it from the background. Together, they recreate the layered look of hand-painted title cards using nothing more than a little CSS. As always, test clipped text carefully, as browser quirks can sometimes affect shadows and rendering.
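
Putting those pieces together, a minimal sketch of the combined effect might look like this (the colours, sizes, and selector are illustrative rather than taken from a specific title card):

h2 {
  /* Paint a gradient inside the glyphs */
  background: linear-gradient(0deg, #667b6a, #1d271a);
  -webkit-background-clip: text;
  -webkit-text-fill-color: transparent;

  /* Keep the edges sharp and paint the stroke behind the fill */
  -webkit-text-stroke: 3px #1d271a;
  paint-order: stroke;

  /* drop-shadow() follows the clipped result, sidestepping the quirk where
     text-shadow can show through a transparent fill */
  filter: drop-shadow(5px 5px 0 #1e1904);
}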

Splitting Text Into Individual Characters

Sometimes I don’t want to style a whole word or heading. I want to style individual letters — to nudge a character into place, give one glyph extra weight, or animate a few letters independently.

In plain HTML and CSS, there’s only one reliable way to do that: wrap each character in its own span element. I could do that manually, but that would be fragile, hard to maintain, and would quickly fall apart when copy changes. Instead, when I need per-letter control, I use a text-splitting library like splt.js (although other solutions are available). This takes a text node and automatically wraps words or characters, giving me extra hooks to animate and style without messing up my markup.

It’s an approach that keeps my HTML readable and semantic, while giving me the fine-grained control I need to recreate the uneven, characterful typography you see in classic cartoon title cards. However, this approach comes with accessibility caveats, as most screen readers read text nodes in order. So this:

<h2>Hum Sweet Hum</h2>

…reads as you’d expect:

Hum Sweet Hum

But this:

<h2>
<span>H</span>
<span>u</span>
<span>m</span>
<!-- etc. -->
</h2>

…can be interpreted differently depending on the browser and screen reader. Some will concatenate the letters and read the words correctly. Others may pause between letters, which in a worst-case scenario might sound like:

“H…” “U…” “M…”

Sadly, some splitting solutions don’t deliver an always accessible result, so I’ve written my own text splitter, splinter.js, which is currently in beta.

Transforming Individual Letters

To activate my Toon Text Splitter, I add a data- attribute to the element I want to split:

<h2 data-split="toon">Hum Sweet Hum</h2>

First, my script separates each word into individual letters and wraps them in a span element with class and ARIA attributes applied:

<span class="toon-char" aria-hidden="true">H</span>
<span class="toon-char" aria-hidden="true">u</span>
<span class="toon-char" aria-hidden="true">m</span>

The script then takes the initial content of the split element and adds it as an aria-label attribute to help maintain accessibility:

<h2 data-split="toon" aria-label="Hum Sweet Hum">
  <span class="toon-char" aria-hidden="true">H</span>
  <span class="toon-char" aria-hidden="true">u</span>
  <span class="toon-char" aria-hidden="true">m</span>
</h2>

With those class attributes applied, I can then style individual characters as I choose.

For example, for “Hum Sweet Hum,” I want to replicate how its letters shift away from the baseline. After using my Toon Text Splitter, I applied four different translate values using several :nth-child selectors to create a semi-random look:

/* 4th, 8th, 12th... */
.toon-char:nth-child(4n) { translate: 0 -8px; }
/* 1st, 5th, 9th... */
.toon-char:nth-child(4n+1) { translate: 0 -4px; }
/* 2nd, 6th, 10th... */
.toon-char:nth-child(4n+2) { translate: 0 4px; }
/* 3rd, 7th, 11th... */
.toon-char:nth-child(4n+3) { translate: 0 8px; }

But translate is only one property I can use to transform my toon text.

I could also rotate those individual characters for an even more chaotic look:

/* 4th, 8th, 12th... */
.toon-line .toon-char:nth-child(4n) { rotate: -4deg; }
/* 1st, 5th, 9th... */
.toon-char:nth-child(4n+1) { rotate: -8deg; }
/* 2nd, 6th, 10th... */
.toon-char:nth-child(4n+2) { rotate: 4deg; }
/* 3rd, 7th, 11th... */
.toon-char:nth-child(4n+3) { rotate: 8deg; }

And, of course, I could add animations to jiggle those characters and bring my toon text style titles to life. First, I created a keyframe animation that rotates the characters:

@keyframes jiggle {
  0%, 100% { transform: rotate(var(--base-rotate, 0deg)); }
  25% { transform: rotate(calc(var(--base-rotate, 0deg) + 3deg)); }
  50% { transform: rotate(calc(var(--base-rotate, 0deg) - 2deg)); }
  75% { transform: rotate(calc(var(--base-rotate, 0deg) + 1deg)); }
}

Before applying it to the span elements created by my Toon Text Splitter:

.toon-char {
  animation: jiggle 3s infinite ease-in-out;
  transform-origin: center bottom;
}

And finally, setting the rotation amount and a delay before each character begins to jiggle:

.toon-char:nth-child(4n) { --base-rotate: -2deg; }
.toon-char:nth-child(4n+1) { --base-rotate: -4deg; }
.toon-char:nth-child(4n+2) { --base-rotate: 2deg; }
.toon-char:nth-child(4n+3) { --base-rotate: 4deg; }

.toon-char:nth-child(4n) { animation-delay: 0.1s; }
.toon-char:nth-child(4n+1) { animation-delay: 0.3s; }
.toon-char:nth-child(4n+2) { animation-delay: 0.5s; }
.toon-char:nth-child(4n+3) { animation-delay: 0.7s; }

One Frame To Make An Impression

Cartoon title artists had one frame to make an impression, and their typography was as important as the artwork they painted. The same is true on the web.

A well-designed header or hero area needs clarity, character, and confidence — not simply a faded full-width background image.

With a few carefully chosen CSS properties — shadows, strokes, clipped backgrounds, and some restrained animation — we can recreate that same impact. I love toon text not because I’m nostalgic, but because its design is intentional. Make deliberate choices, and let a little toon text typography add punch to your designs.

— Andy Clarke
Accessible UX Research, eBook Now Available For Download
https://smashingmagazine.com/2025/12/accessible-ux-research-ebook-release/
Tue, 09 Dec 2025 16:00:00 GMT

This article is sponsored by Accessible UX Research.

Smashing Library expands again! We’re so happy to announce our newest book, Accessible UX Research, is now available for download in eBook formats. Michele A. Williams takes us for a deep dive into the real world of UX testing, and provides a road map for including users with different abilities and needs in every phase of testing.

But the truth is, you don’t need to be conducting UX testing or even be a UX professional to get a lot out of this book. Michele gives in-depth descriptions of the assistive technology we should all be familiar with, in addition to disability etiquette, common pitfalls when creating accessible prototypes, and so much more. You’ll refer to this book again and again in your daily work.

This is also your last chance to get your printed copy at our discounted presale price. We expect printed copies to start shipping in early 2026. We know you’ll love this book, but don’t just take our word for it — we asked a few industry experts to check out Accessible UX Research too:

“Accessible UX Research stands as a vital and necessary resource. In addressing disability at the User Experience Research layer, it helps to set an equal and equitable tone for products and features that resonates through the rest of the creation process. The book provides a solid framework for all aspects of conducting research efforts, including not only process considerations, but also importantly the mindset required to approach the work.

This is the book I wish I had when I was first getting started with my accessibility journey. It is a gift, and I feel so fortunate that Michele has chosen to share it with us all.”

Eric Bailey, Accessibility Advocate

“User research in accessibility is non-negotiable for actually meeting users’ needs, and this book is a critical piece in the puzzle of actually doing and integrating that research into accessibility work day to day.”

Devon Pershing, Author of The Accessibility Operations Guidebook

“Our decisions as developers and designers are often based on recommendations, assumptions, and biases. Usually, this doesn’t work, because checking off lists or working solely from our own perspective can never truly represent the depth of human experience. Michele’s book provides you with the strategies you need to conduct UX research with diverse groups of people, challenge your assumptions, and create truly great products.”

Manuel Matuzović, Author of the Web Accessibility Cookbook

“This book is a vital resource on inclusive research. Michele Williams expertly breaks down key concepts, guiding readers through disability models, language, and etiquette. A strong focus on real-world application equips readers to conduct impactful, inclusive research sessions. By emphasizing diverse perspectives and proactive inclusion, the book makes a compelling case for accessibility as a core principle rather than an afterthought. It is a must-read for researchers, product-makers, and advocates!”

Anna E. Cook, Accessibility and Inclusive Design Specialist

About The Book

The book isn’t a checklist for you to complete as a part of your accessibility work. It’s a practical guide to inclusive UX research, from start to finish. If you’ve ever felt unsure how to include disabled participants, or worried about “getting it wrong,” this book is for you. You’ll get clear, practical strategies to make your research more inclusive, effective, and reliable.

Inside, you’ll learn how to:

  • Plan research that includes disabled participants from the start,
  • Recruit participants with disabilities,
  • Facilitate sessions that work for a range of access needs,
  • Ask better questions and avoid unintentionally biased research methods,
  • Build trust and confidence in your team around accessibility and inclusion.

The book also challenges common assumptions about disability and urges readers to rethink what inclusion really means in UX research and beyond. Let’s move beyond compliance and start doing research that reflects the full diversity of your users. Whether you’re in industry or academia, this book gives you the tools — and the mindset — to make it happen.

High-quality hardcover, 320 pages. Written by Dr. Michele A. Williams. Cover art by Espen Brunborg. Print edition shipping early 2026. eBook now available for download. Download a free sample (PDF, 2.3MB) and reserve your print copy at the presale price.

“Accessible UX Research” shares successful strategies that’ll help you recruit the participants you need for the study you’re designing.

Contents
  1. Disability mindset: For inclusive research to succeed, we must first confront our mindset about disability, typically influenced by ableism.
  2. Diversity of disability: Accessibility is not solely about blind screen reader users; disability categories help us unpack and process the diversity of disabled users.
  3. Disability in the stages of UX research: Disabled participants can and should be part of every research phase — formative, prototype, and summative.
  4. Recruiting disabled participants: Recruiting disabled participants is not always easy, but that simply means we need to learn strategies on where to look.
  5. Designing your research: While our goal is to influence accessible products, our research execution must also be accessible.
  6. Facilitating an accessible study: Preparation and communication with your participants can ensure your study logistics run smoothly.
  7. Analyzing and reporting with accuracy and impact: How you communicate your findings is just as important as gathering them in the first place — so prepare to be a storyteller, educator, and advocate.
  8. Disability in the UX research field: Inclusion isn’t just for research participants, it’s important for our colleagues as well, as explained by blind UX Researcher Dr. Cynthia Bennett.
The book will challenge your disability mindset and what it means to be truly inclusive in your work.

Who This Book Is For

Whether a UX professional who conducts research in industry or academia, or more broadly part of an engineering, product, or design function, you’ll want to read this book if…

  1. You have been tasked to improve accessibility of your product, but need to know where to start to facilitate this successfully.
  2. You want to establish a culture for accessibility in your company, but you’re not sure how to make it work.
  3. You want to move from WCAG/EAA compliance to established accessibility practices and inclusion in research practices and beyond.
  4. You want to improve your overall accessibility knowledge and be viewed as an Accessibility Specialist for your organization.
About the Author

Dr. Michele A. Williams is owner of M.A.W. Consulting, LLC - Making Accessibility Work. Her 20+ years of experience include influencing top tech companies as a Senior User Experience (UX) Researcher and Accessibility Specialist and obtaining a PhD in Human-Centered Computing focused on accessibility. An international speaker, published academic author, and patented inventor, she is passionate about educating and advising on technology that does not exclude disabled users.

Community Matters ❤️

Producing a book takes quite a bit of time, and we couldn’t pull it off without the support of our wonderful community. A huge shout-out to Smashing Members for the kind, ongoing support. The eBook is and always will be free for Smashing Members. Plus, Members get a friendly discount when purchasing their printed copy. Just sayin’! ;-)

More Smashing Books & Goodies

Promoting best practices and providing you with practical tips to master your daily coding and design challenges has always been (and will be) at the core of everything we do at Smashing.

In the past few years, we were very lucky to have worked together with some talented, caring people from the web community to publish their wealth of experience as printed books that stand the test of time. Trine, Heather, and Steven are three of these people. Have you checked out their books already?

The Ethical Design Handbook

A practical guide on ethical design for digital products.

$44

Understanding Privacy

Everything you need to know to put your users first and make a better web.

$44

Touch Design for Mobile Interfaces

Learn how touchscreen devices really work — and how people really use them.

$44

— Ari Stiles
State, Logic, And Native Power: CSS Wrapped 2025
https://smashingmagazine.com/2025/12/state-logic-native-power-css-wrapped-2025/
Tue, 09 Dec 2025 10:00:00 GMT

If I were to divide CSS evolutions into categories, I’d say we have moved far beyond the days when we simply asked for border-radius to feel like we were living in the future. We are currently living in a moment where the platform is handing us tools that don’t just tweak the visual layer, but fundamentally redefine how we architect interfaces. I thought the number of features announced in 2024 couldn’t be topped. I’ve never been so happily wrong.

The Chrome team’s “CSS Wrapped 2025” is not just a list of features; it is a manifesto for a dynamic, native web. As someone who has spent a couple of years documenting these evolutions — from defining “CSS5” eras to the intricacies of modern layout utilities — I find myself looking at this year’s wrap-up with a huge sense of excitement. We are seeing a shift towards “Optimized Ergonomics” and “Next-gen interactions” that allow us to stop fighting the code and start sculpting interfaces in their natural state.

In this article, you can find a comprehensive look at the standout features from Chrome’s report, viewed through the lens of my recent experiments and hopes for the future of the platform.

The Component Revolution: Finally, A Native Customizable Select

For years, we have relied on heavy JavaScript libraries to style dropdowns, a “decades-old problem” that the platform has finally solved. As I detailed in my deep dive into the history of the customizable select (and related articles), this has been a long road involving Open UI, bikeshedding names like <selectmenu> and <selectlist>, and finally landing on a solution that re-uses the existing <select> element.

The introduction of appearance: base-select is a strong foundation. It allows us to fully customize the <select> element — including the button and the dropdown list (via ::picker(select)) — using standard CSS. Crucially, this is built with progressive enhancement in mind. By wrapping our styles in a feature query, we ensure a seamless experience across all browsers.

We can opt in to this new behavior without breaking older browsers:

select {
  /* Opt-in for the new customizable select */
  @supports (appearance: base-select) {
    &, &::picker(select) {
      appearance: base-select;
    }
  }
}

The fantastic addition to allow rich content inside options, such as images or flags, is a lot of fun. We can create all sorts of selects nowadays:

  • Demo: I created a Poké-adventure demo showing how the new <selectedcontent> element can clone rich content (like a Pokéball icon) from an option directly into the button.

See the Pen A customizable select with images inside of the options and the selectedcontent [forked] by utilitybend.

See the Pen A customizable select with only pseudo-elements [forked] by utilitybend.

See the Pen An actual Select Menu with optgroups [forked] by utilitybend.
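
For reference, the markup behind this pattern looks roughly like this — a sketch following the customizable select proposal, with placeholder image paths and labels:

<select>
  <button>
    <!-- Mirrors the content of the currently chosen option -->
    <selectedcontent></selectedcontent>
  </button>
  <option value="bulbasaur">
    <img src="bulbasaur.png" alt="" /> Bulbasaur
  </option>
  <option value="charmander">
    <img src="charmander.png" alt="" /> Charmander
  </option>
</select>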

This feature alone signals a massive shift in how we will build forms, reducing dependencies and technical debt.

Scroll Markers And The Death Of The JavaScript Carousel

Creating carousels has historically been a friction point between developers and clients. Clients love them, developers dread the JavaScript required to make them accessible and performant. The arrival of ::scroll-marker and ::scroll-button() pseudo-elements changes this dynamic entirely.

These features allow us to create navigation dots and scroll buttons purely with CSS, linked natively to the scroll container. As I wrote on my blog, this was Love at first slide. The ability to create a fully functional, accessible slider without a single line of JavaScript is not just convenient; it is a triumph for performance. There are some accessibility concerns around this feature, and even though these are valid, there might be a way for us developers to make it work. The good thing is, all these UI changes are making it a lot easier than custom DOM manipulation and dragging around ARIA attributes, but I digress…

We can now group markers automatically using scroll-marker-group and style the buttons using anchor positioning to place them exactly where we want.

.carousel {
  overflow-x: auto;
  scroll-marker-group: after; /* Creates the container for dots */
  anchor-name: --carousel;    /* Referenced by the scroll buttons below */

  /* Create the buttons */
  &::scroll-button(inline-end),
  &::scroll-button(inline-start) {
    content: " ";
    position: absolute;
    /* Use anchor positioning to center them */
    position-anchor: --carousel;
    top: anchor(center);
  }

  /* Create the markers on the children */
  div {
    &::scroll-marker {
      content: " ";
      width: 24px;
      border-radius: 50%;
      cursor: pointer;
    }
    /* Highlight the active marker */
    &::scroll-marker:target-current {
      background: white;
    }
  }
}

See the Pen Carousel Pure HTML and CSS [forked] by utilitybend.

See the Pen Webshop slick slider remake in CSS [forked] by utilitybend.

State Queries: Sticky Thing Stuck? Snappy Thing Snapped?

For a long time, we have lacked the ability to know if a “sticky thing is stuck” or if a “snappy item is snapped” without relying on IntersectionObserver hacks. Chrome 133 introduced scroll-state queries, allowing us to query these states declaratively.

By setting container-type: scroll-state, we can now style children based on whether they are stuck, snapped, or overflowing. This is a massive “quality of life” improvement that I have been eagerly waiting for since CSS Day 2023. It has even evolved since then: we can now also query the direction of the scroll. Lovely!

For a simple example: we can finally apply a shadow to a header only when it is actually sticking to the top of the viewport:

.header-container {
  container-type: scroll-state;
  position: sticky;
  top: 0;

  header {
    transition: box-shadow 0.5s ease-out;
    /* The query checks the state of the container */
    @container scroll-state(stuck: top) {
      box-shadow: rgba(0, 0, 0, 0.6) 0px 12px 28px 0px;
    }
  }
}
  • Demo: A sticky header that only applies a shadow when it is actually stuck.

See the Pen Sticky headers with scroll-state query, checking if the sticky element is stuck [forked] by utilitybend.

  • Demo: A Pokémon-themed list that uses scroll-state queries combined with anchor positioning to move a frame over the currently snapped character.

See the Pen Scroll-state query to check which item is snapped with CSS, Pokemon version [forked] by utilitybend.
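
The snapped query follows the same pattern as the stuck one, except that the query container is the element that snaps. A minimal sketch, with illustrative markup and values:

.snap-list {
  overflow-x: auto;
  scroll-snap-type: x mandatory;
}

.snap-list > .item {
  scroll-snap-align: center;
  container-type: scroll-state;
}

.snap-list .card {
  transition: scale 0.3s ease;

  /* Style the card while its parent item is snapped on the x axis */
  @container scroll-state(snapped: x) {
    scale: 1.1;
  }
}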

Optimized Ergonomics: Logic In CSS

The “Optimized Ergonomics” section of CSS Wrapped highlights features that make our workflows more intuitive. Three features stand out as transformative for how we write logic:

  1. if() Statements
    We are finally getting conditionals in CSS. The if() function acts like a ternary operator for stylesheets, allowing us to apply values based on media, support, or style queries inline. This reduces the need for verbose @media blocks for single property changes (see the sketch after this list).
  2. @function functions
    We can finally move some logic to a different place, resulting in cleaner files — a real quality-of-life improvement (also sketched below).
  3. sibling-index() and sibling-count()
    These tree-counting functions solve the issue of staggering animations or styling items based on list size. As I explored in Styling siblings with CSS has never been easier, this eliminates the need to hard-code custom properties (like --index: 1) in our HTML.
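
Here is a minimal sketch of the first two, based on the syntax currently shipping in Chrome — the function name and the values are illustrative:

/* Move a small piece of logic into a reusable function */
@function --fluid-size(--min, --max) {
  result: clamp(var(--min), 4vw, var(--max));
}

h1 {
  font-size: --fluid-size(1.5rem, 3rem);

  /* An inline conditional instead of a separate @media block */
  color: if(
    media(prefers-color-scheme: dark): #eee;
    else: #111
  );
}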

Example: Calculating Layouts

We can now write concise mathematical formulas. For example, staggering an animation for cards entering the screen becomes trivial:

.card-container > * {
  animation: reveal 0.6s ease-out forwards;
  /* No more manual --index variables! */
  animation-delay: calc(sibling-index() * 0.1s);
}

I even experimented with using these functions along with trigonometry to place items in a perfect circle without any JavaScript.

See the Pen Stagger cards using sibling-index() [forked] by utilitybend.

  • Demo: Placing items in a perfect circle using sibling-index, sibling-count, and the new CSS @function feature.

See the Pen The circle using sibling-index, sibling-count and functions [forked] by utilitybend.
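
The core idea boils down to a few lines — each child computes its own angle from its position in the list. A sketch (not the exact demo code; the 120px radius is illustrative):

.circle {
  position: relative;
}

.circle > * {
  --angle: calc((sibling-index() - 1) / sibling-count() * 360deg);
  position: absolute;
  /* cos() and sin() turn the angle into x/y offsets around the circle */
  translate: calc(cos(var(--angle)) * 120px) calc(sin(var(--angle)) * 120px);
}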

My CSS To-Do List: Features I Can’t Wait To Try

While I have been busy sculpting selects and transitions, the “CSS Wrapped 2025” report is packed with other goodies that I haven’t had the chance to fire up in CodePen yet. These are high on my list for my next experiments:

Anchored Container Queries

I used CSS Anchor Positioning for the buttons in my carousel demo, but “CSS Wrapped” highlights an evolution of this: Anchored Container Queries. This solves a problem we’ve all had with tooltips: if the browser flips the tooltip from top to bottom because of space constraints, the “arrow” often stays pointing the wrong way. With anchored container queries (@container anchored(fallback: flip-block)), we can style the element based on which fallback position the browser actually chose.

Nested View Transition Groups

View Transitions have been a revolution, but they came with a specific trade-off: they flattened the element tree, which often broke 3D transforms or overflow: clip. I always had a feeling that it was missing something, and this might just be the answer. By using view-transition-group: nearest, we can finally nest transition groups within each other.

This allows us to maintain clipping effects or 3D rotations during a transition — something that was previously impossible because the elements were hoisted up to the top level.

.card img {
  view-transition-name: photo;
  view-transition-group: nearest; /* Keep it nested! */
}

Typography and Shapes

Finally, the ergonomist in me is itching to try Text Box Trim, which promises to remove that annoying extra whitespace above and below text content (the leading) to finally achieve perfect vertical alignment. And for the creative side, corner-shape and the shape() function are opening up non-rectangular layouts, allowing for “squircles” and complex paths that respond to CSS variables. That being said, I can’t wait to have a design full of squircles!
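
Both already have a syntax to play with. A quick sketch of what I mean, with illustrative values:

/* Trim the extra leading above the cap height and below the alphabetic baseline */
h1 {
  text-box: trim-both cap alphabetic;
}

/* Corners drawn as superellipses ("squircles") rather than circular arcs */
.card {
  border-radius: 2rem;
  corner-shape: squircle;
}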

A Hopeful Future

We are witnessing a world where CSS is becoming capable of handling logic, state, and complex interactions that previously belonged to JavaScript. Features like moveBefore (preserving DOM state for iframes/videos) and attr() (using types beyond strings for colors and grids) further cement this reality.
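
As a small taste of typed attr(), here is a sketch where markup feeds a real colour into CSS (the data attribute name is made up for the example):

/* e.g. <li data-accent="#663399">…</li> */
li {
  border-color: attr(data-accent type(<color>), currentColor);
}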

While some of these features are currently experimental or specific to Chrome, the momentum is undeniable. We must hope for continued support across all browsers through initiatives like Interop to ensure these capabilities become the baseline. That being said, having multiple browser engines is just as important as having all these awesome features land in Chrome first. These new features need to be discussed, tinkered with, and tested before ever landing in browsers.

It is a fantastic moment to get into CSS. We are no longer just styling documents; we are crafting dynamic, ergonomic, and robust applications with a native toolkit that is more powerful than ever.

Let’s get going with this new era and spread the word.

This is CSS Wrapped!

— Brecht De Ruyte
How UX Professionals Can Lead AI Strategy
https://smashingmagazine.com/2025/12/how-ux-professionals-can-lead-ai-strategy/
Mon, 08 Dec 2025 08:00:00 GMT

Your senior management is excited about AI. They’ve read the articles, attended the webinars, and seen the demos. They’re convinced that AI will transform your organization, boost productivity, and give you a competitive edge.

Meanwhile, you’re sitting in your UX role wondering what this means for your team, your workflow, and your users. You might even be worried about your job security.

The problem is that the conversation about how AI gets implemented is happening right now, and if you’re not part of it, someone else will decide how it affects your work. That someone probably doesn’t understand user experience, research practices, or the subtle ways poor implementation can damage the very outcomes management hopes to achieve.

You have a choice. You can wait for directives to come down from above, or you can take control of the conversation and lead the AI strategy for your practice.

Why UX Professionals Must Own the AI Conversation

Management sees AI as efficiency gains, cost savings, competitive advantage, and innovation all wrapped up in one buzzword-friendly package. They’re not wrong to be excited. The technology is genuinely impressive and can deliver real value.

But without UX input, AI implementations often fail users in predictable ways:

  • They automate tasks without understanding the judgment calls those tasks require.
  • They optimize for speed while destroying the quality that made your work valuable.

Your expertise positions you perfectly to guide implementation. You understand users, workflows, quality standards, and the gap between what looks impressive in a demo and what actually works in practice.

Use AI Momentum to Advance Your Priorities

Management’s enthusiasm for AI creates an opportunity to advance priorities you’ve been fighting for unsuccessfully. When management is willing to invest in AI, you can connect those long-standing needs to the AI initiative. Position user research as essential for training AI systems on real user needs. Frame usability testing as the validation method that ensures AI-generated solutions actually work.

How AI gets implemented will shape your team’s roles, your users’ experiences, and your organization’s capability to deliver quality digital products.

Your Role Isn’t Disappearing (It’s Evolving)

Yes, AI will automate some of the tasks you currently do. But someone needs to decide which tasks get automated, how they get automated, what guardrails to put in place, and how automated processes fit around real humans doing complex work.

That someone should be you.

Think about what you already do. When you conduct user research, AI might help you transcribe interviews or identify themes. But you’re the one who knows which participant hesitated before answering, which feedback contradicts what you observed in their behavior, and which insights matter most for your specific product and users.

When you design interfaces, AI might generate layout variations or suggest components from your design system. But you’re the one who understands the constraints of your technical platform, the political realities of getting designs approved, and the edge cases that will break a clever solution.

Your future value comes from the work you’re already doing:

  • Seeing the full picture.
    You understand how this feature connects to that workflow, how this user segment differs from that one, and why the technically correct solution won’t work in your organization’s reality.
  • Making judgment calls.
    You decide when to follow the design system and when to break it, when user feedback reflects a real problem versus a feature request from one vocal user, and when to push back on stakeholders versus find a compromise.
  • Connecting the dots.
    You translate between technical constraints and user needs, between business goals and design principles, between what stakeholders ask for and what will actually solve their problem.

AI will keep getting better at individual tasks. But you’re the person who decides which solution actually works for your specific context. The people who will struggle are those doing simple, repeatable work without understanding why. Your value is in understanding context, making judgment calls, and connecting solutions to real problems.

Step 1: Understand Management’s AI Motivations

Before you can lead the conversation, you need to understand what’s driving it. Management is responding to real pressures: cost reduction, competitive pressure, productivity gains, and board expectations.

Speak their language.
When you talk to management about AI, frame everything in terms of ROI, risk mitigation, and competitive advantage. “This approach will protect our quality standards” is less compelling than “This approach reduces the risk of damaging our conversion rate while we test AI capabilities.”

Separate hype from reality.
Take time to research what AI capabilities actually exist versus what’s hype. Read case studies, try tools yourself, and talk to peers about what’s actually working.

Identify real pain points.
Look for problems AI might legitimately address in your organization. Maybe your team spends hours formatting research findings, or accessibility testing creates bottlenecks. These are the problems worth solving.

Step 2: Audit Your Current State and Opportunities

Map your team’s work. Where does time actually go? Look at the past quarter and categorize how your team spent their hours.

Identify high-volume, repeatable tasks versus high-judgment work.
Repeatable tasks are candidates for automation. High-judgment work is where you add irreplaceable value.

Also, identify what you’ve wanted to do but couldn’t get approved.
This is your opportunity list. Maybe you’ve wanted quarterly usability tests, but only get budget annually. Write these down separately. You’ll connect them to your AI strategy in the next step.

Spot opportunities where AI could genuinely help:

  • Research synthesis:
    AI can help organize and categorize findings.
  • Analyzing user behavior data:
    AI can process analytics and session recordings to surface patterns you might miss.
  • Rapid prototyping:
    AI can quickly generate testable prototypes, speeding up your test cycles.

Step 3: Define AI Principles for Your UX Practice

Before you start forming your strategy, establish principles that will guide every decision.

Set non-negotiables.
User privacy, accessibility, and human oversight of significant decisions. Write these down and get agreement from leadership before you pilot anything.

Define criteria for AI use.
AI is good at pattern recognition, summarization, and generating variations. AI is poor at understanding context, making ethical judgments, and knowing when rules should be broken.

Define success metrics beyond efficiency.
Yes, you want to save time. But you also need to measure quality, user satisfaction, and team capability. Build a balanced scorecard that captures what actually matters.

Create guardrails.
Maybe every AI-generated interface needs human review before it ships. These guardrails prevent the obvious disasters and give you space to learn safely.

Step 4: Build Your AI-in-UX Strategy

Now you’re ready to build the actual strategy you’ll pitch to leadership. Start small with pilot projects that have a clear scope and evaluation criteria.

Connect to business outcomes management cares about.
Don’t pitch “using AI for research synthesis.” Pitch “reducing time from research to insights by 40%, enabling faster product decisions.”

Piggyback your existing priorities on AI momentum.
Remember that opportunity list from Step 2? Now you connect those long-standing needs to your AI strategy. If you’ve wanted more frequent usability testing, explain that AI implementations need continuous validation to catch problems before they scale. AI implementations genuinely benefit from good research practices. You’re simply using management’s enthusiasm for AI as the vehicle to finally get resources for practices that should have been funded all along.

Define roles clearly.
Where do humans lead? Where does AI assist? Where won’t you automate? Management needs to understand that some work requires human judgment and should never be fully automated.

Plan for capability building.
Your team will need training and new skills. Budget time and resources for this.

Address risks honestly.
AI could generate biased recommendations, miss important context, or produce work that looks good but doesn’t actually function. For each risk, explain how you’ll detect it and what you’ll do to mitigate it.

Step 5: Pitch the Strategy to Leadership

Frame your strategy as de-risking management’s AI ambitions, not blocking them. You’re showing them how to implement AI successfully while avoiding the obvious pitfalls.

Lead with outcomes and ROI they care about.
Put the business case up front.

Bundle your wish list into the AI strategy.
When you present your strategy, include those capabilities you’ve wanted but couldn’t get approved before. Don’t present them as separate requests. Integrate them as essential components. “To validate AI-generated designs, we’ll need to increase our testing frequency from annual to quarterly” sounds much more reasonable than “Can we please do more testing?” You’re explaining what’s required for their AI investment to succeed.

Show quick wins alongside a longer-term vision.
Identify one or two pilots that can show value within 30-60 days. Then show them how those pilots build toward bigger changes over the next year.

Ask for what you need.
Be specific. You need a budget for tools, time for pilots, access to data, and support for team training.

Step 6: Implement and Demonstrate Value

Run your pilots with clear before-and-after metrics. Measure everything: time saved, quality maintained, user satisfaction, team confidence.

Document wins and learning.
Failures are useful too. If a pilot doesn’t work out, document why and what you learned.

Share progress in management’s language. Monthly updates should focus on business outcomes, not technical details. “We’ve reduced research synthesis time by 35% while maintaining quality scores” is the right level of detail.

Build internal advocates by solving real problems.
When your AI pilots make someone’s job easier, you create advocates who will support broader adoption.

Iterate based on what works in your specific context. Not every AI application will fit your organization. Pay attention to what’s actually working and double down on that.

Taking Initiative Beats Waiting

AI adoption is happening. The question isn’t whether your organization will use AI, but whether you’ll shape how it gets implemented.

Your UX expertise is exactly what’s needed to implement AI successfully. You understand users, quality, and the gap between impressive demos and useful reality.

Take one practical first step this week.
Schedule 30 minutes to map one AI opportunity in your practice. Pick one area where AI might help, think through how you’d pilot it safely, and sketch out what success would look like.

Then start the conversation with your manager. You might be surprised how receptive they are to someone stepping up to lead this.

You know how to understand user needs, test solutions, measure outcomes, and iterate based on evidence. Those skills don’t change just because AI is involved. You’re applying your existing expertise to a new tool.

Your role isn’t disappearing. It’s evolving into something more strategic, more valuable, and more secure. But only if you take the initiative to shape that evolution yourself.

]]>
hello@smashingmagazine.com (Paul Boag)
<![CDATA[Beyond The Black Box: Practical XAI For UX Practitioners]]> https://smashingmagazine.com/2025/12/beyond-black-box-practical-xai-ux-practitioners/ https://smashingmagazine.com/2025/12/beyond-black-box-practical-xai-ux-practitioners/ Fri, 05 Dec 2025 15:00:00 GMT In my last piece, we established a foundational truth: for users to adopt and rely on AI, they must trust it. We talked about trust being a multifaceted construct, built on perceptions of an AI’s Ability, Benevolence, Integrity, and Predictability. But what happens when an AI, in its silent, algorithmic wisdom, makes a decision that leaves a user confused, frustrated, or even hurt? A mortgage application is denied, a favorite song is suddenly absent from a playlist, and a qualified resume is rejected before a human ever sees it. In these moments, ability and predictability are shattered, and benevolence feels a world away.

Our conversation now must evolve from the why of trust to the how of transparency. The field of Explainable AI (XAI), which focuses on developing methods to make AI outputs understandable to humans, has emerged to address this, but it’s often framed as a purely technical challenge for data scientists. I argue it’s a critical design challenge for products relying on AI. It’s our job as UX professionals to bridge the gap between algorithmic decision-making and human understanding.

This article provides practical, actionable guidance on how to research and design for explainability. We’ll move beyond the buzzwords and into the mockups, translating complex XAI concepts into concrete design patterns you can start using today.

De-mystifying XAI: Core Concepts For UX Practitioners

XAI is about answering the user’s question: “Why?” Why was I shown this ad? Why is this movie recommended to me? Why was my request denied? Think of it as the AI showing its work on a math problem. Without it, you just have an answer, and you’re forced to take it on faith. In showing the steps, you build comprehension and trust. You also allow for your work to be double-checked and verified by the very humans it impacts.

Feature Importance And Counterfactuals

There are a number of techniques we can use to clarify or explain what is happening with AI. While methods range from providing the entire logic of a decision tree to generating natural language summaries of an output, two of the most practical and impactful types of information UX practitioners can introduce into an experience are feature importance (Figure 1) and counterfactuals. These are often the most straightforward for users to understand and the most actionable for designers to implement.

Feature Importance

This explainability method answers, “What were the most important factors the AI considered?” It’s about identifying the top 2-3 variables that had the biggest impact on the outcome. It’s the headline, not the whole story.

Example: Imagine an AI that predicts whether a customer will churn (cancel their service). Feature importance might reveal that “number of support calls in the last month” and “recent price increases” were the two most important factors in determining if a customer was likely to churn.

Counterfactuals

This powerful method answers, “What would I need to change to get a different outcome?” This is crucial because it gives users a sense of agency. It transforms a frustrating “no” into an actionable “not yet.”

Example: Imagine a loan application system that uses AI. A user is denied a loan. Instead of just seeing “Application Denied,” a counterfactual explanation would also share, “If your credit score were 50 points higher, or if your debt-to-income ratio were 10% lower, your loan would have been approved.” This gives the applicant clear, actionable steps they can take to potentially get a loan in the future.

Using Model Data To Enhance The Explanation

Although technical specifics are often handled by data scientists, it's helpful for UX practitioners to know the tools commonly used to extract these “why” insights from complex models: LIME (Local Interpretable Model-agnostic Explanations), which explains individual predictions by approximating the model locally, and SHAP (SHapley Additive exPlanations), which uses a game-theory approach to explain the output of any machine learning model. These libraries essentially break down an AI’s decision to show which inputs were most influential for a given outcome.

When done properly, the data underlying an AI tool’s decision can be used to tell a powerful story. Let’s walk through feature importance and counterfactuals and show how the data science behind the decision can be utilized to enhance the user’s experience.

Now let’s cover feature importance with the assistance of Local Explanations (e.g., LIME) data: This approach answers, “Why did the AI make this specific recommendation for me, right now?” Instead of a general explanation of how the model works, it provides a focused reason for a single, specific instance. It’s personal and contextual.

Example: Imagine an AI-powered music recommendation system like Spotify. A local explanation would answer, “Why did the system recommend this specific song by Adele to you right now?” The explanation might be: “Because you recently listened to several other emotional ballads and songs by female vocalists.”

Finally, let’s cover adding Value-based Explanations (e.g., Shapley Additive Explanations, or SHAP) data to an explanation of a decision: This is a more nuanced version of feature importance that answers, “How did each factor push the decision one way or the other?” It helps visualize what mattered, and whether its influence was positive or negative.

Example: Imagine a bank uses an AI model to decide whether to approve a loan application.

Feature Importance: The model output might show that the applicant’s credit score, income, and debt-to-income ratio were the most important factors in its decision. This answers what mattered.

Feature Importance with Value-Based Explanations (SHAP): SHAP values take feature importance a step further by quantifying how much each factor pushed the model’s output up or down.

  • For an approved loan, SHAP might show that a high credit score significantly pushed the decision towards approval (positive influence), while a slightly higher-than-average debt-to-income ratio pulled it slightly away (negative influence), but not enough to deny the loan.
  • For a denied loan, SHAP could reveal that a low income and a high number of recent credit inquiries strongly pushed the decision towards denial, even if the credit score was decent.

This helps the loan officer explain to the applicant beyond what was considered, to how each factor contributed to the final “yes” or “no” decision.
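As a purely hypothetical illustration (the numbers below are invented for this example), SHAP’s additive property means the model’s base value plus each feature’s contribution sums to the final prediction, which is exactly what makes a push-and-pull narrative possible:

  • Base approval score (an average applicant): 0.50
  • Credit score of 780: +0.22
  • Annual income of $85,000: +0.08
  • Debt-to-income ratio of 38%: −0.10
  • Final score: 0.50 + 0.22 + 0.08 − 0.10 = 0.70 → approved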

It’s crucial to recognize that the ability to provide good explanations often starts much earlier in the development cycle. Data scientists and engineers play a pivotal role by intentionally structuring models and data pipelines in ways that inherently support explainability, rather than trying to bolt it on as an afterthought.

Research and design teams can foster this by initiating early conversations with data scientists and engineers about user needs for understanding, contributing to the development of explainability metrics, and collaboratively prototyping explanations to ensure they are both accurate and user-friendly.

XAI And Ethical AI: Unpacking Bias And Responsibility

Beyond building trust, XAI plays a critical role in addressing the profound ethical implications of AI, particularly concerning algorithmic bias. Explainability techniques, such as analyzing SHAP values, can reveal if a model’s decisions are disproportionately influenced by sensitive attributes like race, gender, or socioeconomic status, even if these factors were not explicitly used as direct inputs.

For instance, if a loan approval model consistently assigns negative SHAP values to applicants from a certain demographic, it signals a potential bias that needs investigation, empowering teams to surface and mitigate such unfair outcomes.

The power of XAI also comes with the potential for “explainability washing.” Just as “greenwashing” misleads consumers about environmental practices, explainability washing can occur when explanations are designed to obscure, rather than illuminate, problematic algorithmic behavior or inherent biases. This could manifest as overly simplistic explanations that omit critical influencing factors, or explanations that strategically frame results to appear more neutral or fair than they truly are. It underscores the ethical responsibility of UX practitioners to design explanations that are genuinely transparent and verifiable.

UX professionals, in collaboration with data scientists and ethicists, hold a crucial responsibility in communicating the why of a decision, and also the limitations and potential biases of the underlying AI model. This involves setting realistic user expectations about AI accuracy, identifying where the model might be less reliable, and providing clear channels for recourse or feedback when users perceive unfair or incorrect outcomes. Proactively addressing these ethical dimensions will allow us to build AI systems that are truly just and trustworthy.

From Methods To Mockups: Practical XAI Design Patterns

Knowing the concepts is one thing; designing them is another. Here’s how we can translate these XAI methods into intuitive design patterns.

Pattern 1: The "Because" Statement (for Feature Importance)

This is the simplest and often most effective pattern. It’s a direct, plain-language statement that surfaces the primary reason for an AI’s action.

  • Heuristic: Be direct and concise. Lead with the single most impactful reason. Avoid jargon at all costs.
Example: Imagine a music streaming service. Instead of just presenting a “Discover Weekly” playlist, you add a small line of microcopy.

Song Recommendation: “Velvet Morning”
Because you listen to “The Fuzz” and other psychedelic rock.
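In markup, this can be as small as one extra line of microcopy attached to the recommendation. A minimal sketch (the class names are illustrative, not from any particular design system):

<li class="recommendation">
  <span class="song-title">Velvet Morning</span>
  <!-- The “Because” statement surfaces the top factor behind the recommendation -->
  <p class="explanation">Because you listen to “The Fuzz” and other psychedelic rock.</p>
</li>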

Pattern 2: The "What-If" Interactive (for Counterfactuals)

Counterfactuals are inherently about empowerment. The best way to represent them is by giving users interactive tools to explore possibilities themselves. This is perfect for financial, health, or other goal-oriented applications.

  • Heuristic: Make explanations interactive and empowering. Let users see the cause and effect of their choices.
Example: A loan application interface. After a denial, instead of a dead end, the user gets a tool to determine how various scenarios (what-ifs) might play out (See Figure 1).

Pattern 3: The Highlight Reel (For Local Explanations)

When an AI performs an action on a user’s content (like summarizing a document or identifying faces in photos), the explanation should be visually linked to the source.

  • Heuristic: Use visual cues like highlighting, outlines, or annotations to connect the explanation directly to the interface element it’s explaining.
Example: An AI tool that summarizes long articles.

AI-Generated Summary Point:
Initial research showed a market gap for sustainable products.

Source in Document:
“...Our Q2 analysis of market trends conclusively demonstrated that no major competitor was effectively serving the eco-conscious consumer, revealing a significant market gap for sustainable products...”

Pattern 4: The Push-and-Pull Visual (for Value-based Explanations)

For more complex decisions, users might need to understand the interplay of factors. Simple data visualizations can make this clear without being overwhelming.

  • Heuristic: Use simple, color-coded data visualizations (like bar charts) to show the factors that positively and negatively influenced a decision.
Example: An AI screening a candidate’s profile for a job.

Why this candidate is a 75% match:

Factors pushing the score up:
  • 5+ Years UX Research Experience
  • Proficient in Python

Factors pushing the score down:
  • No experience with B2B SaaS

Learning and using these design patterns in the UX of your AI product will help increase the explainability. You can also use additional techniques that I’m not covering in-depth here. This includes the following:

  • Natural language explanations: Translating an AI’s technical output into simple, conversational human language that non-experts can easily understand.
  • Contextual explanations: Providing a rationale for an AI’s output at the specific moment and location it is most relevant to the user’s task.
  • Relevant visualizations: Using charts, graphs, or heatmaps to visually represent an AI’s decision-making process, making complex data intuitive and easier for users to grasp.

A Note For the Front End: Translating these explainability outputs into seamless user experiences also presents its own set of technical considerations. Front-end developers often grapple with API design to efficiently retrieve explanation data, and performance implications (like the real-time generation of explanations for every user interaction) need careful planning to avoid latency.

Some Real-world Examples

UPS Capital’s DeliveryDefense

UPS uses AI to assign a “delivery confidence score” to addresses to predict the likelihood of a package being stolen. Their DeliveryDefense software analyzes historical data on location, loss frequency, and other factors. If an address has a low score, the system can proactively reroute the package to a secure UPS Access Point, providing an explanation for the decision (e.g., “Package rerouted to a secure location due to a history of theft”). This system demonstrates how XAI can be used for risk mitigation and building customer trust through transparency.

Autonomous Vehicles

These vehicles of the future will need to use XAI effectively to make safe, explainable decisions. When a self-driving car brakes suddenly, the system can provide a real-time explanation for its action, for example, by identifying a pedestrian stepping into the road. This is not only crucial for passenger comfort and trust but is also a regulatory requirement to prove the safety and accountability of the AI system.

IBM Watson Health (and its challenges)

While often cited as a general example of AI in healthcare, it’s also a valuable case study for the importance of XAI. The failure of its Watson for Oncology project highlights what can go wrong when explanations are not clear, or when the underlying data is biased or not localized. The system’s recommendations were sometimes inconsistent with local clinical practices because they were based on U.S.-centric guidelines. This serves as a cautionary tale on the need for robust, context-aware explainability.

The UX Researcher’s Role: Pinpointing And Validating Explanations

Our design solutions are only effective if they address the right user questions at the right time. An explanation that answers a question the user doesn’t have is just noise. This is where UX research becomes the critical connective tissue in an XAI strategy, ensuring that we explain the what and how that actually matters to our users. The researcher’s role is twofold: first, to inform the strategy by identifying where explanations are needed, and second, to validate the designs that deliver those explanations.

Informing the XAI Strategy (What to Explain)

Before we can design a single explanation, we must understand the user’s mental model of the AI system. What do they believe it’s doing? Where are the gaps between their understanding and the system’s reality? This is the foundational work of a UX researcher.

Mental Model Interviews: Unpacking User Perceptions Of AI Systems

Through deep, semi-structured interviews, UX practitioners can gain invaluable insights into how users perceive and understand AI systems. These sessions are designed to encourage users to literally draw or describe their internal “mental model” of how they believe the AI works. This often involves asking open-ended questions that prompt users to explain the system’s logic, its inputs, and its outputs, as well as the relationships between these elements.

These interviews are powerful because they frequently reveal profound misconceptions and assumptions that users hold about AI. For example, a user interacting with a recommendation engine might confidently assert that the system is based purely on their past viewing history. They might not realize that the algorithm also incorporates a multitude of other factors, such as the time of day they are browsing, the current trending items across the platform, or even the viewing habits of similar users.

Uncovering this gap between a user’s mental model and the actual underlying AI logic is critically important. It tells us precisely what specific information we need to communicate to users to help them build a more accurate and robust mental model of the system. This, in turn, is a fundamental step in fostering trust. When users understand, even at a high level, how an AI arrives at its conclusions or recommendations, they are more likely to trust its outputs and rely on its functionality.

AI Journey Mapping: A Deep Dive Into User Trust And Explainability

By meticulously mapping the user’s journey with an AI-powered feature, we gain invaluable insights into the precise moments where confusion, frustration, or even profound distrust emerge. This uncovers critical junctures where the user’s mental model of how the AI operates clashes with its actual behavior.

Consider a music streaming service: Does the user’s trust plummet when a playlist recommendation feels “random,” lacking any discernible connection to their past listening habits or stated preferences? This perceived randomness is a direct challenge to the user’s expectation of intelligent curation and a breach of the implicit promise that the AI understands their taste. Similarly, in a photo management application, do users experience significant frustration when an AI photo-tagging feature consistently misidentifies a cherished family member? This error is more than a technical glitch; it strikes at the heart of accuracy, personalization, and even emotional connection.

These pain points are vivid signals indicating precisely where a well-placed, clear, and concise explanation is necessary. Such explanations serve as crucial repair mechanisms, mending a breach of trust that, if left unaddressed, can lead to user abandonment.

The power of AI journey mapping lies in its ability to move us beyond simply explaining the final output of an AI system. While understanding what the AI produced is important, it’s often insufficient. Instead, this process compels us to focus on explaining the process at critical moments. This means addressing:

  • Why a particular output was generated: Was it due to specific input data? A particular model architecture?
  • What factors influenced the AI’s decision: Were certain features weighted more heavily?
  • How the AI arrived at its conclusion: Can we offer a simplified, analogous explanation of its internal workings?
  • What assumptions the AI made: Were there implicit understandings of the user’s intent or data that need to be surfaced?
  • What the limitations of the AI are: Clearly communicating what the AI cannot do, or where its accuracy might waver, builds realistic expectations.

AI journey mapping transforms the abstract concept of XAI into a practical, actionable framework for UX practitioners. It enables us to move beyond theoretical discussions of explainability and instead pinpoint the exact moments where user trust is at stake, providing the necessary insights to build AI experiences that are powerful, transparent, understandable, and trustworthy.

Ultimately, research is how we uncover the unknowns. Your team might be debating how to explain why a loan was denied, but research might reveal that users are far more concerned with understanding how their data was used in the first place. Without research, we are simply guessing what our users are wondering.

Collaborating On The Design (How to Explain Your AI)

Once research has identified what to explain, the collaborative loop with design begins. Designers can prototype the patterns we discussed earlier—the “Because” statement, the interactive sliders—and researchers can put those designs in front of users to see if they hold up.

Targeted Usability & Comprehension Testing: We can design research studies that specifically test the XAI components. We don’t just ask, “Is this easy to use?” We ask, “After seeing this, can you tell me in your own words why the system recommended this product?” or “Show me what you would do to see if you could get a different result.” The goal here is to measure comprehension and actionability, alongside usability.

Measuring Trust Itself: We can use simple surveys and rating scales before and after an explanation is shown. For instance, we can ask a user on a 5-point scale, “How much do you trust this recommendation?” before they see the “Because” statement, and then ask them again afterward. This provides quantitative data on whether our explanations are actually moving the needle on trust.

This process creates a powerful, iterative loop. Research findings inform the initial design. That design is then tested, and the new findings are fed back to the design team for refinement. Maybe the “Because” statement was too jargony, or the “What-If” slider was more confusing than empowering. Through this collaborative validation, we ensure that the final explanations are technically accurate, genuinely understandable, useful, and trust-building for the people using the product.

The Goldilocks Zone Of Explanation

A critical word of caution: it is possible to over-explain. As in the fairy tale, where Goldilocks sought the porridge that was ‘just right’, the goal of a good explanation is to provide the right amount of detail—not too much and not too little. Bombarding a user with every variable in a model will lead to cognitive overload and can actually decrease trust. The goal is not to make the user a data scientist.

One solution is progressive disclosure.

  1. Start with the simple. Lead with a concise “Because” statement. For most users, this will be enough.
  2. Offer a path to detail. Provide a clear, low-friction link like “Learn More” or “See how this was determined.”
  3. Reveal the complexity. Behind that link, you can offer the interactive sliders, the visualizations, or a more detailed list of contributing factors.

This layered approach respects user attention and expertise, providing just the right amount of information for their needs. Let’s imagine you’re using a smart home device that recommends optimal heating based on various factors (a minimal markup sketch follows this example).

Start with the simple: “Your home is currently heated to 72 degrees, which is the optimal temperature for energy savings and comfort.”

Offer a path to detail: Below that, a small link or button: “Why is 72 degrees optimal?"

Reveal the complexity: Clicking that link could open a new screen showing:

  • Interactive sliders for outside temperature, humidity, and your preferred comfort level, demonstrating how these adjust the recommended temperature.
  • A visualization of energy consumption at different temperatures.
  • A list of contributing factors like “Time of day,” “Current outside temperature,” “Historical energy usage,” and “Occupancy sensors.”
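On the web, the built-in <details> and <summary> elements are one low-effort way to prototype this kind of layered disclosure. A minimal sketch (the copy and factors are illustrative):

<p>Your home is heated to 72 degrees, the optimal temperature for energy savings and comfort.</p>

<details class="explanation">
  <summary>Why is 72 degrees optimal?</summary>
  <ul>
    <li>Current outside temperature and humidity</li>
    <li>Your preferred comfort level</li>
    <li>Historical energy usage and occupancy sensors</li>
  </ul>
  <!-- Interactive sliders or an energy chart could be revealed here as a deeper layer -->
</details>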

It’s effective to combine multiple XAI methods and this Goldilocks Zone of Explanation pattern, which advocates for progressive disclosure, implicitly encourages this. You might start with a simple “Because” statement (Pattern 1) for immediate comprehension, and then offer a “Learn More” link that reveals a “What-If” Interactive (Pattern 2) or a “Push-and-Pull Visual” (Pattern 4) for deeper exploration.

For instance, a loan application system could initially state the primary reason for denial (feature importance), then allow the user to interact with a “What-If” tool to see how changes to their income or debt would alter the outcome (counterfactuals), and finally, provide a detailed “Push-and-Pull” chart (value-based explanation) to illustrate the positive and negative contributions of all factors. This layered approach allows users to access the level of detail they need, when they need it, preventing cognitive overload while still providing comprehensive transparency.

Determining which XAI tools and methods to use is primarily a function of thorough UX research. Mental model interviews and AI journey mapping are crucial for pinpointing user needs and pain points related to AI understanding and trust. Mental model interviews help uncover user misconceptions about how the AI works, indicating areas where fundamental explanations (like feature importance or local explanations) are needed. AI journey mapping, on the other hand, identifies critical moments of confusion or distrust in the user’s interaction with the AI, signaling where more granular or interactive explanations (like counterfactuals or value-based explanations) would be most beneficial to rebuild trust and provide agency.

Ultimately, the best way to choose a technique is to let user research guide your decisions, ensuring that the explanations you design directly address actual user questions and concerns, rather than simply offering technical details for their own sake.

XAI for Deep Reasoning Agents

Some of the newest AI systems, known as deep reasoning agents, produce an explicit “chain of thought” for every complex task. They do not merely cite sources; they show the logical, step-by-step path they took to arrive at a conclusion. While this transparency provides valuable context, a play-by-play that spans several paragraphs can feel overwhelming to a user simply trying to complete a task.

The principles of XAI, especially the Goldilocks Zone of Explanation, apply directly here. We can curate the journey, using progressive disclosure to show only the final conclusion and the most salient step in the thought process first. Users can then opt in to see the full, detailed, multi-step reasoning when they need to double-check the logic or find a specific fact. This approach respects user attention while preserving the agent’s full transparency.

Next Steps: Empowering Your XAI Journey

Explainability is a fundamental pillar for building trustworthy and effective AI products. For the advanced practitioner looking to drive this change within their organization, the journey extends beyond design patterns into advocacy and continuous learning.

To deepen your understanding and practical application, consider exploring resources like the AI Explainability 360 (AIX360) toolkit from IBM Research or Google’s What-If Tool, which offer interactive ways to explore model behavior and explanations. Engaging with communities like the Responsible AI Forum or specific research groups focused on human-centered AI can provide invaluable insights and collaboration opportunities.

Finally, be an advocate for XAI within your own organization. Frame explainability as a strategic investment. Consider a brief pitch to your leadership or cross-functional teams:

“By investing in XAI, we’ll go beyond building trust; we’ll accelerate user adoption, reduce support costs by empowering users with understanding, and mitigate significant ethical and regulatory risks by exposing potential biases. This is good design and smart business.”

Your voice, grounded in practical understanding, is crucial in bringing AI out of the black box and into a collaborative partnership with users.

]]>
hello@smashingmagazine.com (Victor Yocco)
<![CDATA[Masonry: Things You Won’t Need A Library For Anymore]]> https://smashingmagazine.com/2025/12/masonry-things-you-wont-need-library-anymore/ https://smashingmagazine.com/2025/12/masonry-things-you-wont-need-library-anymore/ Tue, 02 Dec 2025 10:00:00 GMT About 15 years ago, I was working at a company where we built apps for travel agents, airport workers, and airline companies. We also built our own in-house framework for UI components and single-page app capabilities.

We had components for everything: fields, buttons, tabs, ranges, datatables, menus, datepickers, selects, and multiselects. We even had a div component. Our div component was great, by the way: it allowed us to do rounded corners in all browsers, which, believe it or not, wasn't an easy thing to do at the time.

Our work took place at a point in our history when JS, Ajax, and dynamic HTML were seen as a revolution that brought us into the future. Suddenly, we could update a page dynamically, get data from a server, and avoid having to navigate to other pages, which was seen as slow and flashed a big white rectangle on the screen between the two pages.

There was a phrase, made popular by Jeff Atwood (the founder of StackOverflow), which read:

“Any application that can be written in JavaScript will eventually be written in JavaScript.”

Jeff Atwood

To us at the time, this felt like a dare to actually go and create those apps. It felt like a blanket approval to do everything with JS.

So we did everything with JS, and we didn’t really take the time to research other ways of doing things. We didn’t really feel the incentive to properly learn what HTML and CSS could do. We didn’t really perceive the web as an evolving app platform in its entirety. We mostly saw it as something we needed to work around, especially when it came to browser support. We could just throw more JS at it to get things done.

Would taking the time to learn more about how the web worked and what was available on the platform have helped me? Sure, I could probably have shaved a bunch of code that wasn’t truly needed. But, at the time, maybe not that much.

You see, browser differences were pretty significant back then. This was a time when Internet Explorer was still the dominant browser, with Firefox a close second but starting to lose market share as Chrome rapidly gained popularity. Although Chrome and Firefox were quite good at agreeing on web standards, the environments in which our apps were running meant that we had to support IE6 for a long time. Even when we were allowed to support IE8, we still had to deal with a lot of differences between browsers. Not only that, but the web of the time just didn't have that many capabilities built right into the platform.

Fast forward to today. Things have changed tremendously. Not only do we have more of these capabilities than ever before, but the rate at which they become available has increased as well.

Let me ask the question again, then: Would taking the time to learn more about how the web works and what is available on the platform help you today? Absolutely yes. Learning to understand and use the web platform today puts you at a huge advantage over other developers.

Whether you work on performance, accessibility, responsiveness, all of them together, or just shipping UI features, if you want to do it as a responsible engineer, knowing the tools that are available to you helps you reach your goals faster and better.

Some Things You Might Not Need A Library For Anymore

Knowing what browsers support today, the question, then, is: What can we ditch? Do we need a div component to do rounded corners in 2025? Of course, we don’t. The border-radius property has been supported by all currently used browsers for more than 15 years at this point. And corner-shape is also coming soon, for even fancier corners.

Let’s take a look at relatively recent features that are now available in all major browsers, and which you can use to replace existing dependencies in your codebase.

The point isn't to immediately ditch all your beloved libraries and rewrite your codebase. As with everything else, you’ll need to take browser support into account first and decide based on other factors specific to your project. The following features are implemented in the three main browser engines (Chromium, WebKit, and Gecko), but you might have different browser support requirements that prevent you from using them right away. Now is still a good time to learn about these features, though, and perhaps plan to use them at some point.

Popovers And Dialogs

The Popover API, the <dialog> HTML element, and the ::backdrop pseudo-element can help you get rid of dependencies on popup, tooltip, and dialog libraries, such as Floating UI, Tippy.js, Tether, or React Tooltip.

They handle accessibility and focus management for you, out of the box, are highly customizable by using CSS, and can easily be animated.
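Here’s a minimal sketch of both primitives (the IDs and copy are illustrative). The popover needs no JavaScript at all, and the dialog only needs one call to open it:

<!-- A light-dismissable popover, no JavaScript required -->
<button popovertarget="hint">What does this mean?</button>
<div id="hint" popover>
  Popovers live in the top layer and dismiss on outside click or Esc.
</div>

<!-- A modal dialog with a styleable ::backdrop -->
<dialog id="confirm">
  <p>Delete this item?</p>
  <form method="dialog">
    <button>Cancel</button>
  </form>
</dialog>
<script>
  // showModal() traps focus and renders the ::backdrop pseudo-element
  document.getElementById("confirm").showModal();
</script>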

Accordions

The <details> element, its name attribute for mutually exclusive elements, and the ::details-content pseudo-element remove the need for accordion components like the Bootstrap Accordion or the React Accordion component.

Just using the platform here means it’s easier for folks who know HTML/CSS to understand your code without having to first learn to use a specific library. It also means you’re immune to breaking changes in the library or the discontinuation of that library. And, of course, it means less code to download and run. Mutually exclusive details elements don’t need JS to open, close, or animate.
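A minimal sketch of an exclusive accordion: giving several <details> elements the same name means opening one automatically closes the others, with zero JavaScript:

<details name="faq" open>
  <summary>What is your return policy?</summary>
  <p>30 days, no questions asked.</p>
</details>
<details name="faq">
  <summary>Do you ship internationally?</summary>
  <p>Yes, to most countries.</p>
</details>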

CSS Syntax

Cascade layers (for a more organized CSS codebase), CSS nesting (for more compact CSS), new color functions such as relative colors and color-mix(), and new math functions like abs(), sign(), and pow() all help reduce dependencies on CSS pre-processors, utility libraries like Bootstrap and Tailwind, or even runtime CSS-in-JS libraries.

The game changer :has(), one of the most requested features for a long time, removes the need for more complicated JS-based solutions.
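A small sketch of a few of these working together (the selectors and colors are illustrative):

@layer components {
  .card {
    --accent: oklch(65% 0.2 250);
    background: color-mix(in oklch, var(--accent) 15%, white);

    /* Nesting replaces a pre-processor for simple cases */
    & h2 {
      color: var(--accent);
    }

    /* Adjust the card when it contains an image, no JS class toggling needed */
    &:has(img) {
      padding: 0;
    }
  }
}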

JS Utilities

Modern Array methods like findLast() and at(), as well as Set methods like difference(), intersection(), union(), and others, can reduce dependencies on libraries like Lodash.
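For instance (a small sketch with made-up data):

const releases = [2021, 2022, 2023, 2024, 2025];
const latest = releases.at(-1);                         // 2025
const lastEven = releases.findLast((y) => y % 2 === 0); // 2024

const wanted = new Set(["grid", "subgrid", "masonry"]);
const shipped = new Set(["grid", "subgrid"]);
const missing = wanted.difference(shipped);             // Set { "masonry" }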

Container Queries

Container queries make UI components respond to things other than the viewport size, and therefore make them more reusable across different contexts.

No need to use a JS-heavy UI library for this anymore, and no need to use a polyfill either.
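A minimal sketch: the card switches to a two-column layout based on the width of its container rather than the viewport (class names are illustrative):

.card-wrapper {
  container-type: inline-size;
}

.card {
  display: grid;
  gap: 1rem;
}

@container (min-width: 400px) {
  .card {
    grid-template-columns: auto 1fr;
  }
}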

Layout

Grid, subgrid, flexbox, and multi-column have been around for a long time now, but looking at the results of the State of CSS surveys, it’s clear that developers tend to be very cautious about adopting new things, and wait a very long time before they do.

These features have been Baseline for a long time, and you could use them to get rid of dependencies on things like Bootstrap’s grid system, Foundation Framework’s flexbox utilities, the Bulma fixed grid, the Materialize grid, or Tailwind columns.

I’m not saying you should drop your framework. Your team adopted it for a reason, and removing it might be a big project. But looking at what the web platform can offer without a third-party wrapper on top comes with a lot of benefits.

Things You Might Not Need Anymore In The Near Future

Now, let’s take a quick look at some of the things you will not need a library for in the near future. That is to say, the things below are not quite ready for mass adoption, but being aware of them and planning for potential later use can be helpful.

Anchor Positioning

CSS anchor positioning handles the positioning of popovers and tooltips relative to other elements, and takes care of keeping them in view, even when moving, scrolling, or resizing the page.

This is a great complement to the Popover API mentioned before, which will make it even easier to migrate away from more performance-intensive JS solutions.
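The syntax is still settling, but a minimal sketch, assuming the property names currently implemented in Chromium, looks roughly like this:

.menu-button {
  anchor-name: --menu-button;
}

.menu {
  position: fixed;
  position-anchor: --menu-button;
  position-area: block-end; /* place the element just below its anchor */
}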

Navigation API

The Navigation API can be used to handle navigation in single-page apps and might be a great complement, or even a replacement, to React Router, Next.js routing, or Angular routing tasks.

View Transitions API

The View Transitions API can animate between the different states of a page. On a single-page application, this makes smooth transitions between states very easy, and can help you get rid of animation libraries such as Anime.js, GSAP, or Motion.dev.

Even better, the API can also be used with multiple-page applications.
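A minimal sketch of both flavors (the element names are illustrative): the same-document API wraps a DOM update, and the cross-document variant is opted into from CSS:

<style>
  /* Opt a multi-page site into cross-document view transitions */
  @view-transition {
    navigation: auto;
  }
</style>

<script>
  // Same-document: the browser animates between the old and new states.
  // “list” and “newItems” stand in for your own DOM update.
  document.startViewTransition(() => {
    list.replaceChildren(...newItems);
  });
</script>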

Remember earlier, when I said that the reason we built single-page apps at the company where I worked 15 years ago was to avoid the white flash of page reloads when navigating? Had that API been available at the time, we would have been able to achieve beautiful page transition effects without a single-page framework and without a huge initial download of the entire app.

Scroll-driven Animations

Scroll-driven animations run on the user’s scroll position, rather than over time, making them a great solution for storytelling and product tours.

Some people have gone a bit over the top with it, but when used well, this can be a very effective design tool, and it can help get rid of libraries like ScrollReveal, GSAP Scroll, or WOW.js.
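A minimal sketch of a reveal effect driven by the element’s own position in the viewport (the keyframes and class name are illustrative):

@keyframes fade-in {
  from {
    opacity: 0;
    transform: translateY(2rem);
  }
}

.reveal {
  animation: fade-in linear both;
  /* Progress the animation as the element enters the viewport, not over time */
  animation-timeline: view();
  animation-range: entry 0% entry 80%;
}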

Customizable Selects

A customizable select is a normal <select> element that lets you fully customize its appearance and content, while ensuring accessibility and performance benefits.

This has been a long time coming, and a highly requested feature, and it’s amazing to see it come to the web platform soon. With a built-in customizable select, you can finally ditch all this hard-to-maintain JS code for your custom select components.

CSS Masonry

CSS Masonry is another upcoming web platform feature that I want to spend more time on.

With CSS Masonry, you can achieve layouts that are very hard, or even impossible, with flex, grid, or other built-in CSS layout primitives. Developers often resort to using third-party libraries to achieve Masonry layouts, such as the Masonry JS library.

But, more on that later. Let’s wrap this point up before moving on to Masonry.

Why You Should Care

The job market is full of web developers with experience in JavaScript and the latest frameworks of the day. So, really, what’s the point in learning to use the web platform primitives more, if you can do the same things with the libraries, utilities, and frameworks you already know today?

When an entire industry relies on these frameworks, and you can just pull in the right library, shouldn’t browser vendors just work with these libraries to make them load and run faster, rather than trying to convince developers to use the platform instead?

First of all, we do work with library authors, and we do make frameworks better by learning about what they use and improving those areas.

But secondly, “just using the platform” can bring pretty significant benefits.

Sending Less Code To Devices

The main benefit is that you end up sending far less code to your clients’ devices.

According to the 2024 Web Almanac, pages make around 70 HTTP requests on average, and JavaScript accounts for the largest share of them: the median number of JS requests per page is 23, up 8% since 2022. In 2024, JS also overtook images as the dominant file type.

And page size continues to grow year over year. The median page weight is around 2MB now, which is 1.8MB more than it was 10 years ago.

Sure, your internet connection speed has probably increased, too, but that’s not the case for everyone. And not everyone has the same device capabilities either.

Pulling in third-party code for things you could do with the platform instead most probably means you ship more code, and therefore reach fewer customers than you otherwise would. On the web, bad loading performance leads to large abandonment rates and hurts brand reputation.

Running Less Code On Devices

Furthermore, the code you do ship on your customers’ devices likely runs faster if it uses fewer JavaScript abstractions on top of the platform. It’s also probably more responsive and more accessible by default. All of this leads to more and happier customers.

Check my colleague Alex Russell’s yearly performance inequality gap blog, which shows that premium devices are largely absent from markets with billions of users due to wealth inequality. And this gap is only growing over time.

Built-in Masonry Layout

One web platform feature that’s coming soon and which I’m very excited about is CSS Masonry.

Let me start by explaining what Masonry is.

What Is Masonry

Masonry is a type of layout that was made popular by Pinterest years ago. It creates independent tracks of content within which items pack themselves up as close to the start of the track as they can.

Many people see Masonry as a great option for portfolios and photo galleries, which it certainly can do. But Masonry is more flexible than what you see on Pinterest, and it’s not limited to just waterfall-like layouts.

In a Masonry layout:

  • Tracks can be columns or rows.
  • Tracks of content don’t all have to be the same size.
  • Items can span multiple tracks.
  • Items can be placed on specific tracks; they don’t have to always follow the automatic placement algorithm.

Demos

Here are a few simple demos I made by using the upcoming implementation of CSS Masonry in Chromium.

A photo gallery demo, showing how items (the title in this case) can span multiple tracks:

Another photo gallery showing tracks of different sizes:

A news site layout with some tracks wider than others, and some items spanning the entire width of the layout:

A kanban board showing that items can be placed onto specific tracks:

Note: The previous demos were made with a version of Chromium that’s not yet available to most web users, because CSS Masonry is only just starting to be implemented in browsers.

However, web developers have been happily using libraries to create Masonry layouts for years already.

Sites Using Masonry Today

Indeed, Masonry is pretty common on the web today. Here are a few examples I found besides Pinterest:

And a few more, less obvious, examples:

So, how were these layouts created?

Workarounds

One trick I’ve seen is to use a Flexbox layout instead, changing its direction to column and setting it to wrap.

This way, you can place items of different heights in multiple, independent columns, giving the impression of a Masonry layout.
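A minimal sketch of this workaround (the fixed height is the catch, as noted in the limitations below):

.pseudo-masonry {
  display: flex;
  flex-direction: column;
  flex-wrap: wrap;
  gap: 1rem;
  height: 80vh; /* wrapping only happens because the height is constrained */
}

.pseudo-masonry > * {
  width: 23%;
}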

There are, however, two limitations with this workaround:

  1. The order of items is different from what it would be with a real Masonry layout. With Flexbox, items fill the first column first and, when it’s full, then go to the next column. With Masonry, items would stack in whichever track (or column in this case) has more space available.
  2. Perhaps more importantly, this workaround requires that you set a fixed height on the Flexbox container; otherwise, no wrapping occurs.

Third-party Masonry Libraries

For more advanced cases, developers have been using libraries.

The most well-known and popular library for this is simply called Masonry, and it gets downloaded about 200,000 times per week according to NPM.

Squarespace also provides a layout component that renders a Masonry layout, for a no-code alternative, and many sites use it.

Both of these options use JavaScript code to place items in the layout.

Built-in Masonry

I’m really excited that Masonry is now starting to appear in browsers as a built-in CSS feature. Over time, you will be able to use Masonry just like you do Grid or Flexbox, that is, without needing any workarounds or third-party code.

My team at Microsoft has been implementing built-in Masonry support in the Chromium open source project, which Edge, Chrome, and many other browsers are based on. Mozilla was actually the first browser vendor to propose an experimental implementation of Masonry back in 2020. And Apple has also been very interested in making this new web layout primitive happen.

The work to standardize the feature is also moving ahead, with agreement within the CSS working group about the general direction and even a new display type display: grid-lanes.

If you want to learn more about Masonry and track progress, check out my CSS Masonry resources page.

In time, when Masonry becomes a Baseline feature, just like Grid or Flexbox, we’ll be able to simply use it and benefit from:

  • Better performance,
  • Better responsiveness,
  • Ease of use and simpler code.

Let’s take a closer look at these.

Better Performance

Making your own Masonry-like layout system, or using a third-party library instead, means you’ll have to run JavaScript code to place items on the screen. This also means that this code will be render blocking. Indeed, either nothing will appear, or things won’t be in the right places or of the right sizes, until that JavaScript code has run.

Masonry layout is often used for the main part of a web page, which means this code makes your main content appear later than it otherwise could, degrading your LCP, or Largest Contentful Paint, metric, which plays a big role in perceived performance and search engine optimization.

I tested the Masonry JS library with a simple layout and by simulating a slow 4G connection in DevTools. The library is not very big (24KB, 7.8KB gzipped), but it took 600ms to load under my test conditions.

Here is a performance recording showing that long 600ms load time for the Masonry library, and that no other rendering activity happened while that was happening:

In addition, after the initial load time, the downloaded script then needed to be parsed, compiled, and then run. All of which, as mentioned before, was blocking the rendering of the page.

With a built-in Masonry implementation in the browser, we won’t have a script to load and run. The browser engine will just do its thing during the initial page rendering step.

Better Responsiveness

Similar to when a page first loads, resizing the browser window leads to rendering the layout in that page again. At this point, though, if the page is using the Masonry JS library, there’s no need to load the script again, because it’s already here. However, the code that moves items in the right places needs to run.

Now this particular library seems to be pretty fast at doing this when the page loads. However, it animates the items when they need to move to a different place on window resize, and this makes a big difference.

Of course, users don’t spend time resizing their browser windows as much as we developers do. But this animated resizing experience can be pretty jarring and adds to the perceived time it takes for the page to adapt to its new size.

Ease Of Use And Simpler Code

How easy it is to use a web feature and how simple the code looks are important factors that can make a big difference for your team. They can’t ever be as important as the final user experience, of course, but developer experience impacts maintainability. Using a built-in web feature comes with important benefits on that front:

  • Developers who already know HTML, CSS, and JS will most likely be able to use that feature easily because it’s been designed to integrate well and be consistent with the rest of the web platform.
  • There’s no risk of breaking changes being introduced in how the feature is used.
  • There’s almost zero risk of that feature becoming deprecated or unmaintained.

In the case of built-in Masonry, because it’s a layout primitive, you use it from CSS, just like Grid or Flexbox, no JS involved. Also, other layout-related CSS properties, such as gap, work as you’d expect them to. There are no tricks or workarounds to know about, and the things you do learn are documented on MDN.

For the Masonry JS library, initialization is a bit complex: it requires a data attribute with a specific syntax, along with hidden HTML elements to set the column and gap sizes.

Plus, if you want to span columns, you need to include the gap size in the item’s width yourself to avoid layout problems:

<script src="https://unpkg.com/masonry-layout@4.2.2/dist/masonry.pkgd.min.js"></script>
<style>
  .track-sizer,
  .item {
    width: 20%;
  }
  .gutter-sizer {
    width: 1rem;
  }
  .item {
    height: 100px;
    margin-block-end: 1rem;
  }
  .item:nth-child(odd) {
    height: 200px;
  }
  .item--width2 {
    width: calc(40% + 1rem);
  }
</style>

<div class="container"
  data-masonry='{ "itemSelector": ".item", "columnWidth": ".track-sizer", "percentPosition": true, "gutter": ".gutter-sizer" }'>
  <div class="track-sizer"></div>
  <div class="gutter-sizer"></div>
  <div class="item"></div>
  <div class="item item--width2"></div>
  <div class="item"></div>
  ...
</div>

Let’s compare this to what a built-in Masonry implementation would look like:

<style>
  .container {
    display: grid-lanes;
    grid-lanes: repeat(4, 20%);
    gap: 1rem;
  }
  .item {
    height: 100px;
  }
  .item:nth-child(odd) {
    height: 200px;
  }
  .item--width2 {
    grid-column: span 2;
  }
</style>

<div class="container">
  <div class="item"></div>
  <div class="item item--width2"></div>
  <div class="item"></div>
  ...
</div>

The result is simpler, more compact code: gap works out of the box, spanning tracks is done with span 2, just like in Grid, and you don’t need to calculate a width that accounts for the gap size.

How To Know What’s Available And When?

Overall, the question isn’t really whether you should use built-in Masonry over a JS library, but rather when. The Masonry JS library is amazing and has been filling a gap in the web platform for many years, and for many happy developers and users. It has a few drawbacks if you compare it to a built-in Masonry implementation, of course, but those don’t matter if that implementation isn’t ready yet.

It’s easy for me to list these cool new web platform features because I work at a browser vendor, and I therefore tend to know what’s coming. But developers often share, survey after survey, that keeping track of new things is hard. Staying informed is difficult, and companies don’t always prioritize learning anyway.

To help with this, here are a few resources that provide updates in simple and compact ways so you can get the information you need quickly:

If you have a bit more time, you might also be interested in browser vendors’ release notes:

For even more resources, check out my Navigating the Web Platform Cheatsheet.

My Thing Is Still Not Implemented

That’s the other side of the problem. Even if you do find the time, energy, and ways to keep track, there’s still frustration with getting your voice heard and your favorite features implemented.

Maybe you’ve been waiting for years for a specific bug to be resolved, or a specific feature to ship in a browser where it’s still missing.

What I’ll say is that browser vendors do listen. I’m part of several cross-organization teams where we discuss developer signals and feedback all the time. We look at many different sources of feedback, both internal at each browser vendor and external/public on forums, open source projects, blogs, and surveys. And we’re always trying to create better ways for developers to share their specific needs and use cases.

So, if you can, please demand more from browser vendors and pressure us to implement the features you need. I get that it takes time and can be intimidating (the barrier to entry is high), but it also works.

Here are a few ways you can get your (or your company’s) voice heard:

  • Take the annual State of JS, State of CSS, and State of HTML surveys. They play a big role in how browser vendors prioritize their work.
  • If you need a specific standards-based API to be implemented consistently across browsers, consider submitting a proposal for the next Interop project iteration. It requires more time, but consider how Shopify and RUMvision shared their wish lists for Interop 2026. Detailed information like this can be very useful for browser vendors to prioritize.

For more useful links to influence browser vendors, check out my Navigating the Web Platform Cheatsheet.

Conclusion

To close, I hope this article has left you with a few things to think about:

  • Excitement for Masonry and other upcoming web features.
  • A few web features you might want to start using.
  • A few pieces of custom or third-party code you might be able to remove in favor of built-in features.
  • A few ways to keep track of what’s coming and influence browser vendors.

More importantly, I hope I’ve convinced you of the benefits of using the web platform to its full potential.

]]>
hello@smashingmagazine.com (Patrick Brosset)
<![CDATA[A Sparkle Of December Magic (2025 Wallpapers Edition)]]> https://smashingmagazine.com/2025/11/desktop-wallpaper-calendars-december-2025/ https://smashingmagazine.com/2025/11/desktop-wallpaper-calendars-december-2025/ Sun, 30 Nov 2025 09:00:00 GMT As the year winds down, many of us are busy wrapping up projects, meeting deadlines, or getting ready for the holiday season. Why not take a moment amid the end-of-year hustle to set the mood for December with some wintery desktop wallpapers? They might just bring a sparkle of inspiration to your workspace in these busy weeks.

To provide you with unique and inspiring wallpaper designs each month anew, we started our monthly wallpapers series more than 14 years ago. It’s the perfect opportunity both to put your creative skills to the test and to find just the right wallpaper to accompany you through the new month. This December is no exception, of course, so following our cozy little tradition, we have a new collection of wallpapers waiting for you below. Each design has been created with love by artists and designers from across the globe and comes in a variety of screen resolutions.

A huge thank-you to everyone who tickled their creativity and shared their wallpapers with us this time around! This post wouldn’t exist without your kind support. ❤️ Happy December!

  • You can click on every image to see a larger preview.
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experiences through their works. This is also why the themes of the wallpapers weren’t influenced by us in any way but rather designed from scratch by the artists themselves.
  • Submit your wallpaper design! 🎨
    We are always looking for creative talent and would love to feature your desktop wallpaper in one of our upcoming posts. Join in ↬

Zero-Gravity December

“Floating in space, decorating the Christmas tree, unbothered by the familiar weight of New Year’s resolutions waiting back on Earth every December.” — Designed by Ginger It Solutions from Serbia.

A Quiet December Walk

“In the stillness of a snowy forest, a man and his loyal dog share a peaceful winter walk. The world is hushed beneath a blanket of white, with only soft flakes falling and the crunch of footsteps breaking the silence. It’s a simple, serene moment that captures the calm beauty of December and the quiet joy of companionship in nature’s winter glow.” — Designed by PopArt Studio from Serbia.

Quoted Rudolph

Designed by Ricardo Gimenes from Spain.

Learning Is An Art

“The year is coming to an end. A year full of adventures, projects, unforgettable moments, and others that will fade into oblivion. And it’s this month that we start preparing for next year, organizing it and hoping it will be at least as good as the last, and that it will give us 365 days to savor from the first to the last. This month we share Katherine Johnson and some wise words that we shouldn’t forget: ‘I like to learn. It’s an art and a science.’” — Designed by Veronica Valenzuela Jimenez from Spain.

Chilly Dog, Warm Troubles

Designed by Ricardo Gimenes from Spain.

Modern Christmas Magic

“A fusion of modern Christmas aesthetics and a user-centric mobile app development company, crafting delightful holiday-inspired digital experiences.” — Designed by the Zco Corporation Design Team from the United States.

Dear Moon, Merry Christmas

Designed by Vlad Gerasimov from Georgia.

It’s In The Little Things

Designed by Thaïs Lenglez from Belgium.

The House On The River Drina

“Since we often yearn for a peaceful and quiet place to work, we have found inspiration in the famous house on the River Drina in Bajina Bašta, Serbia. Wouldn’t it be great being in nature, away from civilization, swaying in the wind and listening to the waves of the river smashing your house, having no neighbors to bother you? Not sure about the Internet, though…” — Designed by PopArt Studio from Serbia.

Christmas Cookies

“Christmas is coming and a great way to share our love is by baking cookies.” — Designed by Maria Keller from Mexico.

Sweet Snowy Tenderness

“You know that warm feeling when you get to spend cold winter days in a snug, homey, relaxed atmosphere? Oh, yes, we love it, too! It is the sentiment we set our hearts on for the holiday season, and this sweet snowy tenderness is for all of us who adore watching the snowfall from our windows. Isn’t it romantic?” — Designed by PopArt Studio from Serbia.

Anonymoose

Designed by Ricardo Gimenes from Spain.

Cardinals In Snowfall

“During Christmas season, in the cold, colorless days of winter, Cardinal birds are seen as symbols of faith and warmth. In the part of America I live in, there is snowfall every December. While the snow is falling, I can see gorgeous Cardinals flying in and out of my patio. The intriguing color palette of the bright red of the Cardinals, the white of the flurries, and the brown/black of dry twigs and fallen leaves on the snow-laden ground fascinates me a lot, and inspired me to create this quaint and sweet, hand-illustrated surface pattern design as I wait for the snowfall in my town!” — Designed by Gyaneshwari Dave from the United States.

Getting Hygge

“There’s no more special time for a fire than in the winter. Cozy blankets, warm beverages, and good company can make all the difference when the sun goes down. We’re all looking forward to generating some hygge this winter, so snuggle up and make some memories.” — Designed by The Hannon Group from Washington D.C.

Christmas Woodland

Designed by Mel Armstrong from Australia.

Joy To The World

“Joy to the world, all the boys and girls now, joy to the fishes in the deep blue sea, joy to you and me.” — Designed by Morgan Newnham from Boulder, Colorado.

Gifts Lover

Designed by Elise Vanoorbeek from Belgium.

King Of Pop

Designed by Ricardo Gimenes from Spain.

The Matterhorn

“Christmas is always such a magical time of year so we created this wallpaper to blend the majesty of the mountains with a little bit of magic.” — Designed by Dominic Leonard from the United Kingdom.

Ninja Santa

Designed by Elise Vanoorbeek from Belgium.

Ice Flowers

“I took some photos during a very frosty and cold week before Christmas.” Designed by Anca Varsandan from Romania.

Christmas Selfie

Designed by Emanuela Carta from Italy.

Winter Wonderland

“‘Winter is the time for comfort, for good food and warmth, for the touch of a friendly hand and for a talk beside the fire: it is the time for home.’ (Edith Sitwell)” — Designed by Dipanjan Karmakar from India.

Winter Coziness At Home

“Winter coziness that we all feel when we come home after spending some time outside or when we come to our parental home to celebrate Christmas inspired our designers. Home is the place where we can feel safe and sound, so we couldn’t help ourselves but create this calendar.” — Designed by MasterBundles from Ukraine.

Enchanted Blizzard

“A seemingly forgotten world under the shade of winter glaze hides a moment where architecture meets fashion and change encounters steadiness.” — Designed by Ana Masnikosa from Belgrade, Serbia.

All That Belongs To The Past

“Sometimes new beginnings make us revisit our favorite places or people from the past. We don’t visit them often because they remind us of the past but enjoy the brief reunion. Cheers to new beginnings in the new year!” Designed by Dorvan Davoudi from Canada.

December Through Different Eyes

“As a Belgian, December reminds me of snow, coziness, winter, lights, and so on. However, in the Southern hemisphere, it is summer at this time. With my illustration I wanted to show the different perspectives on December. I wish you all a Merry Christmas and Happy New Year!” — Designed by Jo Smets from Belgium.

Silver Winter

Designed by Violeta Dabija from Moldova.

Cozy

“December is all about coziness and warmth. Days are getting darker, shorter, and colder. So a nice cup of hot cocoa just warms me up.” — Designed by Hazuki Sato from Belgium.

Tongue Stuck On Lamppost

Designed by Josh Cleland from the United States.

On To The Next One

“Endings intertwined with new beginnings, challenges we rose to and the ones we weren’t up to, dreams fulfilled and opportunities missed. The year we say goodbye to leaves a bitter-sweet taste, but we’re thankful for the lessons, friendships, and experiences it gave us. We look forward to seeing what the new year has in store, but, whatever comes, we will welcome it with a smile, vigor, and zeal.” — Designed by PopArt Studio from Serbia.

Christmas Owl

“Christmas waves a magic wand over this world, and behold, everything is softer and more beautiful.” — Designed by Suman Sil from India.

Catch Your Perfect Snowflake

“This time of year, people tend to dream big and expect miracles. Let your dreams come true!” Designed by Igor Izhik from Canada.

Winter Garphee

“Garphee’s fluffiness glowing in the snow.” Designed by Razvan Garofeanu from Romania.

Trailer Santa

“A mid-century modern Christmas scene outside the norm of snowflakes and winter landscapes.” Designed by Houndstooth from the United States.

Winter Solstice

“In December there’s a winter solstice, which means that the longest night of the year falls in December. I wanted to capture the feeling of solitude of the long night in this wallpaper.” — Designed by Alex Hermans from Belgium.

Christmas Time

Designed by Sofie Keirsmaekers from Belgium.

Happy Holidays

Designed by Ricardo Gimenes from Spain.

Get Featured Next Month

Feeling inspired? We’ll publish the January wallpapers on December 31, so if you’d like to be a part of the collection, please don’t hesitate to submit your design. We are already looking forward to it!

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[The Accessibility Problem With Authentication Methods Like CAPTCHA]]> https://smashingmagazine.com/2025/11/accessibility-problem-authentication-methods-captcha/ https://smashingmagazine.com/2025/11/accessibility-problem-authentication-methods-captcha/ Thu, 27 Nov 2025 10:00:00 GMT The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) has become ingrained in internet browsing since personal computers gained momentum in the consumer electronics market. For nearly as long as people have been going online, web developers have sought ways to block spam bots.

The CAPTCHA service distinguishes between human and bot activity to keep bots out. Unfortunately, its methods are less than precise. In trying to protect humans, developers have made much of the web inaccessible to people with disabilities.

Authentication methods, such as CAPTCHA, typically use image classification, puzzles, audio samples, or click-based tests to determine whether the user is human. While the types of challenges are well-documented, their logic is not public knowledge. People can only guess what it takes to “prove” they are human.

What Is CAPTCHA?

A CAPTCHA is a reverse Turing test that takes the form of a challenge-response test. For example, if it instructs users to “select all images with stairs,” they must pick the stairs out from railings, driveways, and crosswalks. Alternatively, they may be asked to enter the text they see, add the sum of dice faces, or complete a sliding puzzle.

Image-based CAPTCHAs are responsible for the most frustrating shared experiences internet users have — deciding whether to select a square when only a small sliver of the object in question is in it.

Regardless of the method, a computer or algorithm ultimately determines whether the test-taker is human or machine. This authentication service has spawned many offshoots, including reCAPTCHA and hCaptcha. It has even led to the creation of entire companies, such as GeeTest and Arkose Labs. The Google-owned automated system reCAPTCHA requires users to click a checkbox labeled “I’m not a robot” for authentication. It runs an adaptive analysis in the background to assign a risk score. hCaptcha is an image-classification-based alternative.

Other authentication methods include multi-factor authentication (MFA), QR codes, temporary personal identification numbers (PINs), and biometrics. They do not follow the challenge-response formula, but serve fundamentally similar purposes.

These offshoots are intended to be better than the original, but they often fail to meet modern accessibility standards. Take hCaptcha, for instance, which uses a cookie to let you bypass the challenge-response test entirely. It’s a great idea in theory, but it doesn’t work in practice.

You’re supposed to receive a one-time code via email that you send to a specific number over SMS. Users report receiving endless error messages, forcing them to complete the standard text CAPTCHA. Even then, this bypass is only available if the site explicitly enabled it during configuration. If it is not set up, you must complete an image challenge that does not work with screen readers.

Even when the initial process works, subsequent authentication relies on a third-party cross-site cookie, which most browsers block automatically. Also, the code expires after a short period, so you have to redo the entire process if it takes you too long to move on to the next step.

Why Do Teams Use CAPTCHA And Similar Authentication Methods?

CAPTCHA is common because it is easy to set up. Developers can program it to appear, and it conducts the test automatically. This way, they can focus on more important matters while still preventing spam, fraud, and abuse. These tools are supposed to make it easier for humans to use the internet safely, but they often keep real people from logging in.

These tests result in a poor user experience overall. One study found users wasted over 819 million hours on over 512 billion reCAPTCHA v2 sessions as of 2023. Despite it all, bots prevail. Machine learning models can solve text-based CAPTCHA within fractions of a second with over 97% accuracy.

A 2024 study on Google’s reCAPTCHA v2 — which is still widely used despite the rollout of reCAPTCHA v3 — found bots can solve image classification CAPTCHA with up to 100% accuracy, depending on the object they are tasked with identifying. The researchers used a free, open-source model, which means that bad actors could easily replicate their work.

Why Should Web Developers Stop Using CAPTCHA?

Authentication methods like CAPTCHA have an accessibility problem. Machine learning advances have forced these services to grow increasingly complex. Even so, they are not foolproof. Bots get it right more than people do. Research shows they can complete reCAPTCHA within 17.5 seconds, achieving 85% accuracy. Humans take longer and are less accurate.

Many people fail CAPTCHA tests and have no idea what they did wrong. For example, a prompt instructing users to “select all squares with traffic lights” seems simple enough, but it gets complicated if a sliver of the pole is in another square. Should they select that box, or is that what an algorithm would do?

Although bot capabilities have grown by orders of magnitude, humans have remained the same. As tests get progressively more difficult, people feel less inclined to attempt them. One survey shows nearly 59% of people will stop using a product after several bad experiences. If authentication is too cumbersome or complex, they might stop using the website entirely.

People can fail these tests for various reasons, including technical ones. If they block third-party cookies, have a local proxy running, or have not updated their browser in a while, they may keep failing, regardless of how many times they try.

Authentication Issues With CAPTCHA

Due to the reasons mentioned above, most types of CAPTCHA are inherently inaccessible. This is especially true for people with disabilities, as these challenge-response tests were not designed with their needs in mind. Some of the common issues include the following:

Issues Related To Visuals And Screen Reader Use

Screen readers cannot read standard visual CAPTCHAs, such as the distorted text test, since the jumbled, contorted words are not machine-readable. The image classification and sliding puzzle methods are similarly inaccessible.

In one WebAIM survey conducted from 2023 to 2024, screen reader users agreed CAPTCHA was the most problematic item, ranking it above ambiguous links, unexpected screen changes, missing alt text, inaccessible search, and lack of keyboard accessibility. Its spot at the top has remained largely unchanged for over a decade, illustrating its history of inaccessibility.

Issues Related To Hearing And Audio Processing

Audio CAPTCHAs are relatively uncommon because web development best practices advise against autoplay audio and emphasize the importance of user controls. However, audio CAPTCHAs still exist. People who are hard of hearing or deaf may encounter a barrier when attempting these tests. Even with assistive technology, the intentional audio distortion and background noise make these samples challenging for individuals with auditory processing disorders to comprehend.

Issues Related To Motor And Dexterity

Tests requiring motor and dexterity skills can be challenging for those with motor impairments or physical disabilities. For example, someone with a hand tremor may find the sliding puzzles difficult. Image classification tests that keep loading new images until none matching the criteria remain can also pose a challenge.

Issues Related To Cognition And Language

As CAPTCHAs become increasingly complex, some developers are turning to tests that require a combination of creative and critical thinking. Those that require users to solve a math problem or complete a puzzle can be challenging for people with dyslexia, dyscalculia, visual processing disorders, or cognitive impairments.

Why Assistive Technology Won’t Bridge The Gap

CAPTCHAs are intentionally designed for humans to interpret and solve, so assistive technology like screen readers and hands-free controls may be of little help. ReCAPTCHA in particular poses an issue because it analyzes background activity. If it flags assistive devices as bots, it will serve a potentially inaccessible CAPTCHA.

Even if this technology could bridge the gap, web developers shouldn’t expect it to. Industry standards dictate that they should follow universal design principles to make their websites as accessible and functional as possible.

CAPTCHA’s accessibility issues could be forgiven if it were an effective security tool, but it is far from foolproof since bots get it right more than humans do. Why keep using a method that is ineffective and creates barriers for people with disabilities?

There are better alternatives.

Principles For Accessible Authentication

The idea that humans should consistently outperform algorithms is outdated. Better authentication methods exist, such as multi-factor authentication (MFA). The two-factor authentication market will be worth an estimated $26.7 billion by 2027, underscoring its popularity. This tool is more effective than a CAPTCHA because it prevents unauthorized access, even when an attacker has legitimate credentials.

Ensure your MFA technique is accessible. Instead of having website visitors transcribe complex codes, you should send push notifications or SMS messages. Rely on the verification code autofill to automatically capture and enter the code. Alternatively, you can introduce a “remember this device” feature to skip authentication on trusted devices.

Apple’s two-factor authentication approach is designed this way. A trusted device automatically displays a six-digit verification code, so users do not have to search for it. When prompted, iPhone users can tap the suggestion that appears above their mobile keyboard for autofill.

Single sign-on is another option. This session and user authentication service allows people to log in to multiple websites or applications with a single set of login credentials, minimizing the need for repeated identity verification.

One-time-use “magic links” are an excellent alternative to reCAPTCHA and temporary PINs. Rather than remembering a code or solving a puzzle, the user clicks on a button. Avoid imposing deadlines because, according to WCAG Success Criterion 2.2.3, users should not face time limits since those with disabilities may need more time to complete specific actions.

Alternatively, you could use Cloudflare Turnstile. It authenticates without showing a CAPTCHA, and most people never even have to check a box or hit a button. The software works by issuing a small JavaScript challenge behind the scenes to automatically differentiate between bots and humans. Cloudflare Turnstile can be embedded into any website, making it an excellent alternative to standard classification tasks.

Testing And Evaluation Of Accessible Authentication Designs

Testing and evaluating your accessible alternative authentication methods is essential. Many designs look good on paper but do not work in practice. If possible, gather feedback from actual users. An open beta may be an effective way to maximize visibility.

Remember, general accessibility considerations do not only apply to people with disabilities. They also include those who are neurodivergent, lack access to a mobile device, or use assistive technology. Ensure your alternative designs consider these individuals.

Realistically, you cannot create a perfect system since everyone is unique. Many people struggle to follow multistep processes, solve equations, process complex instructions, or remember passcodes. While universal web design principles can improve flexibility, no single solution can meet everyone’s needs.

Regardless of the authentication technique you use, you should present users with multiple authentication options upfront. They know their capabilities best, so let them decide what to use instead of trying to over-engineer a solution that works for every edge case.

Address The Accessibility Problem With Design Changes

A person with hand tremors may be unable to complete a sliding puzzle, while someone with an audio processing disorder may have trouble with distorted audio samples. However, you cannot simply replace CAPTCHAs with alternatives because they are often equally inaccessible.

QR codes, for example, may be difficult to scan for those with reduced fine motor control. People who are visually impaired may struggle to find them on the screen. Similarly, biometrics can pose an issue for people with facial deformities or a limited range of motion. Addressing the accessibility problem requires creative thinking.

You can start by visiting the Web Accessibility Initiative’s accessibility tutorials for developers to better understand universal design. Although these tutorials focus more on content than authentication, you can still use them to your advantage. The W3C Group Draft Note on the Inaccessibility of CAPTCHA provides more relevant guidance.

Getting started is as easy as researching best practices. Understanding the basics is essential because there is no universal solution for accessible web design. If you want to optimize accessibility, consider sourcing feedback from the people who actually visit your website.

Further Reading

]]>
hello@smashingmagazine.com (Eleanor Hecks)
<![CDATA[Design System Culture: What It Is And Why It Matters (Excerpt)]]> https://smashingmagazine.com/2025/11/design-system-culture/ https://smashingmagazine.com/2025/11/design-system-culture/ Tue, 25 Nov 2025 18:00:00 GMT This article is sponsored by Maturing Design Systems.

Design systems have become an integral part of our everyday work, so much so that the successful growth and maturation of a design system can make or break a product or project. Great tokens, components, and organization aren’t enough — it is most often the culture and curation that creates a sustainable, widely adopted system. It can be hard to determine where to invest our time and attention. How do we build and maintain design systems that support our teams, enhance our work, and grow along with us?

Excerpt: Design System Culture

Culture is a funny thing. We all have some intuition about how important it is—at least we know we want to work in a great culture and avoid the toxic ones. But culture is notoriously difficult to define, and changing it can feel more like magic than reality. One company culture can be inspiring for some and boring for others, a place of growth for some and stifling for others.

Adding to the nuance, not only does your company have a culture as a whole, but it has many subcultures. That’s because culture is not created by any individual. Culture is something that happens when the same group of people gather together repeatedly over time. So, as a company grows, adding hierarchy and structure, the teams formed around specific goals, products, features, disciplines, and so on, all develop their own subcultures.

You probably have a design subculture. You probably have a product ownership subculture. You probably even have a subculture forming around those folks who get on a Zoom call every Tuesday at lunch to knit and chat. There are hundreds or more subcultures at most good-sized organizations. It’s complicated, nuanced, and immensely important.

When an individual is struggling with the way they are managed, one culture enables them to offer authentic feedback to their boss, while another leads them to look for a new job. When a company provides free lunch on Fridays, one culture creates a sense of gratitude for this benefit; another makes you feel like this free lunch comes with the expectation that you can’t ever leave work. One culture prioritizes financial results over respectful interactions. One culture encourages competition between teams, while another emphasizes collaboration with coworkers.

Why Culture?

At the beginning of 2021, my company was asked to help a large organization plan, design, and build a design system alongside the minimum viable product of a new product idea. This is the kind of work we truly love, so the team was excited to jump in.

As an author of a book about design systems, I want nothing more than to tell you how amazingly this engagement went. Instead, it was a tremendous struggle. Despite this being the perfect kind of work for my team and me on paper, we had to make the hard decision to walk away from our client at the end of that year. Not because we couldn’t do the work. Not because of any technical challenges or budget concerns. The reason we gave was “cultural incompatibility.” In almost twenty years of running my own businesses, this had never happened to me. After all, our clients don’t come to us because they have everything figured out — they come because they know they need help. If we couldn’t guide them through a difficult season, why did we even exist!?

Needless to say, it didn’t sit well with me. So, after following a few useless threads of fear that we just couldn’t cut it, I spent the next year diving down a rabbit hole of research on organizational culture. This next section is a summary of what I learned in that year and how I’ve been putting that to use since. To start, let’s find a common understanding of what culture is.

What Is Culture?

Over the last few decades, a lot has been said about workplace culture. From understanding why it matters and how it impacts the ways we lead, to offering methodologies for changing it. I’ve found tremendous value in the research and writings of Edgar Schein, a business theorist and psychologist. Schein offers a simple model to explain what culture is, breaking it down into three levels:

Artifacts

Artifacts are the top level of Schein’s model. These are the things people think of when you say “culture” — the visible perks a company offers. I once worked at a place where we could expense bringing in donuts for the team. Another job I had provided a foosball table. One company encouraged us to cook lunch together each week. These kinds of things, along with the company swag, the channel in Slack where you get to brag about your peers, and the company retreat are all “artifacts” of your company culture.

Espoused Values And Beliefs

The next layer down is called “espoused values and beliefs.” This is what people inside the culture say they believe. It’s the list of values, the mission statement, the vision. It’s the content on the website and plastered on the walls. It’s the stuff you expect to get when you accept the job because it’s how people answered all your questions throughout the interview process.

Basic Underlying Assumptions

The deepest layer is called “basic underlying assumptions.” This is what people inside the organization actually believe. It’s the way the leadership and employees behave, most notably in the face of a difficult decision. This layer is the root of your culture. And no matter what you show (artifacts), no matter what you say (espoused beliefs), the things you believe (underlying assumptions) will come out eventually.

It Starts At The Bottom

As an employee, you will experience these things from the top down. On your first day, you observe what’s happening around you — you see the artifacts of the culture. Eventually, you get to know a few folks. As you have more and more conversations with them, you’ll begin to hear how they talk about the culture — their espoused beliefs. At some point, people inside your culture will be faced with some tough situations. This is where the rubber meets the road and when you’ll learn what those individuals’ basic underlying assumptions are.

Unhealthy organizations don’t have a process for surfacing and valuing those underlying assumptions. Healthy organizations know that culture starts with the basic underlying assumptions of every individual at the company.

Unhealthy organizations try to create culture with perks and mission statements. Healthy organizations allow the top two layers to emerge naturally from the bottom layer.

When the basic underlying assumptions don’t line up with the espoused beliefs and artifacts, the disconnect is strong. It’s often hard to articulate the problem, but people will feel it. This is the company with a core value of “family first” that requires you to travel all the time with no recognition of the impact it has on your actual family. The espoused belief to prioritize family is not actively supported in the decisions being made.

Strength And Weakness

We all subconsciously know these things, and that is reflected in the language we use as we talk about the culture of an organization. We tend to use the words “strong” and “weak” to describe culture. You might say, “That company has a strong culture.” This statement is an indication that the layers are aligned, and that means the culture itself serves as a way of guiding decisions. If we all have shared values, we can trust one another’s ability to make decisions that will align with those values.

Conversely, an organization with a weak culture is missing the alignment between the things they say and the decisions they make. These cultures often continually add policies and procedures in order to police the behavior of individuals. In this scenario, the culture is weak because it doesn’t offer the organic guidance a stronger culture does — the misalignment means the things we choose to do differ from the things we say.

That is not to say policies and procedures are bad. As companies grow, there is a need to document the expectations for people. The proactive nature of a strong culture means these documents are often a formalization of what has emerged organically, whereas a weak culture reacts to negative situations in the hope of preventing the bad from happening again.

Editor’s Note

Do you like what you’ve read so far? This is just an excerpt of Ben’s upcoming book, Maturing Design Systems, in which he explores the anatomy of a design system, explains how culture shapes outcomes, and shares practical guidance for the challenges at each stage — from building v1 and growing healthy adoption to navigating “the teenage years” and ultimately running a stable, influential system.

Table of Contents

  • Context
    An introduction to the context of design systems, understanding where they live in your organization, what feeds them, and whether you should build one.
  • Design System Culture
    A deep dive into what culture is, why it’s important for design system teams to understand, and how it unlocks the ability for you to deliver real value.
  • The Anatomy of a Design System
    An exploration of the layers and parts that make up a design system based on the evaluation of hundreds of design systems over many years.
  • Maturity
    An overview of the design system maturity model including the four stages of maturity, origin stories, a framework for maturing in a healthy way, and a framework for creating design system stability.
  • Stage 1, Building Version One
    A dive into what it means to be in stage 1 of the design system maturity model and a few mental models to keep you focused on the right things in this early stage.
  • Stage 2, Growing Adoption
    Unpacking stage 2 of the design system maturity model and a deep dive into adoption: broadening your perspective on adoption, the adoption curve, and how to create sustainable adoption.
  • Stage 3, Surviving the Teenage Years
    Understanding the relevant concerns for stage 3 of the design system maturity model and how to address the more nuanced challenges that come with this level of maturity.
  • Stage 4, Evolving a Healthy Program
    Exploring what it means to be in stage 4 of the design system maturity model, when you’ve become an influential leader in the eyes of the rest of your organization.

About The Author

Ben Callahan is an author, design system researcher, coach, and speaker. He founded Redwoods, a design system community, and The Question, a weekly forum for collaborative learning. As a founding partner at Sparkbox, he helps organizations embed human-centered culture into their design systems. His work bridges people and systems, emphasizing sustainable growth, team alignment, and meaningful impact in technology. He believes every interaction is an opportunity to learn.

Reviewers’ Testimonials

“This book is a clear and insightful blueprint for maturing design systems at scale. For well-supported teams, it offers strategy and clarity grounded in real examples. For smaller teams like mine, it serves as a North Star that helps you advocate for the work and find solutions that fit your team's maturity. I highly recommend it to anyone building a design system.”

Lenora Porter, Product Designer

“Ben draws connections between process, collaboration, and identity in ways that feel both intuitive and revelatory. Many design system books live comfortably in the tactical and technical, but this one moves beyond the how and into the why — inviting readers to reflect on their roles not just as product owners, designers or engineers, but as stewards of shared understanding within complex organisations. This book doesn’t prescribe rigid solutions. Instead, it encourages self-inquiry and alignment, asking readers to consider how they can bring intentionality, empathy, and resilience into the systems they touch.”

Tarunya Varma, Product Design Manager, Tide

“Ben Callahan’s “Maturing Design Systems” puts language to the struggles many of us feel but can’t quite explain. It unpacks the hidden influence of culture, setup, and leadership, providing you with the clarity, tools, and frameworks to course-correct and move your system work forward, whether you’re navigating a growing startup or a scaling enterprise.”

Ness Grixti, Design Lead, Wise, and Author of “A Practical Guide to Design System Components”

Don’t Miss Out!

Through years of interviews, coaching, and consulting, Ben has discovered a model for how design systems mature. Understanding how systems tend to mature allows you to create a sustainable program around your design system — one that acknowledges the human and change-management side of this work, not just the technical and creative.

This book will be a valuable resource for anyone working with design systems!

Spread The Word

Sign up to our Smashing newsletter and be one of the first to know when Maturing Design Systems is available for preorder. We can’t wait to share this book with you!

]]>
hello@smashingmagazine.com (Ari Stiles)
<![CDATA[Designing For Stress And Emergency]]> https://smashingmagazine.com/2025/11/designing-for-stress-emergency/ https://smashingmagazine.com/2025/11/designing-for-stress-emergency/ Mon, 24 Nov 2025 13:00:00 GMT No design exists in isolation. As designers, we often imagine specific situations in which people will use our product. Those situations might indeed be quite common — but there will also be others: urgent, frustrating, stressful situations. And they are the ones that we rarely account for.

So how do we account for such situations? How can we help people use our products while coping with stress — without adding to their cognitive load? Let’s take a closer look.

Study Where Your Product Fits Into People’s Lives

When designing digital products, sometimes we get a bit too attached to our shiny new features and flows — often forgetting the messy reality in which these features and flows have to neatly fit. And often that reality means tens of other products, hundreds of other tabs, and thousands of other emails.

If your customers have to use a slightly older machine, with a smallish 22" screen and a lot of background noise, they might use your product differently than you might have imagined, e.g., splitting the screen into halves to see both views at the same time (as displayed above).

Chances are high that our customers will use our product while doing something else, often with very little motivation, very little patience, plenty of urgent (and way more important) problems, and an unhealthy dose of stress. And that’s where our product must do its job well.

What Is Stress?

What exactly do we mean when we talk about “stress”? As H Locke noted, stress is the body’s response to a situation it cannot handle. There is a mismatch between what people can control, their own skills, and the challenge in front of them.

If the situation seems unmanageable and the goal they want to achieve moves further away, it creates an enormous sense of failure. It can be extremely frustrating and demotivating.

Some failures have a local scope, but many have a far-reaching impact. Many people can’t choose the products they have to use for work, so when a tool fails repeatedly, causes frustration, or is unreliable, it affects the worker, the work, the colleagues, and processes within the organization. Fragility has a high cost — and so does frustration.

How Stress Influences User Interactions

It’s not a big surprise: stress disrupts attention, memory, cognition, and decision-making. It makes it difficult to prioritize and draw logical conclusions. In times of stress, we rely on fast, intuitive judgments, not reasoning. Typically, it leads to instinctive responses based on established habits.

When users are in an emergency, they experience cognitive tunneling — a state in which their peripheral vision narrows, reading comprehension drops, fine motor skills deteriorate, and patience drops sharply. Under pressure, some people make decisions hastily, while others get entirely paralyzed. Either way is a likely path to mistakes — often irreversible ones and often without time for extensive deliberation.

Ideally, these decisions would be made way ahead of time — and then suggested when needed. But in practice, it’s not always possible. As it turns out, a good way to help people deal with stress is by providing order around how they manage it.

Single-Tasking Instead Of Multi-Tasking

People can’t really multi-task, especially in very stressful situations or emergencies. With a big chunk of work in front of them, people need some order to make progress reliably. That’s why simpler pages usually work better than one big, complex page.

Order means giving users a clear plan of action to complete a task. No distractions, no unnecessary navigation. We ask simple questions and prompt simple actions, one after another, one thing at a time.

An example of such a plan is the Task List Pattern, invented by the fine folks at GOV.UK. We break a task into a sequence of sub-tasks, describe them with actionable labels, assign statuses, and track progress.

To support accuracy, we revise default settings, values, presets, and actions. Also, the order of actions and buttons matters, so we put high-priority things first to make them easier to find. Then we add built-in safeguards (e.g., an Undo feature) to prevent irreversible errors.

Supporting In Emergencies

The most effective help during emergencies is to help people deal with the situation in a well-defined and effective way. That means being prepared for, and designing, an emergency mode, e.g., one that sends instant alerts to emergency contacts, distributes pre-assigned tasks, and establishes a line of communication.

The Rediplan app by the Australian Red Cross is an emergency plan companion that encourages citizens to prepare their documents and belongings with a few checklists and actions — including key contacts, meeting places, and medical information, all in one place.

Just Enough Friction

Not all stress is equally harmful, though. As Krystal Higgins points out, if there is not enough friction when onboarding new users, and the experience is too passive or users are hand-held through even the most basic tasks, you risk that they won’t realize the personal value they gain from the experience and will, ultimately, lose interest.

Design And Test For Stress Cases

Stress cases aren’t edge cases. We can’t predict the emotional state in which a user comes to our site or uses our product. A person looking for specific information on a hospital website or visiting a debt management website, for example, is most likely already stressed. Now, if the interface is overwhelming, it will only add to their cognitive load.

Stress-testing your product is critical to prevent this from happening. It’s useful to set up an annual day to stress test your product and refine emergency responses. It could be as simple as running content testing, or running tests in a real, noisy, busy environment where users actually work — at peak times.

And in case of emergencies, we need to check if fallbacks work as expected and if the current UX of the product helps people manage failures and exceptional situations well enough.

Wrapping Up

Emergencies will happen eventually — it’s just a matter of time. With good design, we can help mitigate risk and control damage, and make it hard to make irreversible mistakes. At its heart, that’s what good UX is exceptionally good at.

Key Takeaways

People can’t multitask, especially in very stressful situations.

  • Stress disrupts attention, memory, cognition, decision-making.
  • Also, it’s difficult to prioritize and draw logical conclusions.
  • Under stress, we rely on fast, intuitive judgments — not reasoning.
  • It leads to instinctive responses based on established habits.

Goal: Design flows that support focus and high accuracy.

  • Start with better default settings, values, presets, and actions.
  • High-priority first: order of actions and buttons matters.
  • Break complex tasks down into a series of simple steps (10s–30s each).
  • Add built-in safeguards to prevent irreversible errors (Undo).

Shift users to single-tasking: ask for one thing at a time.

  • Simpler pages might work better than one complex page.
  • Suggest a step-by-step plan of action to follow along.
  • Consider, design, and test flows for emergency responses ahead of time.
  • Add emergency mode for instant alerts and task assignments.

Meet “How To Measure UX And Design Impact”

You can find more details on UX Strategy in 🪴 Measure UX & Design Impact (8h), a practical guide for designers and UX leads to measure and show your UX impact on business. Use the code 🎟 IMPACT to save 20% off today. Jump to the details.

Video + UX Training

$ 495.00 $ 799.00 Get Video + UX Training

25 video lessons (8h) + Live UX Training.
100-day money-back guarantee.

Video only

$ 250.00 $ 395.00
Get the video course

25 video lessons (8h). Updated yearly.
Also available as a UX Bundle with 2 video courses.

Useful Resources

Further Reading

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[Keyframes Tokens: Standardizing Animation Across Projects]]> https://smashingmagazine.com/2025/11/keyframes-tokens-standardizing-animation-across-projects/ https://smashingmagazine.com/2025/11/keyframes-tokens-standardizing-animation-across-projects/ Fri, 21 Nov 2025 08:00:00 GMT Picture this: you join a new project, dive into the codebase, and within the first few hours, you discover something frustratingly familiar. Scattered throughout the stylesheets, you find multiple @keyframes definitions for the same basic animations. Three different fade-in effects, two or three slide variations, a handful of zoom animations, and at least two different spin animations because, well, why not?

@keyframes pulse {
  from {
    scale: 1;
  }
  to {
    scale: 1.1;
  }
}

@keyframes bigger-pulse {
  0%, 20%, 100% { 
    scale: 1; 
  }
  10%, 40% { 
    scale: 1.2; 
  }
}

If this scenario sounds familiar, you’re not alone. In my experience across various projects, one of the most consistent quick wins I can deliver is consolidating and standardizing keyframes. It’s become such a reliable pattern that I now look forward to this cleanup as one of my first tasks on any new codebase.

The Logic Behind The Chaos

This redundancy makes perfect sense when you think about it. We all use the same fundamental animations in our day-to-day work: fades, slides, zooms, spins, and other common effects. These animations are pretty straightforward, and it's easy to whip up a quick @keyframes definition to get the job done.

Without a centralized animation system, developers naturally write these keyframes from scratch, unaware that similar animations already exist elsewhere in the codebase. This is especially common when working in component-based architectures (which most of us do these days), as teams often work in parallel across different parts of the application.

The result? Animation chaos.

The Small Problem

The most obvious issues with keyframes duplication are wasted development time and unnecessary code bloat. Multiple keyframe definitions mean multiple places to update when requirements change. Need to adjust the timing of your fade animation? You’ll need to hunt down every instance across your codebase. Want to standardize easing functions? Good luck finding all the variations. This multiplication of maintenance points makes even simple animation updates a time-consuming task.

The Bigger Problem

This keyframes duplication creates a much more insidious problem lurking beneath the surface: the global scope trap. Even when working with component-based architectures, CSS keyframes are always defined in the global scope. This means all keyframes apply to all components. Always. Yes, your animation doesn’t necessarily use the keyframes you defined in your component. It uses the last keyframes with that exact same name that were loaded into the global scope.

As long as all your keyframes are identical, this might seem like a minor issue. But the moment you want to customize an animation for a specific use case, you’re in trouble, or worse, you’ll be the one causing them.

Either your animation won’t work because another component loaded after yours, overwriting your keyframes, or your component loads last and accidentally changes the animation behavior for every other component using that keyframe's name, and you may not even realize it.

Here’s a simple example that demonstrates the problem:

.component-one {
  /* component styles */
  animation: pulse 1s ease-in-out infinite alternate;
}

/* this @keyframes definition will not work */
@keyframes pulse {
  from {
    scale: 1;
  }
  to {
    scale: 1.1;
  }
} 

/* later in the code... */

.component-two {
  /* component styles */
  animation: pulse 1s ease-in-out infinite;
}

/* this keyframes will apply to both components */
@keyframes pulse {
  0%, 20%, 100% { 
    scale: 1; 
  }
  10%, 40% { 
    scale: 1.2; 
  }
}

Both components use the same animation name, but the second @keyframes definition overwrites the first one. Now both component-one and component-two will use the second keyframes, regardless of which component defined which keyframes.

See the Pen Keyframes Tokens - Demo 1 [forked] by Amit Sheen.

The worst part? This often works perfectly in local development but breaks mysteriously in production when build processes change the loading order of your stylesheets. You end up with animations that behave differently depending on which components are loaded and in what sequence.

The Solution: Unified Keyframes

The answer to this chaos is surprisingly simple: predefined dynamic keyframes stored in a shared stylesheet. Instead of letting every component define its own animations, we create centralized keyframes that are well-documented, easy to use, maintainable, and tailored to the specific needs of your project.

Think of it as keyframes tokens. Just as we use tokens for colors and spacing, and many of us already use tokens for animation properties, like duration and easing functions, why not use tokens for keyframes as well?

This approach can integrate naturally with any current design token workflow you’re using, while solving both the small problem (code duplication) and the bigger problem (global scope conflicts) in one go.
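To make that integration concrete, here is a minimal sketch of how a keyframes token could sit next to the animation property tokens a project may already define. The --duration-slow and --ease-linear names below are hypothetical placeholders (use whatever your token system already provides), and kf-spin simply stands in for those duplicated spin animations from earlier:

/* Existing animation property tokens (hypothetical names) */
:root {
  --duration-slow: 1200ms;
  --ease-linear: linear;
}

/* A keyframes token living in the same shared stylesheet */
@keyframes kf-spin {
  from {
    rotate: 0turn;
  }
  to {
    rotate: 1turn;
  }
}

/* Components consume both kinds of tokens together */
.loading-spinner {
  animation: kf-spin var(--duration-slow) var(--ease-linear) infinite;
}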

The idea is straightforward: create a single source of truth for all our common animations. This shared stylesheet contains carefully crafted keyframes that cover the animation patterns our project actually uses. No more guessing whether a fade animation already exists somewhere in our codebase. No more accidentally overwriting animations from other components.

But here’s the key: these aren’t just static copy-paste animations. They’re designed to be dynamic and customizable through CSS custom properties, allowing us to maintain consistency while still having the flexibility to adapt animations to specific use cases, like if you need a slightly bigger “pulse” animation in one place.
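For example, those duplicated pulse keyframes from the start of this article could collapse into a single dynamic token. Here’s a rough sketch using a hypothetical --kf-pulse-scale custom property (a simplified pulse, not an exact recreation of the originals):

/* One shared pulse token; the peak scale is customizable per component */
@keyframes kf-pulse {
  from {
    scale: 1;
  }
  to {
    scale: var(--kf-pulse-scale, 1.1);
  }
}

.card {
  animation: kf-pulse 1s ease-in-out infinite alternate;
  /* Uses the default peak scale of 1.1 */
}

.featured-card {
  animation: kf-pulse 1s ease-in-out infinite alternate;
  --kf-pulse-scale: 1.2; /* a slightly bigger pulse, no extra keyframes needed */
}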

Building The First Keyframes Token

One of the first low-hanging fruits we should tackle is the “fade-in” animation. In one of my recent projects, I found over a dozen separate fade-in definitions, and yes, they all simply animated the opacity from 0 to 1.

So, let’s create a new stylesheet, call it kf-tokens.css, import it into our project, and place our keyframes with proper comments inside of it.

/* kf-tokens.css */

/*
 * Fade In - fade entrance animation
 * Usage: animation: kf-fade-in 0.3s ease-out;
 */
@keyframes kf-fade-in {
  from {
    opacity: 0;
  }
  to {
    opacity: 1;
  }
}

This single @keyframes declaration replaces all those scattered fade-in animations across our codebase. Clean, simple, and globally applicable. And now that we have this token defined, we can use it from any component throughout our project:

.modal {
  animation: kf-fade-in 0.3s ease-out;
}

.tooltip {
  animation: kf-fade-in 0.2s ease-in-out;
}

.notification {
  animation: kf-fade-in 0.5s ease-out;
}

See the Pen Keyframes Tokens - Demo 2 [forked] by Amit Sheen.

Note: We’re using a kf- prefix in all our @keyframes names. This prefix serves as a namespace that prevents naming conflicts with existing animations in the project and makes it immediately clear that these keyframes come from our keyframes tokens file.

Making A Dynamic Slide

The kf-fade-in keyframes work great because they’re simple and there's little room to mess things up. In other animations, however, we need to be much more dynamic, and here we can leverage the enormous power of CSS custom properties. This is where keyframes tokens really shine compared to scattered static animations.

Let’s take a common scenario: “slide-in” animations. But slide in from where? 100px from the right? 50% from the left? Should it enter from the top of the screen? Or maybe float in from the bottom? So many possibilities, but instead of creating separate keyframes for each direction and each variation, we can build one flexible token that adapts to all scenarios:

/*
 * Slide In - directional slide animation
 * Use --kf-slide-from to control direction
 * Default: slides in from left (-100%)
 * Usage: 
 *   animation: kf-slide-in 0.3s ease-out;
 *   --kf-slide-from: -100px 0; // slide from left
 *   --kf-slide-from: 100px 0;  // slide from right
 *   --kf-slide-from: 0 -50px;  // slide from top
 */

@keyframes kf-slide-in {
  from {
    translate: var(--kf-slide-from, -100% 0);
  }
  to {
    translate: 0 0;
  }
}

Now we can use this single @keyframes token for any slide direction simply by changing the --kf-slide-from custom property:

.sidebar {
  animation: kf-slide-in 0.3s ease-out;
  /* Uses default value: slides from left */
}

.notification {
  animation: kf-slide-in 0.4s ease-out;
  --kf-slide-from: 0 -50px; /* slide from top */
}

.modal {
  animation:
    kf-fade-in 0.5s,
    kf-slide-in 0.5s cubic-bezier(0.34, 1.56, 0.64, 1);
  --kf-slide-from: 50px 50px; /* slide from bottom-right */
}

This approach gives us incredible flexibility while maintaining consistency. One keyframe declaration, infinite possibilities.

See the Pen Keyframes Tokens - Demo 3 [forked] by Amit Sheen.

And if we want to make our animations even more flexible, allowing for “slide-out” effects as well, we can simply add a --kf-slide-to custom property, similar to what we’ll see in the next section.
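
As a rough sketch (the kf-slide name and defaults here are illustrative, not part of the tokens defined above), a bidirectional version could look like this:

@keyframes kf-slide {
  from {
    translate: var(--kf-slide-from, 0 0);
  }
  to {
    translate: var(--kf-slide-to, 0 0);
  }
}

With both custom properties defaulting to 0 0, the same token can slide an element in, out, or between any two offsets, depending on which properties you set.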

Bidirectional Zoom Keyframes

Another common animation that gets duplicated across projects is “zoom” effects. Whether it’s a subtle scale-up for toast messages, a dramatic zoom-in for modals, or a gentle scale-down effect for headings, zoom animations are everywhere.

Instead of creating separate keyframes for each scale value, let’s build one flexible set of kf-zoom keyframes:

/*
 * Zoom - scale animation
 * Use --kf-zoom-from and --kf-zoom-to to control scale values
 * Default: zooms from 80% to 100% (0.8 to 1)
 * Usage:
 *   animation: kf-zoom 0.2s ease-out;
 *   --kf-zoom-from: 0.5; --kf-zoom-to: 1;   // zoom from 50% to 100%
 *   --kf-zoom-from: 1; --kf-zoom-to: 0;     // zoom from 100% to 0%
 *   --kf-zoom-from: 1; --kf-zoom-to: 1.1;   // zoom from 100% to 110%
 */

@keyframes kf-zoom {
  from {
    scale: var(--kf-zoom-from, 0.8);
  }
  to {
    scale: var(--kf-zoom-to, 1);
  }
}

With one definition, we can achieve any zoom variation we need:

.toast {
  animation:
    kf-slide-in 0.2s,
    kf-zoom 0.4s ease-out;
  --kf-slide-from: 0 100%; /* slide in from the bottom */
  /* Uses default zoom: scales from 80% to 100% */
}

.modal {
  animation: kf-zoom 0.3s cubic-bezier(0.34, 1.56, 0.64, 1);
  --kf-zoom-from: 0; /* dramatic zoom from 0% to 100% */
}

.heading {
  animation:
    kf-fade-in 2s,
    kf-zoom 2s ease-in;
  --kf-zoom-from: 1.2; 
  --kf-zoom-to: 0.8; /* gentle scale down */
}

The default of 0.8 (80%) works perfectly for most UI elements, like toast messages and cards, while still being easy to customize for special cases.

See the Pen Keyframes Tokens - Demo 4 [forked] by Amit Sheen.

You might have noticed something interesting in the recent examples: we've been combining animations. One of the key advantages of working with @keyframes tokens is that they’re designed to integrate seamlessly with each other. This smooth composition is intentional, not accidental.

We’ll discuss animation composition in more detail later, including where they can become problematic, but most combinations are straightforward and easy to implement.

Note: While writing this article, and maybe because of writing it, I found myself rethinking the whole idea of entrance animations. With all the recent advances in CSS, do we still need them at all? Luckily, Adam Argyle explored the same questions and expressed them brilliantly in his blog. This doesn’t contradict what’s written here, but it does present an approach worth considering, especially if your projects rely heavily on entrance animations.

Continuous Animations

While entrance animations, like “fade”, “slide”, and “zoom” happen once and then stop, continuous animations loop indefinitely to draw attention or indicate ongoing activity. The two most common continuous animations I encounter are “spin” (for loading indicators) and “pulse” (for highlighting important elements).

These animations present unique challenges when it comes to creating keyframes tokens. Unlike entrance animations that typically go from one state to another, continuous animations need to be highly customizable in their behavior patterns.

The Spin Doctor

Every project seems to use multiple spin animations. Some spin clockwise, others counterclockwise. Some do a single 360-degree rotation, others do multiple turns for a faster effect. Instead of creating separate keyframes for each variation, let’s build one flexible spin that handles all scenarios:

/*
 * Spin - rotation animation
 * Use --kf-spin-from and --kf-spin-to to control rotation range
 * Use --kf-spin-turns to control rotation amount
 * Default: rotates from 0deg to 360deg (1 full rotation)
 * Usage:
 *   animation: kf-spin 1s linear infinite;
 *   --kf-spin-turns: 2;   // 2 full rotations
 *   --kf-spin-from: 0deg; --kf-spin-to: 180deg;  // half rotation
 *   --kf-spin-from: 0deg; --kf-spin-to: -360deg; // counterclockwise
 */

@keyframes kf-spin {
  from {
    rotate: var(--kf-spin-from, 0deg);
  }
  to {
    rotate: calc(var(--kf-spin-from, 0deg) + var(--kf-spin-to, 360deg) * var(--kf-spin-turns, 1));
  }
}

Now we can create any spin variation we like:

.loading-spinner {
  animation: kf-spin 1s linear infinite;
  /* Uses default: rotates from 0deg to 360deg */
} 

.fast-loader {
  animation: kf-spin 1.2s ease-in-out infinite alternate;
  --kf-spin-turns: 3; /* 3 full rotations for each direction per cycle */
}

.stepped-reverse {
  animation: kf-spin 1.5s steps(8) infinite;
  --kf-spin-to: -360deg; /* counterclockwise */
}

.subtle-wiggle {
  animation: kf-spin 2s ease-in-out infinite alternate;
  --kf-spin-from: -16deg;
  --kf-spin-to: 32deg; /* wiggle 32deg: between -16deg and +16deg */
}

See the Pen Keyframes Tokens - Demo 5 [forked] by Amit Sheen.

The beauty of this approach is that the same keyframes work for loading spinners, rotating icons, wiggle effects, and even complex multi-turn animations.

The Pulse Paradox

Pulse animations are trickier because they can “pulse” different properties. Some pulse the scale, others pulse the opacity, and some pulse color properties like brightness or saturation. Rather than creating separate keyframes for each property, we can create keyframes that work with any CSS property.

Here's an example of a pulse keyframe with scale and opacity options:

/* 
 * Pulse - pulsing animation
 * Use --kf-pulse-scale-from and --kf-pulse-scale-to to control scale range
 * Use --kf-pulse-opacity-from and --kf-pulse-opacity-to to control opacity range
 * Default: no pulse (all values 1)
 * Usage:
 *   animation: kf-pulse 2s ease-in-out infinite alternate;
 *   --kf-pulse-scale-from: 0.95; --kf-pulse-scale-to: 1.05; // scale pulse
 *   --kf-pulse-opacity-from: 0.7; --kf-pulse-opacity-to: 1; // opacity pulse
 */

@keyframes kf-pulse {
  from {
    scale: var(--kf-pulse-scale-from, 1);
    opacity: var(--kf-pulse-opacity-from, 1);
  }
  to {
    scale: var(--kf-pulse-scale-to, 1);
    opacity: var(--kf-pulse-opacity-to, 1);
  }
}

This creates a flexible pulse that can animate multiple properties:

.call-to-action { 
  animation: kf-pulse 0.6s infinite alternate;
  --kf-pulse-opacity-from: 0.5; /* opacity pulse */
}

.notification-dot {
  animation: kf-pulse 0.6s ease-in-out infinite alternate;
  --kf-pulse-scale-from: 0.9; 
  --kf-pulse-scale-to: 1.1; /* scale pulse */
}

.text-highlight {
  animation: kf-pulse 1.5s ease-out infinite;
  --kf-pulse-scale-from: 0.8;
  --kf-pulse-opacity-from: 0.2;
  /* scale and opacity pulse */
}

See the Pen Keyframes Tokens - Demo 6 [forked] by Amit Sheen.

This single kf-pulse keyframe can handle everything from subtle attention grabs to dramatic highlights, all while being easy to customize.

Advanced Easing

One of the great things about using keyframes tokens is how easy it is to expand our animation library and provide effects that most developers would not bother to write from scratch, like elastic or bounce.

Here is an example of a simple “bounce” keyframes token that uses a --kf-bounce-from custom property to control the jump height.

/*
 * Bounce - bouncing entrance animation
 * Use --kf-bounce-from to control jump height
 * Default: jumps from 100vh (off screen)
 * Usage:
 *   animation: kf-bounce 3s ease-in;
 *   --kf-bounce-from: 200px; // jump from 200px height
 */

@keyframes kf-bounce {
  0% {
    translate: 0 calc(var(--kf-bounce-from, 100vh) * -1);
  }

  34% {
    translate: 0 calc(var(--kf-bounce-from, 100vh) * -0.4);
  }

  55% {
    translate: 0 calc(var(--kf-bounce-from, 100vh) * -0.2);
  }

  72% {
    translate: 0 calc(var(--kf-bounce-from, 100vh) * -0.1);
  }

  85% {
    translate: 0 calc(var(--kf-bounce-from, 100vh) * -0.05);
  }

  94% {
    translate: 0 calc(var(--kf-bounce-from, 100vh) * -0.025);
  }

  99% {
    translate: 0 calc(var(--kf-bounce-from, 100vh) * -0.0125);
  }

  22%, 45%, 64%, 79%, 90%, 97%, 100% {
    translate: 0 0;
    animation-timing-function: ease-out;
  }
}

Animations like “elastic” are a bit trickier because of the calculations inside the keyframes. We need to define --kf-elastic-from-X and --kf-elastic-from-Y separately (both are optional), and together they let us create an elastic entrance from any point on the screen.

/*
 * Elastic In - elastic entrance animation
 * Use --kf-elastic-from-X and --kf-elastic-from-Y to control start position
 * Default: enters from the left (-50vw, 0)
 * Usage:
 *   animation: kf-elastic-in 2s ease-in-out both;
 *   --kf-elastic-from-X: -50px;
 *   --kf-elastic-from-Y: -200px; // enter from (-50px, -200px)
 */

@keyframes kf-elastic-in {
  0% {
    translate: calc(var(--kf-elastic-from-X, -50vw) * 1) calc(var(--kf-elastic-from-Y, 0px) * 1);
  }

  16% {
    translate: calc(var(--kf-elastic-from-X, -50vw) * -0.3227) calc(var(--kf-elastic-from-Y, 0px) * -0.3227);
  }

  28% {
    translate: calc(var(--kf-elastic-from-X, -50vw) * 0.1312) calc(var(--kf-elastic-from-Y, 0px) * 0.1312);
  }

  44% {
    translate: calc(var(--kf-elastic-from-X, -50vw) * -0.0463) calc(var(--kf-elastic-from-Y, 0px) * -0.0463);
  }

  59% {
    translate: calc(var(--kf-elastic-from-X, -50vw) * 0.0164) calc(var(--kf-elastic-from-Y, 0px) * 0.0164);
  }

  73% {
    translate: calc(var(--kf-elastic-from-X, -50vw) * -0.0058) calc(var(--kf-elastic-from-Y, 0px) * -0.0058);
  }

  88% {
    translate: calc(var(--kf-elastic-from-X, -50vw) * 0.0020) calc(var(--kf-elastic-from-Y, 0px) * 0.0020);
  }

  100% {
    translate: 0 0;
  }
}

This approach makes it easy to reuse and customize advanced keyframes across our project, just by changing a single custom property.

.bounce-and-zoom {
  animation:
    kf-bounce 3s ease-in,
    kf-zoom 3s linear;
  --kf-zoom-from: 0;
}

.bounce-and-slide {
  animation-composition: add; /* Both animations use translate */
  animation:
    kf-bounce 3s ease-in,
    kf-slide-in 3s ease-out;
  --kf-slide-from: -200px;
}

.elastic-in {
  animation: kf-elastic-in 2s ease-in-out both;
}

See the Pen Keyframes Tokens - Demo 7 [forked] by Amit Sheen.

Up to this point, we’ve seen how we can consolidate keyframes in a smart and efficient way. Of course, you might want to tweak things to better fit your project’s needs, but we’ve covered examples of several common animations and everyday use cases. And with these keyframes tokens in place, we now have powerful building blocks for creating consistent, maintainable animations across the entire project. No more duplicated keyframes, no more global scope conflicts. Just a clean, convenient way to handle all our animation needs.

But the real question is: How do we compose these building blocks together?

Putting It All Together

We’ve seen that combining basic keyframes tokens is simple. We don’t need anything special: define the first animation, define the second one, set the variables as needed, and that’s it.

/* Fade in + slide in */
.toast {
  animation:
    kf-fade-in 0.4s,
    kf-slide-in 0.4s cubic-bezier(0.34, 1.56, 0.64, 1);
  --kf-slide-from: 0 40px;
}

/* Zoom in + fade in */
.modal {
  animation:
    kf-fade-in 0.3s,
    kf-zoom 0.3s cubic-bezier(0.34, 1.56, 0.64, 1);
  --kf-zoom-from: 0.7;
  --kf-zoom-to: 1;
}

/* Slide in + pulse */
.notification {
  animation:
    kf-slide-in 0.5s,
    kf-pulse 1.2s ease-in-out infinite alternate;
  --kf-slide-from: -100px 0;
  --kf-pulse-scale-from: 0.95;
  --kf-pulse-scale-to: 1.05;
}

These combinations work beautifully because each animation targets a different property: opacity, translate, scale, and so on. But sometimes there are conflicts, and we need to know why they happen and how to deal with them.

When two animations try to animate the same property — for example, both animating scale or both animating opacity — the result will not be what you expect. By default, only one of the animations is actually applied to that property, which is the last one in the animation list. This is a limitation of how CSS handles multiple animations on the same property.

For example, this will not work as intended because only the kf-pulse animation will apply.

.bad-combo {
  animation:
    kf-zoom 0.5s forwards,
    kf-pulse 1.2s infinite alternate;
  --kf-zoom-from: 0.5;
  --kf-zoom-to: 1.2;
  --kf-pulse-scale-from: 0.8;
  --kf-pulse-scale-to: 1.1;
}

Animation Addition

The simplest and most direct way to handle multiple animations that affect the same property is to use the animation-composition property. In the last example above, the kf-pulse animation replaces the kf-zoom animation, so we will not see the initial zoom and will not get the expected final scale of 1.2.

By setting the animation-composition to add, we tell the browser to combine both animations. This gives us the result we want.

.bad-combo {
  animation-composition: add;
}

See the Pen Keyframes Tokens - Demo 8 [forked] by Amit Sheen.

This approach works well for most cases where we want to combine effects on the same property. It is also useful when we need to combine animations with static property values.

For example, if we have an element that uses the translate property to position it exactly where we want, and then we want to animate it in with the kf-slide-in keyframes, we get a nasty visible jump without animation-composition.
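
A sketch of that situation might look like this (the class name and offsets are placeholders):

.docked-panel {
  /* static positioning via the translate property */
  translate: 0 200px;

  /* without animation-composition, kf-slide-in temporarily replaces the
     translate above, and the element jumps back down when the animation ends */
  animation: kf-slide-in 0.4s ease-out;
  animation-composition: add; /* adds the animated offset to the static one */
  --kf-slide-from: -100px 0;
}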

See the Pen Keyframes Tokens - Demo 9 [forked] by Amit Sheen.

With animation-composition set to add, the animation is smoothly combined with the existing transform, so the element stays in place and animates as expected.

Animation Stagger

Another way of handling multiple animations is to “stagger” them — that is, delay the second animation so it only starts once the first one finishes. It is not a solution that works for every case, but it is useful when we have an entrance animation followed by a continuous animation.

/* fade in + opacity pulse */
.notification {
  animation:
    kf-fade-in 2s ease-out,
    kf-pulse 0.5s 2s ease-in-out infinite alternate;
  --kf-pulse-opacity-to: 0.5;
}

See the Pen Keyframes Tokens - Demo 10 [forked] by Amit Sheen.

Order Matters

A large part of the animations we work with use the transform property. In most cases, this is simply more convenient. It also has a performance advantage as transform animations can be GPU-accelerated. But if we use transforms, we need to accept that the order in which we perform our transformations matters. A lot.

In our keyframes so far, we’ve used individual transforms. According to the specs, these are always applied in a fixed order: first, the element gets translate, then rotate, then scale. This makes sense and is what most of us expect.

However, if we use the transform property, the order in which the functions are written is the order in which they are applied. In this case, if we move something 100 pixels on the X-axis and then rotate it by 45 degrees, it is not the same as first rotating it by 45 degrees and then moving it 100 pixels.

/* Pink square: First translate, then rotate */ 
.example-one {
  transform: translateX(100px) rotate(45deg);
}

/* Green square: First rotate, then translate */
.example-two { 
  transform: rotate(45deg) translateX(100px);
}

See the Pen Keyframes Tokens - Demo 11 [forked] by Amit Sheen.

But according to the transform order, all individual transforms — everything we’ve used for the keyframes tokens — happen before the transform functions. That means anything you set in the transform property will be applied after the animations. But if you set, for example, the translate property together with the kf-spin keyframes, the translate will be applied before the animation. Confused yet?!

This leads to situations where static values can cause different results for the same animation, like in the following case:

/* Common animation for both spinners */ 
.spinner {
  animation: kf-spin 1s linear infinite;
}

/* Pink spinner: translate before rotate (individual transform) */
.spinner-pink {
  translate: 100% 50%;
}

/* Green spinner: rotate then translate (function order) */
.spinner-green {
  transform: translate(100%, 50%);
}

See the Pen Keyframes Tokens - Demo 12 [forked] by Amit Sheen.

You can see that the first spinner (pink) gets a translate that happens before the rotate of kf-spin, so it first moves to its place and then spins. The second spinner (green) gets a translate() function that happens after the individual transform, so the element first spins, then moves relative to its current angle, and we get that wide orbit effect.

No, this is not a bug. It is just one of those things we need to know about CSS and keep in mind when working with multiple animations or multiple transforms. If needed, you can also create an additional set of kf-spin-alt keyframes that rotate elements using the rotate() function.
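
A possible shape for such an alternative (kept deliberately close to kf-spin, so treat it as a sketch rather than a finished token):

@keyframes kf-spin-alt {
  from {
    transform: rotate(var(--kf-spin-from, 0deg));
  }
  to {
    transform: rotate(calc(var(--kf-spin-from, 0deg) + var(--kf-spin-to, 360deg) * var(--kf-spin-turns, 1)));
  }
}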

Reduced Motion

And while we’re talking about alternative keyframes, we cannot ignore the “no animation” option. One of the biggest advantages of using keyframes tokens is that accessibility can be baked in, and it is actually quite easy to do. By designing our keyframes with accessibility in mind, we can ensure that users who prefer reduced motion get a smoother, less distracting experience, without extra work or code duplication.

The exact meaning of “Reduced Motion” can change a bit from one animation to another, and from project to project, but here are a few important points to keep in mind:

Muting Keyframes

While some animations can be softened or slowed down, there are others that should disappear completely when reduced motion is requested. Pulse animations are a good example. To make sure these animations do not run in reduced motion mode, we can simply wrap them in the appropriate media query.


@media (prefers-reduced-motion: no-preference) {
  @keyframes kf-pulse {
    from {
      scale: var(--kf-pulse-scale-from, 1);
      opacity: var(--kf-pulse-opacity-from, 1);
    }
    to {
      scale: var(--kf-pulse-scale-to, 1);
      opacity: var(--kf-pulse-opacity-to, 1);
    }
  }
}

This ensures that users who have set prefers-reduced-motion to reduce will not see the animation and will get an experience that matches their preference.

Instant In

There are some keyframes we cannot simply remove, such as entrance animations. The animated values still need to reach their final state; otherwise, the element won't end up with the correct values. But in reduced motion, this transition from the initial value should be instant.

To achieve this, we’ll define an extra set of keyframes where the value jumps immediately to the end state. These become our default keyframes. Then, we’ll add the regular keyframes inside a media query for prefers-reduced-motion set to no-preference, just like in the previous example.

/* pop in instantly for reduced motion */
@keyframes kf-zoom {
  from, to {
    scale: var(--kf-zoom-to, 1);
  }
}

@media (prefers-reduced-motion: no-preference) {
  /* Original zoom keyframes */
  @keyframes kf-zoom {
    from {
      scale: var(--kf-zoom-from, 0.8);
    }
    to {
      scale: var(--kf-zoom-to, 1);
    }
  }
}

This way, users who prefer reduced motion will see the element appear instantly in its final state, while everyone else gets the animated transition.

The Soft Approach

There are cases where we do want to keep some movement, but much softer and calmer than the original animation. For example, we can replace a bounce entrance with a gentle fade-in.


@keyframes kf-bounce {
  /* Soft fade-in for reduced motion */
  from {
    opacity: 0;
  }
  to {
    opacity: 1;
  }
}

@media (prefers-reduced-motion: no-preference) {
  @keyframes kf-bounce {
    /* Original bounce keyframes */
  }
}

Now, users with reduced motion enabled still get a sense of appearance, but without the intense movement of a bounce or elastic animation.

With the building blocks in place, the next question is how to make them part of the actual workflow. Writing flexible keyframes is one thing, but making them reliable across a large project requires a few strategies that I had to learn the hard way.

Implementation Strategies & Best Practices

Once we have a solid library of keyframes tokens, the real challenge is how to bring them into everyday work.

  • The temptation is to drop all keyframes in at once and declare the problem solved, but in practice I have found that the best results come from gradual adoption. Start with the most common animations, such as fade or slide. These are easy wins that show immediate value without requiring big rewrites.
  • Naming is another point that deserves attention. A consistent prefix or namespace makes it obvious which animations are tokens and which are local one-offs. It also prevents accidental collisions and helps new team members recognize the shared system at a glance.
  • Documentation is just as important as the code itself. Even a short comment above each keyframes token can save hours of guessing later. A developer should be able to open the tokens file, scan for the effect they need, and copy the usage pattern straight into their component.
  • Flexibility is what makes this approach worth the effort. By exposing sensible custom properties, we give teams room to adapt the animation without breaking the system. At the same time, try not to overcomplicate. Provide the knobs that matter and keep the rest opinionated.
  • Finally, remember accessibility. Not every animation needs a reduced motion alternative, but many do. Baking in these adjustments early means we never have to retrofit them later, and it shows a level of care that our users will notice even if they never mention it.

In my experience, treating keyframes tokens as part of our design tokens workflow is what makes them stick. Once they are in place, they stop feeling like special effects and become part of the design language, a natural extension of how the product moves and responds.

Wrapping Up

Animations can be one of the most joyful parts of building interfaces, but without structure, they can also become one of the biggest sources of frustration. By treating keyframes as tokens, you take something that is usually messy and hard to manage and turn it into a clear, predictable system.

The real value is not just in saving a few lines of code. It is in the confidence that when you use a fade, slide, zoom, or spin, you know exactly how it will behave across the project. It is in the flexibility that comes from custom properties without the chaos of endless variations. And it is in the accessibility built into the foundation rather than added as an afterthought.

I have seen these ideas work in different teams and different codebases, and the pattern is always the same.

Once the tokens are in place, keyframes stop being a scattered collection of tricks and become part of the design language. They make the product feel more intentional, more consistent, and more alive.

If you take one thing from this article, let it be this: animations deserve the same care and structure we already give to colors, typography, and spacing. A small investment in keyframes tokens pays off every time your interface moves.

]]>
hello@smashingmagazine.com (Amit Sheen)
<![CDATA[From Chaos To Clarity: Simplifying Server Management With AI And Automation]]> https://smashingmagazine.com/2025/11/simplifying-server-management-ai-automation/ https://smashingmagazine.com/2025/11/simplifying-server-management-ai-automation/ Tue, 18 Nov 2025 10:00:00 GMT This article is a sponsored by Cloudways

If you build or manage websites for a living, you know the feeling. Your day is a constant juggle; one moment you’re fine-tuning a design, the next you’re troubleshooting a slow server or a mysterious error. Daily management of a complex web of plugins, integrations, and performance tools often feels like you’re just reacting to problems—putting out fires instead of building something new.

This reactive cycle is exhausting, and it pulls your focus away from meaningful work and into the technical weeds. A recent industry event, Cloudways Prepathon 2025, put a sharp focus on this very challenge. The discussions made it clear: the future of web work demands a better way. It requires an infrastructure that’s ready for AI; one that can actively help you turn this daily chaos into clarity.

The stakes for performance are higher than ever.

Suhaib Zaheer, SVP of Managed Hosting at DigitalOcean, and Ali Ahmed Khan, Sr. Director of Product Management, shared a telling statistic during their panel: 53% of mobile visitors will leave a site if it takes more than three seconds to load.

Think about that for a second: in barely three seconds, more than half of your potential mobile traffic is gone. This isn’t just about a slow website, but about lost trust, abandoned carts, and missed opportunities. Performance is no longer just a feature; it’s the foundation of user experience. And in today’s landscape, automation is the key to maintaining it consistently.

So how do we stop reacting and start preventing?

The Old Way: A Constant State Of Alert

For too long, server management has worked like this: something breaks, you receive an alert (or worse, a client complaint), and you start digging. You log into your server, check logs, try to correlate different metrics, and eventually (hopefully) find the root cause. Then you manually apply a fix.

This process is fragile and relies on your constant attention while eating up hours that could be spent on development, strategy, or client work. For freelancers and small teams, this time is your most valuable asset. Every minute spent manually diagnosing a disk space issue or a web stack failure is a minute not spent on growing your business.

The problem isn't a lack of tools. It's that most tools just show you the data; they don't help you understand it or act on it. They add to the noise instead of providing clarity.

A New Approach: From Diagnosis To Automatic Resolution

This is where a shift towards intelligent automation changes the game. Tools like Cloudways Copilot, which became generally available earlier this year, are built specifically to simplify this workflow. The goal is straightforward: combine AI-driven diagnostics with automated fixes to predict and resolve performance issues before they affect your users.

Here’s a practical look at how it works.

Imagine your site starts running slowly. In the past, you'd begin the tedious investigation.

1. The AI Insights

Instead of a generic "high CPU" alert, you get a detailed insight. It tells you what happened (e.g., "MySQL process is consuming excessive resources"), why it happened (e.g., "caused by a poorly optimized query from a recent plugin update"), and provides a step-by-step guide to fix it manually. This alone cuts diagnosis time from 30-40 minutes down to about five. You understand the problem, not just the diagnosis.

2. The SmartFix

This is where it moves from helpful to transformative. For common issues, you don’t just get a manual guide. You get a one-click SmartFix button. After reviewing the actions Copilot will take, you can let it automatically resolve the issue. It applies the necessary steps safely and without you needing to touch a command line. This is the clarity we’re talking about. The system doesn’t just tell you about the problem; it solves it for you.

For developers managing multiple sites, this is a fundamental change. It means you can handle routine server issues at scale. A disk cleanup that would have required logging into ten different servers can now be handled with a few clicks. It frees your brain from repetitive troubleshooting and lets you focus on the work that actually requires your expertise.

Building An AI-Ready Foundation

The principles discussed at Prepathon go beyond any single tool. The theme was about building a resilient foundation. Meeky Hwang, CEO at Ndevr, introduced the "3E Framework," which perfectly applies here. A strong platform must balance:

  • Audience Experience
    What your visitors see and feel—blazing speed and seamless operation.
  • Creator Experience
    The workflow for you and your team—managing content and marketing without technical friction.
  • Developer Experience
    The backend foundation—server management that is secure, stable, and efficient.

AI-driven server management directly strengthens all three. A faster, more stable server improves the Audience Experience. Fewer emergencies and simpler workflows improve the Creator and Developer Experience. When these are aligned, you can scale with confidence.

This Isn’t About Replacing You

It’s important to be clear. This isn’t about replacing the developer but about augmenting your capabilities. As Vito Peleg, Co-founder & CEO at Atarim, noted during Prepathon:

“We're all becoming prompt engineers in the modern world. Our job is no longer to do the task, but to orchestrate the fleet of AI agents that can do it at a scale we never could alone.”

— Vito Peleg, Co-founder & CEO at Atarim

Think of Cloudways Copilot as an expert sysadmin on your team. It handles the routine, often tedious, work. It alerts you to what’s important and provides clear, actionable context. This gives you back the mental space and time to focus on architecture, innovation, and client strategy.

“The challenge isn’t managing servers anymore — it’s managing focus,”

Suhaib Zaheer noted.

“AI-driven infrastructure should help developers spend less time reacting to issues and more time creating better digital experiences.”

A Practical Path Forward

For freelancers, WordPress experts, and small agency developers, this shift offers a tangible way to:

  • Drastically reduce the hours spent manually troubleshooting infrastructure issues.
  • Implement predictive monitoring that catches slowdowns and bottlenecks early.
  • Manage your entire stack through clear, plain-English AI insights instead of raw data.
  • Balance speed, security, and uptime without needing an enterprise-scale budget or team.

The goal is to make powerful infrastructure simple, while also giving you back control and your time so you can focus on what you do best: creating exceptional web experiences.

You can use promo code BFCM5050 to get 50% off for 3 months plus 50 Free Migrations using Cloudways. This offer is valid from November 18th to December 4th, 2025.

]]>
hello@smashingmagazine.com (Mansoor Ahmed Khan)
<![CDATA[CSS Gamepad API Visual Debugging With CSS Layers]]> https://smashingmagazine.com/2025/11/css-gamepad-api-visual-debugging-css-layers/ https://smashingmagazine.com/2025/11/css-gamepad-api-visual-debugging-css-layers/ Fri, 14 Nov 2025 13:00:00 GMT When you plug in a controller, you mash buttons, move the sticks, pull the triggers… and as a developer, you see none of it. The browser’s picking it up, sure, but unless you’re logging numbers in the console, it’s invisible. That’s the headache with the Gamepad API.

It’s been around for years, and it’s actually pretty powerful. You can read buttons, sticks, triggers, the works. But most people don’t touch it. Why? Because there’s no feedback. No panel in developer tools. No clear way to know if the controller’s even doing what you think. It feels like flying blind.

That bugged me enough to build a little tool: Gamepad Cascade Debugger. Instead of staring at console output, you get a live, interactive view of the controller. Press something and it reacts on the screen. And with CSS Cascade Layers, the styles stay organized, so it’s cleaner to debug.

In this post, I’ll show you why debugging controllers is such a pain, how CSS helps clean it up, and how you can build a reusable visual debugger for your own projects.

Even if you are able to log all of the gamepad’s button and axis values, you’ll quickly end up with unreadable console spam. For example:

[0,0,1,0,0,0.5,0,...]
[0,0,0,0,1,0,0,...]
[0,0,1,0,0,0,0,...]

Can you tell what button was pressed? Maybe, but only after straining your eyes and missing a few inputs. So, no, debugging doesn’t come easily when it comes to reading inputs.

Problem 3: Lack Of Structure

Even if you throw together a quick visualizer, styles can quickly get messy. Default, active, and debug states can overlap, and without a clear structure, your CSS becomes brittle and hard to extend.

CSS Cascade Layers can help. They group styles into “layers” that are ordered by priority, so you stop fighting specificity and guessing, “Why isn’t my debug style showing?” Instead, you maintain separate concerns:

  • Base: The controller’s standard, initial appearance.
  • Active: Highlights for pressed buttons and moved sticks.
  • Debug: Overlays for developers (e.g., numeric readouts, guides, and so on).

If we were to define layers in CSS according to this, we’d have:

/* lowest to highest priority */
@layer base, active, debug;

@layer base {
  /* ... */
}

@layer active {
  /* ... */
}

@layer debug {
  /* ... */
}

Because each layer stacks predictably, you always know which rules win. That predictability makes debugging not just easier, but actually manageable.

We’ve covered the problem (invisible, messy input) and the approach (a visual debugger built with Cascade Layers). Now we’ll walk through the step-by-step process to build the debugger.

The Debugger Concept

The easiest way to make hidden input visible is to just draw it on the screen. That’s what this debugger does. Buttons, triggers, and joysticks all get a visual.

  • Press A: A circle lights up.
  • Nudge the stick: The circle slides around.
  • Pull a trigger halfway: A bar fills halfway.

Now you’re not staring at 0s and 1s, but actually watching the controller react live.

Of course, once you start piling on states like default, pressed, debug info, maybe even a recording mode, the CSS starts getting larger and more complex. That’s where cascade layers come in handy. Here’s a stripped-down example:

@layer base {
  .button {
    background: #222;
    border-radius: 50%;
    width: 40px;
    height: 40px;
  }
}

@layer active {
  .button.pressed {
    background: #0f0; /* bright green */
  }
}

@layer debug {
  .button::after {
    content: attr(data-value);
    font-size: 12px;
    color: #fff;
  }
}

The layer order matters: base → active → debug.

  • base draws the controller.
  • active handles pressed states.
  • debug throws on overlays.

Breaking it up like this means you’re not fighting weird specificity wars. Each layer has its place, and you always know what wins.

Building It Out

Let’s get something on screen first. It doesn’t need to look good — just needs to exist so we have something to work with.

<h1>Gamepad Cascade Debugger</h1>

<!-- Main controller container -->
<div id="controller">
  <!-- Action buttons -->
  <div id="btn-a" class="button">A</div>
  <div id="btn-b" class="button">B</div>
  <div id="btn-x" class="button">X</div>

  <!-- Pause/menu button (represented as two bars) -->
  <div>
    <div id="pause1" class="pause"></div>
    <div id="pause2" class="pause"></div>
  </div>
</div>

<!-- Toggle button to start/stop the debugger -->
<button id="toggle">Toggle Debug</button>

<!-- Status display for showing which buttons are pressed -->
<div id="status">Debugger inactive</div>

<script src="script.js"></script>

That’s literally just boxes. Not exciting yet, but it gives us handles to grab later with CSS and JavaScript.

Okay, I’m using cascade layers here because it keeps stuff organized once you add more states. Here’s a rough pass:

/* ===================================
   CASCADE LAYERS SETUP
   Order matters: base → active → debug
   =================================== */

/* Define layer order upfront */
@layer base, active, debug;

/* Layer 1: Base styles - default appearance */
@layer base {
  .button {
    background: #333;
    border-radius: 50%;
    width: 70px;
    height: 70px;
    display: flex;
    justify-content: center;
    align-items: center;
  }

  .pause {
    width: 20px;
    height: 70px;
    background: #333;
    display: inline-block;
  }
}

/* Layer 2: Active states - handles pressed buttons */
@layer active {
  .button.active {
    background: #0f0; /* Bright green when pressed */
    transform: scale(1.1); /* Slightly enlarges the button */
  }

  .pause.active {
    background: #0f0;
    transform: scaleY(1.1); /* Stretches vertically when pressed */
  }
}

/* Layer 3: Debug overlays - developer info */
@layer debug {
  .button::after {
    content: attr(data-value); /* Shows the numeric value */
    font-size: 12px;
    color: #fff;
  }
}

The beauty of this approach is that each layer has a clear purpose. The base layer can never override active, and active can never override debug, regardless of specificity. This eliminates the CSS specificity wars that usually plague debugging tools.

Now it looks like some clusters are sitting on a dark background. Honestly, not too bad.

Adding the JavaScript

JavaScript time. This is where the controller actually does something. We’ll build this step by step.

Step 1: Set Up State Management

First, we need variables to track the debugger’s state:

// ===================================
// STATE MANAGEMENT
// ===================================

let running = false; // Tracks whether the debugger is active
let rafId; // Stores the requestAnimationFrame ID for cancellation

These variables control the animation loop that continuously reads gamepad input.

Step 2: Grab DOM References

Next, we get references to all the HTML elements we’ll be updating:

// ===================================
// DOM ELEMENT REFERENCES
// ===================================

const btnA = document.getElementById("btn-a");
const btnB = document.getElementById("btn-b");
const btnX = document.getElementById("btn-x");
const pause1 = document.getElementById("pause1");
const pause2 = document.getElementById("pause2");
const status = document.getElementById("status");

Storing these references up front is more efficient than querying the DOM repeatedly.

Step 3: Add Keyboard Fallback

For testing without a physical controller, we’ll map keyboard keys to buttons:

// ===================================
// KEYBOARD FALLBACK (for testing without a controller)
// ===================================

const keyMap = {
  "a": btnA,
  "b": btnB,
  "x": btnX,
  "p": [pause1, pause2] // 'p' key controls both pause bars
};

This lets us test the UI by pressing keys on a keyboard.

Step 4: Create The Main Update Loop

Here’s where the magic happens. This function runs continuously and reads gamepad state:

// ===================================
// MAIN GAMEPAD UPDATE LOOP
// ===================================

function updateGamepad() {
  // Get all connected gamepads
  const gamepads = navigator.getGamepads();
  if (!gamepads) return;

  // Use the first connected gamepad
  const gp = gamepads[0];

  if (gp) {
    // Update button states by toggling the "active" class
    btnA.classList.toggle("active", gp.buttons[0].pressed);
    btnB.classList.toggle("active", gp.buttons[1].pressed);
    btnX.classList.toggle("active", gp.buttons[2].pressed);

    // Handle pause button (button index 9 on most controllers)
    const pausePressed = gp.buttons[9].pressed;
    pause1.classList.toggle("active", pausePressed);
    pause2.classList.toggle("active", pausePressed);

    // Build a list of currently pressed buttons for status display
    let pressed = [];
    gp.buttons.forEach((btn, i) => {
      if (btn.pressed) pressed.push("Button " + i);
    });

    // Update status text if any buttons are pressed
    if (pressed.length > 0) {
      status.textContent = "Pressed: " + pressed.join(", ");
    }
  }

  // Continue the loop if debugger is running
  if (running) {
    rafId = requestAnimationFrame(updateGamepad);
  }
}

The classList.toggle() method adds or removes the active class based on whether the button is pressed, which triggers our CSS layer styles.

Step 5: Handle Keyboard Events

These event listeners make the keyboard fallback work:

// ===================================
// KEYBOARD EVENT HANDLERS
// ===================================

document.addEventListener("keydown", (e) => {
  if (keyMap[e.key]) {
    // Handle single or multiple elements
    if (Array.isArray(keyMap[e.key])) {
      keyMap[e.key].forEach(el => el.classList.add("active"));
    } else {
      keyMap[e.key].classList.add("active");
    }
    status.textContent = "Key pressed: " + e.key.toUpperCase();
  }
});

document.addEventListener("keyup", (e) => {
  if (keyMap[e.key]) {
    // Remove active state when key is released
    if (Array.isArray(keyMap[e.key])) {
      keyMap[e.key].forEach(el => el.classList.remove("active"));
    } else {
      keyMap[e.key].classList.remove("active");
    }
    status.textContent = "Key released: " + e.key.toUpperCase();
  }
});

Step 6: Add Start/Stop Control

Finally, we need a way to toggle the debugger on and off:

// ===================================
// TOGGLE DEBUGGER ON/OFF
// ===================================

document.getElementById("toggle").addEventListener("click", () => {
  running = !running; // Flip the running state

  if (running) {
    status.textContent = "Debugger running...";
    updateGamepad(); // Start the update loop
  } else {
    status.textContent = "Debugger inactive";
    cancelAnimationFrame(rafId); // Stop the loop
  }
});

So yeah, press a button and it glows. Push the stick and it moves. That’s it.

One more thing: raw values. Sometimes you just want to see numbers, not lights.
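
One way to surface those numbers is to write them into the data-value attribute that the debug layer already reads via content: attr(data-value). A minimal sketch (these lines would sit inside updateGamepad(), right after the classList toggles):

// Expose the raw button values for the debug overlay
btnA.dataset.value = gp.buttons[0].value.toFixed(2);
btnB.dataset.value = gp.buttons[1].value.toFixed(2);
btnX.dataset.value = gp.buttons[2].value.toFixed(2);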

At this stage, you should see:

  • A simple on-screen controller,
  • Buttons that react as you interact with them, and
  • An optional debug readout showing pressed button indices.

To make this less abstract, here’s a quick demo of the on-screen controller reacting in real time:

Now, pressing Start Recording logs everything until you hit Stop Recording.
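
A minimal sketch of that recording loop (the frames array, recording flag, and "record" button id are assumptions here, chosen to match the data shape the export code below expects):

// ===================================
// RECORDING (sketch)
// ===================================

let frames = [];
let recording = false;
let recordStart = 0;

function recordFrame() {
  const gp = navigator.getGamepads()[0];
  if (gp) {
    frames.push({
      t: performance.now() - recordStart, // ms since recording started
      buttons: gp.buttons.map(b => ({ pressed: b.pressed, value: b.value })),
      axes: [...gp.axes]
    });
  }
  if (recording) requestAnimationFrame(recordFrame);
}

document.getElementById("record").addEventListener("click", () => {
  recording = !recording;
  if (recording) {
    frames = [];
    recordStart = performance.now();
    recordFrame();
  }
});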

2. Exporting Data to CSV/JSON

Once we have a log, we’ll want to save it.

<div class="controls">
  <button id="export-json" class="btn">Export JSON</button>
  <button id="export-csv" class="btn">Export CSV</button>
</div>

Step 1: Create The Download Helper

First, we need a helper function that handles file downloads in the browser:

// ===================================
// FILE DOWNLOAD HELPER
// ===================================

function downloadFile(filename, content, type = "text/plain") {
  // Create a blob from the content
  const blob = new Blob([content], { type });
  const url = URL.createObjectURL(blob);

  // Create a temporary download link and click it
  const a = document.createElement("a");
  a.href = url;
  a.download = filename;
  a.click();

  // Clean up the object URL after download
  setTimeout(() => URL.revokeObjectURL(url), 100);
}

This function works by creating a Blob (binary large object) from your data, generating a temporary URL for it, and programmatically clicking a download link. The cleanup ensures we don’t leak memory.

Step 2: Handle JSON Export

JSON is perfect for preserving the complete data structure:

// ===================================
// EXPORT AS JSON
// ===================================

document.getElementById("export-json").addEventListener("click", () => {
  // Check if there's anything to export
  if (!frames.length) {
    console.warn("No recording available to export.");
    return;
  }

  // Create a payload with metadata and frames
  const payload = {
    createdAt: new Date().toISOString(),
    frames
  };

  // Download as formatted JSON
  downloadFile(
    "gamepad-log.json", 
    JSON.stringify(payload, null, 2), 
    "application/json"
  );
});

The JSON format keeps everything structured and easily parseable, making it ideal for loading back into dev tools or sharing with teammates.

Step 3: Handle CSV Export

For CSV exports, we need to flatten the hierarchical data into rows and columns:

// ===================================
// EXPORT AS CSV
// ===================================

document.getElementById("export-csv").addEventListener("click", () => {
  // Check if there's anything to export
  if (!frames.length) {
    console.warn("No recording available to export.");
    return;
  }

  // Build CSV header row (columns for timestamp, all buttons, all axes)
  const headerButtons = frames[0].buttons.map((_, i) => `btn${i}`);
  const headerAxes = frames[0].axes.map((_, i) => `axis${i}`);
  const header = ["t", ...headerButtons, ...headerAxes].join(",") + "\n";

  // Build CSV data rows
  const rows = frames.map(f => {
    const btnVals = f.buttons.map(b => b.value);
    return [f.t, ...btnVals, ...f.axes].join(",");
  }).join("\n");

  // Download as CSV
  downloadFile("gamepad-log.csv", header + rows, "text/csv");
});

CSV is brilliant for data analysis because it opens directly in Excel or Google Sheets, letting you create charts, filter data, or spot patterns visually.

Now that the export buttons are in, you’ll see two new options on the panel: Export JSON and Export CSV. JSON is nice if you want to throw the raw log back into your dev tools or poke around the structure. CSV, on the other hand, opens straight into Excel or Google Sheets so you can chart, filter, or compare inputs. The following figure shows what the panel looks like with those extra controls.

3. Snapshot System

Sometimes you don’t need a full recording, just a quick “screenshot” of input states. That’s where a Take Snapshot button helps.

<div class="controls">
  <button id="snapshot" class="btn">Take Snapshot</button>
</div>

And the JavaScript:

// ===================================
// TAKE SNAPSHOT
// ===================================

document.getElementById("snapshot").addEventListener("click", () => {
  // Get all connected gamepads
  const pads = navigator.getGamepads();
  const activePads = [];

  // Loop through and capture the state of each connected gamepad
  for (const gp of pads) {
    if (!gp) continue; // Skip empty slots

    activePads.push({
      id: gp.id, // Controller name/model
      timestamp: performance.now(),
      buttons: gp.buttons.map(b => ({ 
        pressed: b.pressed, 
        value: b.value 
      })),
      axes: [...gp.axes]
    });
  }

  // Check if any gamepads were found
  if (!activePads.length) {
    console.warn("No gamepads connected for snapshot.");
    alert("No controller detected!");
    return;
  }

  // Log and notify user
  console.log("Snapshot:", activePads);
  alert(`Snapshot taken! Captured ${activePads.length} controller(s).`);
});

Snapshots freeze the exact state of your controller at one moment in time.

4. Ghost Input Replay

Now for the fun one: ghost input replay. This takes a log and plays it back visually as if a phantom player was using the controller.

<div class="controls">
  <button id="replay" class="btn">Replay Last Recording</button>
</div>

JavaScript for replay:

// ===================================
// GHOST REPLAY
// ===================================

document.getElementById("replay").addEventListener("click", () => {
  // Ensure we have a recording to replay
  if (!frames.length) {
    alert("No recording to replay!");
    return;
  }

  console.log("Starting ghost replay...");

  // Track timing for synced playback
  let startTime = performance.now();
  let frameIndex = 0;

  // Replay animation loop
  function step() {
    const now = performance.now();
    const elapsed = now - startTime;

    // Process all frames that should have occurred by now
    while (frameIndex < frames.length && frames[frameIndex].t <= elapsed) {
      const frame = frames[frameIndex];

      // Update UI with the recorded button states
      btnA.classList.toggle("active", frame.buttons[0].pressed);
      btnB.classList.toggle("active", frame.buttons[1].pressed);
      btnX.classList.toggle("active", frame.buttons[2].pressed);

      // Update status display
      let pressed = [];
      frame.buttons.forEach((btn, i) => {
        if (btn.pressed) pressed.push("Button " + i);
      });
      if (pressed.length > 0) {
        status.textContent = "Ghost: " + pressed.join(", ");
      }

      frameIndex++;
    }

    // Continue loop if there are more frames
    if (frameIndex < frames.length) {
      requestAnimationFrame(step);
    } else {
      console.log("Replay finished.");
      status.textContent = "Replay complete";
    }
  }

  // Start the replay
  step();
});

To make debugging a bit more hands-on, I added a ghost replay. Once you’ve recorded a session, you can hit replay and watch the UI act it out, almost like a phantom player is running the pad. A new Replay Ghost button shows up in the panel for this.

Hit Record, mess around with the controller a bit, stop, then replay. The UI just echoes everything you did, like a ghost following your inputs.

Why bother with these extras?

  • Recording/export makes it easy for testers to show exactly what happened.
  • Snapshots freeze a moment in time, super useful when you’re chasing odd bugs.
  • Ghost replay is great for tutorials, accessibility checks, or just comparing control setups side by side.

At this point, it’s not just a neat demo anymore, but something you could actually put to work.

Real-World Use Cases

Now we’ve got this debugger that can do a lot. It shows live input, records logs, exports them, and even replays stuff. But the real question is: who actually cares? Who’s this useful for?

Game Developers

Controllers are part of the job, but debugging them? Usually a pain. Imagine you’re testing a fighting game combo, like ↓ → + punch. Instead of praying that you pressed it the same way twice, you record it once and replay it. Done. Or you swap JSON logs with a teammate to check if your multiplayer code reacts the same on their machine. That’s huge.

Accessibility Practitioners

This one’s close to my heart. Not everyone plays with a “standard” controller. Adaptive controllers throw out weird signals sometimes. With this tool, you can see exactly what’s happening. Teachers, researchers, whoever. They can grab logs, compare them, or replay inputs side-by-side. Suddenly, invisible stuff becomes obvious.

Quality Assurance Testing

Testers usually write notes like “I mashed buttons here and it broke.” Not very helpful. Now? They can capture the exact presses, export the log, and send it off. No guessing.

Educators

If you’re making tutorials or YouTube vids, ghost replay is gold. You can literally say, “Here’s what I did with the controller,” while the UI shows it happening. Makes explanations way clearer.

Beyond Games

And yeah, this isn’t just about games. People have used controllers for robots, art projects, and accessibility interfaces. Same issue every time: what is the browser actually seeing? With this, you don’t have to guess.

Conclusion

Debugging a controller input has always felt like flying blind. Unlike the DOM or CSS, there’s no built-in inspector for gamepads; it’s just raw numbers in the console, easily lost in the noise.

With a few hundred lines of HTML, CSS, and JavaScript, we built something different:

  • A visual debugger that makes invisible inputs visible.
  • A layered CSS system that keeps the UI clean and debuggable.
  • A set of enhancements (recording, exporting, snapshots, ghost replay) that elevate it from demo to developer tool.

This project shows how far you can go by mixing the Web Platform’s power with a little creativity in CSS Cascade Layers.

The tool I just explained in its entirety is open-source. You can clone the GitHub repo and try it for yourself.

But more importantly, you can make it your own. Add your own layers. Build your own replay logic. Integrate it with your game prototype. Or even use it in ways I haven’t imagined. For teaching, accessibility, or data analysis.

At the end of the day, this isn’t just about debugging gamepads. It’s about shining a light on hidden inputs, and giving developers the confidence to work with hardware that the web still doesn’t fully embrace.

So, plug in your controller, open up your editor, and start experimenting. You might be surprised at what your browser and your CSS can truly accomplish.

]]>
hello@smashingmagazine.com (Godstime Aburu)
<![CDATA[Older Tech In The Browser Stack]]> https://smashingmagazine.com/2025/11/older-tech-browser-stack/ https://smashingmagazine.com/2025/11/older-tech-browser-stack/ Thu, 13 Nov 2025 08:00:00 GMT I’ve been in front-end development long enough to see a trend over the years: younger developers working with a new paradigm of programming without understanding the historical context of it.

It is, of course, perfectly understandable to not know something. The web is a very big place with a diverse set of skills and specialties, and we don’t always know what we don’t know. Learning in this field is an ongoing journey rather than something that happens once and ends.

Case in point: Someone on my team asked if it was possible to tell when users navigate away from a particular tab in the UI. I pointed out JavaScript’s beforeunload event. Those who have tackled this before know about it because they’ve been hit with alerts about unsaved data on other sites, which is a typical beforeunload use case. I also pointed out the pagehide and visibilitychange events to my colleague for good measure.
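
For reference, the classic use case looks roughly like this (hasUnsavedChanges is an assumed app-specific flag):

// Warn the user before they leave with unsaved changes
window.addEventListener("beforeunload", (event) => {
  if (hasUnsavedChanges) {
    event.preventDefault();
    // Some browsers still require returnValue to be set for the prompt to appear
    event.returnValue = "";
  }
});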

How did I know about that? Because it came up in another project, not because I studied up on it when initially learning JavaScript.

The fact is that modern front-end frameworks are standing on the shoulders of the technology giants that preceded them. They abstract development practices, often for a better developer experience that reduces, or even eliminates, the need to know or touch what have traditionally been essential front-end concepts everyone probably ought to know.

Consider the CSS Object Model (CSSOM). You might expect that anyone working in CSS and JavaScript has a bunch of hands-on CSSOM experience, but that’s not always going to be the case.

There was a React project for an e-commerce site I worked on where we needed to load a stylesheet for the currently selected payment provider. The problem was that the stylesheet was loading on every page when it was only really needed on a specific page. The developer tasked with making this happen hadn’t ever loaded a stylesheet dynamically. Again, this is totally understandable when React abstracts away the traditional approach you might have reached for.
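
The traditional approach is only a few lines (the href below is a placeholder, not the project's actual path):

// Load a stylesheet on demand instead of on every page
const link = document.createElement("link");
link.rel = "stylesheet";
link.href = "/styles/payment-provider.css";
document.head.appendChild(link);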

The CSSOM is likely not something you need in your everyday work. But it is likely you will need to interact with it at some point, even in a one-off instance.

These experiences inspired me to write this article. There are many existing web features and technologies in the wild that you may never touch directly in your day-to-day work. Perhaps you’re fairly new to web development and are simply unaware of them because you’re steeped in the abstraction of a specific framework that doesn’t require you to know it deeply, or even at all.

I’m speaking specifically about XML, which many of us know is an ancient language not totally dissimilar from HTML.

I’m bringing this up because of recent WHATWG discussions suggesting that a significant chunk of the XML stack, XSLT, should be removed from browsers. This is exactly the sort of older, existing technology that has been in browsers for years and could be pressed into service for problems as practical as the CSSOM situation my team ran into.

Have you worked with XSLT before? Let’s see what happens when we lean heavily into this older technology and leverage its features outside the context of XML to tackle real-world problems today.

XPath: The Central API

The XML technology that is perhaps the most useful outside of a strictly XML context is XPath, a query language that allows you to find any node or attribute in a markup tree with a single root element. I have a personal affection for XSLT, but XSLT also relies on XPath, and personal affection must be put aside when ranking importance.

The argument for removing XSLT does not make any mention of XPath, so I suppose it is still allowed. That’s good because XPath is the central and most important API in this suite of technologies, especially when trying to find something to use outside normal XML usage. It is important because, while CSS selectors can be used to find most of the elements in your page, they cannot find them all. Furthermore, CSS selectors cannot be used to find an element based on its current position in the DOM.

XPath can.

Now, some of you reading this might know XPath, and some might not. XPath is a pretty big area of technology, and I can’t really teach all the basics and also show you cool things to do with it in a single article like this. I actually tried writing that article, but the average Smashing Magazine publication doesn’t go over 5,000 words. I was already at more than 2,000 words while only halfway through the basics.

So, I’m going to start doing cool stuff with XPath and give you some links that you can use for the basics if you find this stuff interesting.

Combining XPath & CSS

XPath can do lots of things that CSS selectors can’t when querying elements. But CSS selectors can also do a few things that XPath can’t, namely, query elements by class name.

CSS: .myClass
XPath: //*[contains(@class, "myClass")]

In this example, CSS queries elements that have the .myClass classname. Meanwhile, the XPath example queries elements whose class attribute contains the string “myClass” anywhere in its value. In other words, it matches elements with the .myClass classname, but also elements whose class attribute merely contains “myClass” as a substring, such as .myClass2. XPath is broader in that sense.

So, no. I’m not suggesting that we ought to toss out CSS and start selecting all elements via XPath. That’s not the point.

The point is that XPath can do things that CSS cannot and could still be very useful, even though it is an older technology in the browser stack and may not seem obvious at first glance.

Let’s use the two technologies together not only because we can, but because we’ll learn something about XPath in the process, making it another tool in your stack — one you might not have known has been there all along!

The problem is that JavaScript’s document.evaluate method (which runs XPath expressions) and the various query selector methods we use for CSS selectors return results in incompatible formats.
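
To see that incompatibility in practice, here’s a quick sketch of the two native APIs side by side (the selector and expression are arbitrary):

// CSS selectors: querySelectorAll returns a NodeList you can iterate directly.
const cssMatches = document.querySelectorAll("ul li");

// XPath: document.evaluate returns an XPathResult that has to be unpacked.
const xpathResult = document.evaluate(
  "//ul/li",                              // the XPath expression
  document,                               // the context node
  null,                                   // namespace resolver (not needed for HTML)
  XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, // how we want the result back
  null
);

const xpathMatches = [];
for (let i = 0; i < xpathResult.snapshotLength; i++) {
  xpathMatches.push(xpathResult.snapshotItem(i));
}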

I have made a compatible querying API to get us started, though admittedly, I have not put a lot of thought into it since it’s a departure from what we’re doing here. Here’s a fairly simple working example of a reusable query constructor:

See the Pen queryXPath [forked] by Bryan Rasmussen.

I’ve added two methods on the document object: queryCSSSelectors (which is essentially querySelectorAll) and queryXPaths. Both of these return a queryResults object:

{
  queryType: nodes | string | number | boolean,
  results: any[] // html elements, xml elements, strings, numbers, booleans,
  queryCSSSelectors: (query: string, amend: boolean) => queryResults,
  queryXpaths: (query: string, amend: boolean) => queryResults
}

The queryCSSSelectors and queryXpaths functions run the query you give them over the elements in the results array, as long as the results array is of type nodes, of course. Otherwise, it will return a queryResult with an empty array and a type of nodes. If the amend property is set to true, the functions will change their own queryResults.

Under no circumstances should this be used in a production environment. I am doing it this way purely to demonstrate the various effects of using the two query APIs together.

Example Queries

I want to show a few examples of different XPath queries that demonstrate some of the powerful things they can do and how they can be used in place of other approaches.

The first example is //li/text(). This queries all li elements and returns their text nodes. So, if we were to query the following HTML:

<ul>
  <li>one</li>
  <li>two</li>
  <li>three</li>
</ul>

…this is what is returned:

{"queryType":"xpathEvaluate","results":["one","two","three"],"resultType":"string"}

In other words, we get the following array: ["one","two","three"].

Normally, you would query for the li elements to get that, turn the result of that query into an array, map the array, and return the text node of each element. But we can do that more concisely with XPath:

document.queryXPaths("//li/text()").results.

Notice that the way to get a text node is to use text(), which looks like a function signature — and it is. It returns the text node of an element. In our example, there are three li elements in the markup, each containing text ("one", "two", and "three").

Let’s look at one more example of a text() query. Assume this is our markup:

<a href="/login.html">Sign In</a>

Let’s write a query that returns the href attribute value:

document.queryXPaths("//a[text() = 'Sign In']/@href").results.

This is an XPath query on the current document, just like the last example, but this time we return the href attribute of a link (a element) that contains the text “Sign In”. The actual returned result is ["/login.html"].

XPath Functions Overview

There are a number of XPath functions that you may be unfamiliar with. Several of them, I think, are worth knowing about, including the following:

  • starts-with
    Checks whether a string starts with another string. For example, starts-with(@href, 'http:') returns true if an href attribute starts with http:.
  • contains
    Checks whether a string contains another string. For example, contains(text(), "Smashing Magazine") returns true if a text node contains the words “Smashing Magazine” anywhere in it.
  • count
    Returns a count of how many matches there are to a query. For example, count(//*[starts-with(@href, 'http:')]) returns how many elements in the document have an href attribute whose value begins with http:.
  • substring
    Works like JavaScript’s substring, except you pass the string as an argument, positions start at 1, and the third argument is a length. For example, substring("my text", 2, 4) returns "y te".
  • substring-before
    Returns the part of a string before another string. For example, substring-before("my text", " ") returns "my". Similarly, substring-before("hi", "bye") returns an empty string.
  • substring-after
    Returns the part of a string after another string. For example, substring-after("my text", " ") returns "text". Similarly, substring-after("hi", "bye") returns an empty string.
  • normalize-space
    Returns the argument string with whitespace normalized by stripping leading and trailing whitespace and replacing sequences of whitespace characters with a single space.
  • not
    Returns boolean true if the argument is false, otherwise false.
  • true
    Returns boolean true.
  • false
    Returns boolean false.
  • concat
    The same thing as JavaScript’s concat, except you do not run it as a method on a string. Instead, you pass in all the strings you want to concatenate as arguments.
  • string-length
    The counterpart of JavaScript’s length property on strings; it returns the length of the string it is given as an argument.
  • translate
    Takes a string and replaces each character listed in the second argument with the corresponding character in the third argument. For example, translate("abcdef", "abc", "XYZ") outputs XYZdef.

Aside from these particular XPath functions, there are a number of other functions that work just the same as their JavaScript counterparts — or counterparts in basically any programming language — that you would probably also find useful, such as floor, ceiling, round, sum, and so on.
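
If you want to call one of these functions directly from JavaScript without any helper, document.evaluate can hand back primitive results; here’s a small sketch:

// A string result, straight from an XPath function call:
const shouted = document.evaluate(
  "translate('hello', 'abcdefghijklmnopqrstuvwxyz', 'ABCDEFGHIJKLMNOPQRSTUVWXYZ')",
  document, null, XPathResult.STRING_TYPE, null
).stringValue; // "HELLO"

// A number result, counting list items in the current document:
const itemCount = document.evaluate(
  "count(//li)",
  document, null, XPathResult.NUMBER_TYPE, null
).numberValue;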

The following demo illustrates each of these functions:

See the Pen XPath Numerical functions [forked] by Bryan Rasmussen.

Note that, like most of the string manipulation functions, many of the numerical ones take a single input. This is, of course, because they are supposed to be used for querying, as in the last XPath example:

//li[floor(text()) > 250]/@val

If you call them directly on a path, as most of the demo examples do, the function ends up running on the first node that matches the path.

There are also some type conversion functions you should probably avoid because JavaScript already has its own type conversion problems. But there can be times when you want to convert a string to a number in order to check it against some other number.

The functions that convert a value to a given type are boolean, number, and string, and the important XPath datatypes are boolean, number, string, and node-set.

And as you might imagine, most of these functions can be used on datatypes that are not DOM nodes. For example, substring-after takes a string as we’ve already covered, but it could be the string from an href attribute. It can also just be a string:

const testSubstringAfter = document.queryXPaths("substring-after('hello world',' ')");

Obviously, this example will give us back the results array as ["world"]. To show this in action, I have made a demo page using functions against things that are not DOM nodes:

See the Pen queryXPath [forked] by Bryan Rasmussen.

You should note the surprising aspect of the translate function, which is that if you have a character in the second argument (i.e., the list of characters you want translated) and no matching character to translate to, that character gets removed from the output.

Thus, this:

translate('Hello, My Name is Inigo Montoya, you killed my father, prepare to die','abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ,','*')

…results in the string, including spaces:

[" * *  ** "]

This means that the letter “a” is being translated to an asterisk (*), but every other character that does not have a translation given the target string is completely removed. The whitespace is all we have left between the translated “a” characters.

Then again, this query:

translate('Hello, My Name is Inigo Montoya, you killed my father, prepare to die','abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ,','**************************************************')

…does not have the problem and outputs a result that looks like this:

"***** ** **** ** ***** ******* *** ****** ** ****** ******* ** ***"

It might strike you that there is no easy way in JavaScript to do exactly what the XPath translate function does, although for many use cases, replaceAll with regular expressions can handle it.

You could use the same approach I have demonstrated, but that is suboptimal if all you want is to translate the strings. The following demo wraps XPath’s translate function to provide a JavaScript version:

See the Pen translate function [forked] by Bryan Rasmussen.
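
A minimal version of such a wrapper (my own sketch, not the exact code behind the demo) could look like this:

// Expose XPath's translate() as an ordinary JavaScript function.
// Note: this naive version assumes the inputs contain no quote characters.
const translate = (input, from, to) =>
  document.evaluate(
    `translate("${input}", "${from}", "${to}")`,
    document,
    null,
    XPathResult.STRING_TYPE,
    null
  ).stringValue;

translate("abcdef", "abc", "XYZ"); // "XYZdef"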

Where might you use something like this? Consider Caesar Cipher encryption with a three-place offset (e.g., top-of-the-line encryption from 48 B.C.):

translate("Caesar is planning to cross the Rubicon!", 
 "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz",
  "XYZABCDEFGHIJKLMNOPQRSTUVWxyzabcdefghijklmnopqrstuvw")

The input text “Caesar is planning to cross the Rubicon!” results in “Zxbpxo fp mixkkfkd ql zolpp qeb Oryfzlk!”

To give another quick example of the possibilities, I made a metal function that takes a string input and uses the translate function to return the text with umlauts added to every vowel that can take one.

See the Pen metal function [forked] by Bryan Rasmussen.

const metal = (str) => {
  return translate(str, "AOUaou","ÄÖÜäöü");
}

And, if given the text “Motley Crue rules, rock on dudes!”, returns “Mötley Crüe rüles, röck ön düdes!”

Obviously, one might have all sorts of parody uses of this function. If that’s you, then this TVTropes article ought to provide you with plenty of inspiration.

Using CSS With XPath

Remember our main reason for using CSS selectors together with XPath: CSS pretty much understands what a class is, whereas the best you can do with XPath is string comparisons of the class attribute. That will work in most cases.

But if you were to ever run into a situation where, say, someone created classes named .primaryLinks and .primaryLinks2 and you were using XPath to get the .primaryLinks class, then you would likely run into problems. As long as there’s nothing silly like that, you would probably use XPath. But I am sad to report that I have worked at places where people do those types of silly things.

Here’s another demo using CSS and XPath together. It shows what happens when we use the code to run an XPath on a context node that is not the document’s node.

See the Pen css and xpath together [forked] by Bryan Rasmussen.

The CSS query is .relatedarticles a, which fetches the two a elements in a div assigned a .relatedarticles class.

After that are three “bad” queries, that is to say, queries that do not do what we want them to do when running with these elements as the context node.

I can explain why they are behaving differently than you might expect. The three bad queries in question are:

  • //text(): Returns all the text in the document.
  • //a/text(): Returns all the text inside of links in the document.
  • ./a/text(): Returns no results.

The reason for these results is that while your context is the a elements returned from the CSS query, // always queries against the whole document. This is the strength of XPath; CSS cannot go from a node up to an ancestor and then to a sibling of that ancestor, and walk down to a descendant of that sibling. But XPath can.

Meanwhile, ./ queries the children of the current node, where the dot (.) represents the current node, and the forward slash (/) represents going to some child node — whether it is an attribute, element, or text is determined by the next part of the path. But the a elements selected by the CSS query have no a children of their own, so that query also returns nothing.

There are three good queries in that last demo:

  • .//text(),
  • ./text(),
  • normalize-space(./text()).

The normalize-space query demonstrates XPath function usage, but also fixes a problem included in the other queries. The HTML is structured like this:

<a href="https://www.smashingmagazine.com/2018/04/feature-testing-selenium-webdriver/">
  Automating Your Feature Testing With Selenium WebDriver
</a>

The queried text node contains a line feed (and indentation) at its beginning and end, and normalize-space removes them.

Passing an XPath expression as the input works with other functions, too; any XPath function that returns something other than a boolean can take a path as its input. The following demo shows a number of examples:

See the Pen xpath functions examples [forked] by Bryan Rasmussen.

The first example shows a problem you should watch out for. Specifically, the following code:

document.queryXPaths("substring-after(//a/@href,'https://')");

…returns one string:

It makes sense, right? These functions do not return arrays but rather single strings or single numbers. Running the function against a path that matches multiple nodes only returns the result for the first match.

The second result shows what we really want:

document.queryCSSSelectors("a").queryXPaths("substring-after(./@href,'https://')");

Which returns an array of two strings:

XPath functions can be nested just like functions in JavaScript. So, if we know the Smashing Magazine URL structure, we could do the following (using template literals is recommended):

`translate(
  substring(
    substring-after(./@href, 'www.smashingmagazine.com/'),
    9
  ),
  '/', '')`

This is getting complex enough that it needs a comment describing what it does: take everything in the href attribute after www.smashingmagazine.com/, start reading from the ninth character to drop the date prefix, then translate the forward slash (/) character to nothing so as to remove the trailing forward slash.

The resulting array:

["feature-testing-selenium-webdriver","automated-test-results-improve-accessibility"]

More XPath Use Cases

XPath can really shine in testing. The reason is not difficult to see, as XPath can be used to get every element in the DOM, from any position in the DOM, whereas CSS cannot.

You cannot count on CSS classes remaining consistent in many modern build systems, but with XPath, we are able to make more robust matches as to what the text content of an element is, regardless of a changing DOM structure.

There has been research on techniques that allow you to make resilient XPath tests. Nothing is worse than having tests flake out and fail just because a CSS selector no longer works because something has been renamed or removed.

XPath is also really great at multiple locator extraction. There is more than one way to use XPath queries to match an element. The same is true with CSS. But XPath queries can drill into things in a more targeted way that limits what gets returned, allowing you to find a specific match where there may be several possible matches.

For example, we can use XPath to return a specific h2 element that is contained inside a div that is immediately followed by a sibling div that, in turn, contains a child image element with a data-testID="leader" attribute on it:

<div>
  <div>
    <h1>don't get this headline</h1>
  </div>

  <div>
    <h2>Don't get this headline either</h2>
  </div>

  <div>
    <h2>The header for the leader image</h2>
  </div>

  <div>
    <img data-testID="leader" src="image.jpg"/>
  </div>
</div>

This is the query:

document.queryXPaths(`
  //div[
    following-sibling::div[1]
    /img[@data-testID='leader']
  ]
  /h2/
  text()
`);

Let’s drop in a demo to see how that all comes together:

See the Pen Complex H2 Query [forked] by Bryan Rasmussen.

So, yes. There are lots of possible paths to any element in a test using XPath.

XSLT 1.0 Deprecation

I mentioned early on that the Chrome team plans on removing XSLT 1.0 support from the browser. That’s important because XSLT 1.0 is an XML-based programming language for document transformation that, in turn, relies on XPath 1.0, which is the version found in most browsers.

When that happens, we’ll lose a key part of the XML stack that relies on XPath. But given the fact that XPath is really great for writing tests, I find it unlikely that XPath as a whole will disappear anytime soon.

That said, I’ve noticed that people get interested in a feature when it’s taken away. And that’s certainly true in the case of XSLT 1.0 being deprecated. There’s an entire discussion happening over at Hacker News filled with arguments against the deprecation. The post itself is a great example of creating a blogging framework with XSLT. You can read the discussion for yourself, but it gets into how JavaScript might be used as a shim for XSLT to handle those sorts of cases.

I have also seen suggestions that browsers should use SaxonJS, which is a port of the Saxon XSLT, XQuery, and XPath engines to JavaScript. That’s an interesting idea, especially as SaxonJS implements the current versions of these specifications, whereas there is no browser that implements any version of XPath or XSLT beyond 1.0, and none that implements XQuery.

I reached out to Norm Tovey-Walsh at Saxonica, the company behind SaxonJS and other versions of the Saxon engine. He said:

“If any browser vendor was interested in taking SaxonJS as a starting point for integrating modern XML technologies into the browser, we’d be thrilled to discuss it with them.”

Norm Tovey-Walsh

But also added:

“I would be very surprised if anyone thought that taking SaxonJS in its current form and dropping it into the browser build unchanged would be the ideal approach. A browser vendor, by nature of the fact that they build the browser, could approach the integration at a much deeper level than we can ‘from the outside’.”

Norm Tovey-Walsh

It’s worth noting that Tovey-Walsh’s comments came about a week before the XSLT deprecation announcement.

Conclusion

I could go on and on. But I hope this has demonstrated the power of XPath and given you plenty of examples demonstrating how to use it for achieving great things. It’s a perfect example of older technology in the browser stack that still has plenty of utility today, even if you’ve never known it existed or never considered reaching for it.

Further Reading

  • “Enhancing the Resiliency of Automated Web Tests with Natural Language” (ACM Digital Library) by Maroun Ayli, Youssef Bakouny, Nader Jalloul, and Rima Kilany
    This article provides many XPath examples for writing resilient tests.
  • XPath (MDN)
    This is an excellent place to start if you want a technical explanation detailing how XPath works.
  • XPath Tutorial (ZVON)
    I’ve found this tutorial to be the most helpful in my own learning, thanks to a wealth of examples and clear explanations.
  • XPather
    This interactive tool lets you work directly with the code.
]]>
hello@smashingmagazine.com (Bryan Rasmussen)
<![CDATA[Effectively Monitoring Web Performance]]> https://smashingmagazine.com/2025/11/effectively-monitoring-web-performance/ https://smashingmagazine.com/2025/11/effectively-monitoring-web-performance/ Tue, 11 Nov 2025 10:00:00 GMT This article is sponsored by DebugBear

There’s no single way to measure website performance. That said, the Core Web Vitals metrics that Google uses as a ranking factor are a great starting point, as they cover different aspects of visitor experience:

  • Largest Contentful Paint (LCP): Measures the initial page load time.
  • Cumulative Layout Shift (CLS): Measures if content is stable after rendering.
  • Interaction to Next Paint (INP): Measures how quickly the page responds to user input.

There are also many other web performance metrics that you can use to track technical aspects, like page weight or server response time. While these often don’t matter directly to the end user, they provide you with insight into what’s slowing down your pages.

You can also use the User Timing API to track page load milestones that are important on your website specifically.
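
For example, a custom milestone can be recorded with a couple of calls (the mark names here are placeholders):

// Mark the start of a step that matters to your visitors.
performance.mark("checkout-start");

// ...and mark it again once the step has finished rendering.
performance.mark("checkout-rendered");

// Measure the duration between the two marks.
performance.measure("checkout-render-time", "checkout-start", "checkout-rendered");

// Read the measurement (or let your RUM snippet pick it up).
const [measure] = performance.getEntriesByName("checkout-render-time");
console.log(measure.duration);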

Synthetic And Real User Data

There are two different types of web performance data:

  • Synthetic tests are run in a controlled test environment.
  • Real user data is collected from actual website visitors.

Synthetic monitoring can provide super-detailed reports to help you identify page speed issues. You can configure exactly how you want to collect the data, picking a specific network speed, device size, or test location.

Get a hands-on feel for synthetic monitoring by using the free DebugBear website speed test to check on your website.

That said, your synthetic test settings might not match what’s typical for your real visitors, and you can’t script all of the possible ways that people might interact with your website.

That’s why you also need real user monitoring (RUM). Instead of looking at one experience, you see different load times and how specific visitor segments are impacted. You can review specific page views to identify what caused poor performance for a particular visitor.

At the same time, real user data isn’t quite as detailed as synthetic test reports, due to web API limitations and performance concerns.

DebugBear offers both synthetic monitoring and real user monitoring:

  • To set up synthetic tests, you just need to enter a website URL, and
  • To collect real user metrics, you need to install an analytics snippet on your website.

Three Steps To A Fast Website

Collecting data helps you throughout the lifecycle of your web performance optimizations. You can follow this three-step process:

  1. Identify: Collect data across your website and identify slow visitor experiences.
  2. Diagnose: Dive deep into technical analysis to find optimizations.
  3. Monitor: Check that optimizations are working and get alerted to performance regressions.

Let’s take a look at each step in detail.

Step 1: Identify Slow Visitor Experiences

What’s prompting you to look into website performance issues in the first place? You likely already have some specific issues in mind, whether that’s from customer reports or because of poor scores in the Core Web Vitals section of Google Search Console.

Real user data is the best place to check for slow pages. It tells you whether the technical issues on your site actually result in poor user experience. It’s easy to collect across your whole website (while synthetic tests need to be set up for each URL). And, you can often get a view count along with the performance metrics. A moderately slow page that gets two visitors a month isn’t as important as a moderately fast page that gets thousands of visits a day.

The Web Vitals dashboard in DebugBear’s RUM product checks your site’s performance health and surfaces the most-visited pages and URLs where many visitors have a poor experience.

You can also run a website scan to get a list of URLs from your sitemap and then check each of these pages against real user data from Google’s Chrome User Experience Report (CrUX). However, this will only work for pages that meet a minimum traffic threshold to be included in the CrUX dataset.

The scan result highlights pages with poor web vitals scores where you might want to investigate further.

If no real-user data is available, then there is a scanning tool called Unlighthouse, which is based on Google’s Lighthouse tool. It runs synthetic tests for each page, allowing you to filter through the results in order to identify pages that need to be optimized.

Step 2: Diagnose Web Performance Issues

Once you’ve identified slow pages on your website, you need to look at what’s actually happening on your page that is causing delays.

Debugging Page Load Time

If there are issues with page load time metrics — like the Largest Contentful Paint (LCP) — synthetic test results can provide a detailed analysis. You can also run page speed experiments to try out and measure the impact of certain optimizations.

Real user data can still be important when debugging page speed, as load time depends on many user- and device-specific factors. For example, depending on the size of the user’s device, the page element that’s responsible for the LCP can vary. RUM data can provide a breakdown of possible influencing factors, like CSS selectors and image URLs, across all visitors, helping you zero in on what exactly needs to be fixed.

Debugging Slow Interactions

RUM data is also generally needed to properly diagnose issues related to the Interaction to Next Paint (INP) metric. Specifically, real user data can provide insight into what causes slow interactions, which helps you answer questions like:

  • What page elements are responsible?
  • Is time spent processing already-active background tasks or handling the interaction itself?
  • What scripts contribute the most to overall CPU processing time?

You can view this data at a high level to identify trends, as well as review specific page views to see what impacted a specific visitor experience.

Step 3: Monitor Performance & Respond To Regressions

Continuous monitoring of your website performance lets you track whether the performance is improving after making a change, and alerts you when scores decline.

How you respond to performance regressions depends on whether you’re looking at lab-based synthetic tests or real user analytics.

Synthetic Data

Test settings for synthetic tests are standardized between runs. While infrastructure changes, like browser upgrades, occasionally cause shifts in the results, performance is more generally determined by the resources the website loads and the code it runs.

When a metric changes, DebugBear lets you view a before-and-after comparison between the two test results. For example, the next screenshot displays a regression in the First Contentful Paint (FCP) metric. The comparison reveals that new images were added to the page, competing for bandwidth with other page resources.

From the report, it’s clear that a CSS file that previously took 255 milliseconds to load now takes 915 milliseconds. Since stylesheets are required to render page content, this means the page now loads more slowly, giving you better insight into what needs optimization.

Real User Data

When you see a change in real user metrics, there can be two causes:

  1. A shift in visitor characteristics or behavior, or
  2. A technical change on your website.

Launching an ad campaign, for example, often increases redirects, reduces cache hits, and shifts visitor demographics. When you see a regression in RUM data, the first step is to find out if the change was on your website or in your visitor’s browser. Check for view count changes in ad campaigns, referrer domains, or network speed to get a clearer picture.

If those visits have different performance compared to your typical visitors, then that suggests the regression is not due to a change on your website. However, you may still need to make changes on your website to better serve these visitor cohorts and deliver a good experience for them.

To identify the cause of a technical change, take a look at component breakdown metrics, such as LCP subparts. This helps you narrow down the cause of a regression, whether it is due to changes in server response time, new render-blocking resources, or the LCP image.

You can also check for shifts in page view properties, like different LCP element selectors or specific scripts that cause poor performance.

Conclusion

One-off page speed tests are a great starting point for optimizing performance. However, a monitoring tool like DebugBear can form the basis for a more comprehensive web performance strategy that helps you stay fast for the long term.

Get a free DebugBear trial on our website!

]]>
hello@smashingmagazine.com (Matt Zeunert)
<![CDATA[Smashing Animations Part 6: Magnificent SVGs With `<use>` And CSS Custom Properties]]> https://smashingmagazine.com/2025/11/smashing-animations-part-6-svgs-css-custom-properties/ https://smashingmagazine.com/2025/11/smashing-animations-part-6-svgs-css-custom-properties/ Fri, 07 Nov 2025 15:00:00 GMT I explained recently how I use <symbol>, <use>, and CSS Media Queries to develop what I call adaptive SVGs. Symbols let us define an element once and then use it again and again, making SVG animations easier to maintain, more efficient, and lightweight.

Since I wrote that explanation, I’ve designed and implemented new Magnificent 7 animated graphics across my website. They play on the web design pioneer theme, featuring seven magnificent Old West characters.

<symbol> and <use> let me define a character design and reuse it across multiple SVGs and pages. First, I created my characters and put each into a <symbol> inside a hidden library SVG:

<!-- Symbols library -->
<svg xmlns="http://www.w3.org/2000/svg" style="display:none;">
 <symbol id="outlaw-1">[...]</symbol>
 <symbol id="outlaw-2">[...]</symbol>
 <symbol id="outlaw-3">[...]</symbol>
 <!-- etc. -->
</svg>

Then, I referenced those symbols in two other SVGs, one for large and the other for small screens:

<!-- Large screens -->
<svg xmlns="http://www.w3.org/2000/svg" id="svg-large">
 <use href="#outlaw-1" />
 <!-- ... -->
</svg>

<!-- Small screens -->
<svg xmlns="http://www.w3.org/2000/svg" id="svg-small">
 <use href="#outlaw-1" />
 <!-- ... -->
</svg>

Elegant. But then came the infuriating part. I could reuse the characters, but couldn’t animate or style them. I added CSS rules targeting elements within the symbols referenced by a <use>, but nothing happened. Colours stayed the same, and things that should move stayed static. It felt like I’d run into an invisible barrier, and I had.

Understanding The Shadow DOM Barrier

When you reference the contents of a symbol with <use>, a browser creates a copy of it in the Shadow DOM. Each <use> instance becomes its own encapsulated copy of the referenced <symbol>, meaning that CSS from outside can’t break through the barrier to style any elements directly. For example, in normal circumstances, this tapping class triggers a CSS animation:

<g class="outlaw-1-foot tapping">
 <!-- ... -->
</g>
.tapping {
  animation: tapping 1s ease-in-out infinite;
}

But when the same animation is applied to a <use> instance of that same foot, nothing happens:

<symbol id="outlaw-1">
 <g class="outlaw-1-foot"><!-- ... --></g>
</symbol>

<use href="#outlaw-1" class="tapping" />
.tapping {
  animation: tapping 1s ease-in-out infinite;
}

That’s because the <g> inside the <symbol> element is in a protected shadow tree, and the CSS Cascade stops dead at the <use> boundary. This behaviour can be frustrating, but it’s intentional as it ensures that reused symbol content stays consistent and predictable.

While learning how to develop adaptive SVGs, I found all kinds of attempts to work around this behaviour, but most of them sacrificed the reusability that makes SVG so elegant. I didn’t want to duplicate my characters just to make them blink at different times. I wanted a single <symbol> with instances that have their own timings and expressions.

CSS Custom Properties To The Rescue

While working on my pioneer animations, I learned that regular CSS values can’t cross the boundary into the Shadow DOM, but CSS Custom Properties can. And even though you can’t directly style elements inside a <symbol>, you can pass custom property values to them. So, when you insert custom properties into an inline style, a browser looks at the cascade, and those styles become available to elements inside the <symbol> being referenced.

I added rotate to an inline style applied to the <symbol> content:

<symbol id="outlaw-1">
  <g class="outlaw-1-foot" style="
    transform-origin: bottom right; 
    transform-box: fill-box; 
    transform: rotate(var(--foot-rotate));">
    <!-- ... -->
  </g>
</symbol>

Then, defined the foot tapping animation and applied it to the element:

@keyframes tapping {
  0%, 60%, 100% { --foot-rotate: 0deg; }
  20% { --foot-rotate: -5deg; }
  40% { --foot-rotate: 2deg; }
}

use[data-outlaw="1"] {
  --foot-rotate: 0deg;
  animation: tapping 1s ease-in-out infinite;
}

Passing Multiple Values To A Symbol

Once I’ve set up a symbol to use CSS Custom Properties, I can pass as many values as I want to any <use> instance. For example, I might define variables for fill, opacity, or transform. What’s elegant is that each <symbol> instance can then have its own set of values.

<g class="eyelids" style="
  fill: var(--eyelids-colour, #f7bea1);
  opacity: var(--eyelids-opacity, 1);
  transform: var(--eyelids-scale, 0);"
>
  <!-- etc. -->
</g>
use[data-outlaw="1"] {
  --eyelids-colour: #f7bea1; 
  --eyelids-opacity: 1;
}

use[data-outlaw="2"] {
  --eyelids-colour: #ba7e5e; 
  --eyelids-opacity: 0;
}

Support for passing CSS Custom Properties like this is solid, and every contemporary browser handles this behaviour correctly. Let me show you a few ways I’ve been using this technique, starting with a multi-coloured icon system.

A Multi-Coloured Icon System

When I need to maintain a set of icons, I can define an icon once inside a <symbol> and then use custom properties to apply colours and effects. Instead of needing to duplicate SVGs for every theme, each use can carry its own values.

For example, I applied an --icon-fill custom property for the default fill colour of the <path> in this Bluesky icon:

<symbol id="icon-bluesky">
  <path fill="var(--icon-fill, currentColor)" d="..." />
</symbol>

Then, whenever I need to vary how that icon looks — for example, in a <header> and <footer> — I can pass new fill colour values to each instance:

<header>
  <svg xmlns="http://www.w3.org/2000/svg">
    <use href="#icon-bluesky" style="--icon-fill: #2d373b;" />
  </svg>
</header>

<footer>
  <svg xmlns="http://www.w3.org/2000/svg">
    <use href="#icon-bluesky" style="--icon-fill: #590d1a;" />
  </svg>
</footer>

These icons are the same shape but look different thanks to their inline styles.

Data Visualisations With CSS Custom Properties

We can use <symbol> and <use> in plenty more practical ways. They’re also helpful for creating lightweight data visualisations, so imagine an infographic about three famous Wild West sheriffs: Wyatt Earp, Pat Garrett, and Bat Masterson.

Each sheriff’s profile uses the same set of three SVG symbols: one for a bar representing the length of a sheriff’s career, another to represent the number of arrests made, and one more for the number of kills. Passing custom property values to each <use> instance can vary the bar lengths, arrests scale, and kills colour without duplicating SVGs. I first created symbols for those items:

<svg xmlns="http://www.w3.org/2000/svg" style="display:none;">
  <symbol id="career-bar">
    <rect
      height="10"
      width="var(--career-length, 100)" 
      fill="var(--career-colour, #f7bea1)"
    />
  </symbol>

  <symbol id="arrests-badge">
    <path 
      fill="var(--arrest-color, #d0985f)" 
      transform="scale(var(--arrest-scale, 1))"
    />
  </symbol>

  <symbol id="kills-icon">
    <path fill="var(--kill-colour, #769099)" />
  </symbol>
</svg>

Each symbol accepts one or more values:

  • --career-length adjusts the width of the career bar.
  • --career-colour changes the fill colour of that bar.
  • --arrest-scale controls the arrest badge size.
  • --kill-colour defines the fill colour of the kill icon.

I can use these to develop a profile of each sheriff using <use> elements with different inline styles, starting with Wyatt Earp.

<svg xmlns="http://www.w3.org/2000/svg">
  <g id="wyatt-earp">
    <use href="#career-bar" style="--career-length: 400; --career-color: #769099;"/>
    <use href="#arrests-badge" style="--arrest-scale: 2;" />
    <!-- ... -->
    <use href="#arrests-badge" style="--arrest-scale: 2;" />
    <use href="#arrests-badge" style="--arrest-scale: 1;" />
    <use href="#kills-icon" style="--kill-color: #769099;" />
  </g>

  <g id="pat-garrett">
    <use href="#career-bar" style="--career-length: 300; --career-color: #f7bea1;"/>
    <use href="#arrests-badge" style="--arrest-scale: 2;" />
    <!-- ... -->
    <use href="#arrests-badge" style="--arrest-scale: 2;" />
    <use href="#arrests-badge" style="--arrest-scale: 1;" />
    <use href="#kills-icon" style="--kill-color: #f7bea1;" />
  </g>

  <g id="bat-masterson">
    <use href="#career-bar" style="--career-length: 200; --career-color: #c2d1d6;"/>
    <use href="#arrests-badge" style="--arrest-scale: 2;" />
    <!-- ... -->
    <use href="#arrests-badge" style="--arrest-scale: 2;" />
    <use href="#arrests-badge" style="--arrest-scale: 1;" />
    <use href="#kills-icon" style="--kill-color: #c2d1d6;" />
  </g>
</svg>

Each <use> shares the same symbol elements, but the inline variables change their colours and sizes. I can even animate those values to highlight their differences:

@keyframes pulse {
  0%, 100% { --arrest-scale: 1; }
  50% { --arrest-scale: 1.2; }
}

use[href="#arrests-badge"]:hover {
  animation: pulse 1s ease-in-out infinite;
}

CSS Custom Properties aren’t only helpful for styling; they can also channel data between HTML and SVG’s inner geometry, binding visual attributes like colour, length, and scale to semantics like arrest numbers, career length, and kills.
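
If the numbers come from data rather than being hand-written, a few lines of JavaScript can feed them into the same custom properties (the records below are invented for illustration):

// Hypothetical data; the values are illustrative only.
const sheriffs = [
  { id: "wyatt-earp", careerLength: 400, colour: "#769099" },
  { id: "pat-garrett", careerLength: 300, colour: "#f7bea1" },
];

// Bind each record to its career bar via CSS Custom Properties.
for (const sheriff of sheriffs) {
  const bar = document.querySelector(`#${sheriff.id} use[href="#career-bar"]`);
  bar.style.setProperty("--career-length", sheriff.careerLength);
  bar.style.setProperty("--career-colour", sheriff.colour);
}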

Ambient Animations

I started learning to animate elements within symbols while creating the animated graphics for my website’s Magnificent 7. To reduce complexity and make my code lighter and more maintainable, I needed to define each character once and reuse it across SVGs:

<!-- Symbols library -->
<svg xmlns="http://www.w3.org/2000/svg" style="display:none;">
  <symbol id="outlaw-1">[…]</symbol>
  <!-- ... -->
</svg>

<!-- Large screens -->
<svg xmlns="http://www.w3.org/2000/svg" id="svg-large">
  <use href="#outlaw-1" />
  <!-- ... -->
</svg>

<!-- Small screens -->
<svg xmlns="http://www.w3.org/2000/svg" id="svg-small">
  <use href="#outlaw-1" />
  <!-- ... -->
</svg>

But I didn’t want those characters to stay static; I needed subtle movements that would bring them to life. I wanted their eyes to blink, their feet to tap, and their moustache whiskers to twitch. So, to animate these details, I pass animation data to elements inside those symbols using CSS Custom Properties, starting with the blinking.

I implemented the blinking effect by placing an SVG group over the outlaws’ eyes and then changing its opacity.

To make this possible, I added an inline style with a CSS Custom Property to the group:

<symbol id="outlaw-1" viewBox="0 0 712 2552">
 <g class="eyelids" style="opacity: var(--eyelids-opacity, 1);">
    <!-- ... -->
  </g>
</symbol>

Then, I defined the blinking animation by changing --eyelids-opacity:

@keyframes blink {
  0%, 92% { --eyelids-opacity: 0; }
  93%, 94% { --eyelids-opacity: 1; }
  95%, 97% { --eyelids-opacity: 0.1; }
  98%, 100% { --eyelids-opacity: 0; }
}

…and applied it to every character:

use[data-outlaw] {
  --blink-duration: 4s;
  --eyelids-opacity: 1;
  animation: blink var(--blink-duration) infinite var(--blink-delay);
}

…so that each character wouldn’t blink at the same time, I set a different --blink-delay before they all start blinking, by passing another Custom Property:

use[data-outlaw="1"] { --blink-delay: 1s; }
use[data-outlaw="2"] { --blink-delay: 2s; }

use[data-outlaw="7"] { --blink-delay: 3s; }

Some of the characters tap their feet, so I added an inline style with a CSS Custom Property to those groups, too:

<symbol id="outlaw-1" viewBox="0 0 712 2552">
  <g class="outlaw-1-foot" style="
    transform-origin: bottom right; 
    transform-box: fill-box; 
    transform: rotate(var(--foot-rotate));">
  </g>
</symbol>

Defining the foot-tapping animation:

@keyframes tapping {
  0%, 60%, 100% { --foot-rotate: 0deg; }
  20% { --foot-rotate: -5deg; }
  40% { --foot-rotate: 2deg; }
}

And adding those extra Custom Properties to the characters’ declaration:

use[data-outlaw] {
  --blink-duration: 4s;
  --eyelids-opacity: 1;
  --foot-rotate: 0deg;
  animation: 
    blink var(--blink-duration) infinite var(--blink-delay),
    tapping 1s ease-in-out infinite;
}

…before finally making the character’s whiskers jiggle via an inline style with a CSS Custom Property which describes how his moustache transforms:

<symbol id="outlaw-1" viewBox="0 0 712 2552">
  <g class="outlaw-1-tashe" style="
    transform: translateX(var(--jiggle-x, 0px));"
  >
    <!-- ... -->
  </g>
</symbol>

Defining the jiggle animation:

@keyframes jiggle {
  0%, 100% { --jiggle-x: 0px; }
  20% { --jiggle-x: -3px; }
  40% { --jiggle-x: 2px; }
  60% { --jiggle-x: -1px; }
  80% { --jiggle-x: 4px; }
}

And adding those properties to the characters’ declaration:

use[data-outlaw] {
  --blink-duration: 4s;
  --eyelids-opacity: 1;
  --foot-rotate: 0deg;
  --jiggle-x: 0px;
  animation: 
    blink var(--blink-duration) infinite var(--blink-delay),
    jiggle 1s ease-in-out infinite,
    tapping 1s ease-in-out infinite;
}

With these moving parts, the characters come to life, but my markup remains remarkably lean. By combining several animations into a single declaration, I can choreograph their movements without adding more elements to my SVG. Every outlaw shares the same base <symbol>, and their individuality comes entirely from CSS Custom Properties.

Pitfalls And Solutions

Even though this technique might seem bulletproof, there are a few traps it’s best to avoid:

  • CSS Custom Properties only work if they’re referenced with a var() inside a <symbol>. Forget that, and you’ll wonder why nothing updates. Also, properties that aren’t naturally inherited, like fill or transform, need to use var() in their value to benefit from the cascade.
  • It’s always best to include a fallback value alongside a custom property, like opacity: var(--eyelids-opacity, 1); to ensure SVG elements render correctly even without custom property values applied.
  • Inline styles set via the style attribute take precedence, so if you mix inline and external CSS, remember that Custom Properties follow normal cascade rules.
  • You can always use DevTools to inspect custom property values. Select a <use> instance and check the Computed Styles panel to see which custom properties are active.

Conclusion

The <symbol> and <use> elements are among the most elegant but sometimes frustrating aspects of SVG. The Shadow DOM barrier makes animating them trickier, but CSS Custom Properties act as a bridge. They let you pass colour, motion, and personality across that invisible boundary, resulting in cleaner, lighter, and, best of all, fun animations.

]]>
hello@smashingmagazine.com (Andy Clarke)
<![CDATA[Six Key Components of UX Strategy]]> https://smashingmagazine.com/2025/11/practical-guide-ux-strategy/ https://smashingmagazine.com/2025/11/practical-guide-ux-strategy/ Wed, 05 Nov 2025 13:00:00 GMT For years, “UX strategy” felt like a confusing, ambiguous, and overloaded term to me. To me, it was some sort of a roadmap or a “grand vision”, with a few business decisions attached to it. And looking back now, I realize that I was wrong all along.

UX Strategy isn’t a goal; it’s a journey towards that goal. A journey connecting where UX is today with a desired future state of UX. And as such, it guides our actions and decisions, things we do and don’t do. And its goal is very simple: to maximize our chances of success while considering risks, bottlenecks and anything that might endanger the project.

Let’s explore the components of UX strategy, and how it works with product strategy and business strategy to deliver user value and meet business goals.

Strategy vs. Goals vs. Plans

When we speak about strategy, we often speak about planning and goals — but they are actually quite different. While strategy answers “what” we’re doing and “why”, planning is about “how” and “when” we’ll get it done. And the goal is merely a desired outcome of that entire journey.

  • Goals establish a desired future outcome,
  • That outcome typically represents a problem to solve,
  • Strategy shows a high-level solution for that problem,
  • Plan is a detailed set of low-level steps for getting the solution done.

A strong strategy requires making conscious, and oftentimes tough, decisions about what we will do — and just as importantly, what we will not do, and why.

Business Strategy

UX strategy doesn’t live in isolation. It must inform and support product strategy and be aligned with business strategy. All these terms are often slightly confusing and overloaded, so let's clear it up.

At the highest level, business strategy is about the distinct choices executives make to set the company apart from its competitors. They shape the company’s positioning, objectives, and (most importantly!) competitive advantage.

Typically, this advantage is achieved in two ways: through lower prices (cost leadership) or through differentiation. The latter part isn't about being different, but rather being perceived differently by the target audience. And that’s exactly where UX impact steps in.

In short, business strategy is:

  • A top-line vision, basis for core offers,
  • Shapes positioning, goals, competitive advantage,
  • Must always adapt to the market to keep a competitive advantage.

Product Strategy

Product strategy is how a high-level business direction is translated into a unique positioning of a product. It defines what the product is, who its users are, and how it will contribute to the business’s goals. It’s also how we bring a product to market, drive growth, and achieve product-market fit.

In short, product strategy is:

  • Unique positioning and value of a product,
  • How to establish and keep a product in the marketplace,
  • How to keep competitive advantage of the product.

UX Strategy

UX strategy is about shaping and delivering product value through UX. Good UX strategy always stems from UX research and answers to business needs. It established what to focus on, what our high-value actions are, how we’ll measure success, and — quite importantly — what risks we need to mitigate.

Most importantly, it’s not a fixed plan or a set of deliverables; it’s a guide that informs our actions, but also must be prepared to change when things change.

In short, UX strategy is:

  • How we shape and deliver product value through UX,
  • Priorities, focus + why, actions, metrics, risks,
  • Isn’t a roadmap, intention or deliverables.

Six Key Components of UX Strategy

The impact of good UX typically lives in differentiation mentioned above. Again, it’s not about how “different” our experience is, but the unique perceived value that users associate with it. And that value is a matter of a clear, frictionless, accessible, fast, and reliable experience wrapped into the product.

I always try to include 6 key components in any strategic UX work so we don’t end up following a wrong assumption that won’t bring any impact:

  1. Target goal
    The desired, improved future state of UX.
  2. User segments
    Primary users that we are considering.
  3. Priorities
    What we will and, crucially, what we will not do, and why.
  4. High-value actions
    How we drive value and meet user and business needs.
  5. Feasibility
    Realistic assessment of people, processes, and resources.
  6. Risks
    Bottlenecks, blockers, legacy constraints, big unknowns.

It’s worth noting that it’s always dangerous to be designing a product with everybody in mind. As Jaime Levy noted, by being very broad too early, we often reduce the impact of our design and messaging. It’s typically better to start with a specific, well-defined user segment and then expand, rather than the other way around.

Practical Example (by Alin Buda)

UX strategy doesn’t have to be a big 40-page long PDF report or a Keynote presentation. A while back, Alin Buda kindly left a comment on one of my LinkedIn posts, giving a great example of what a concise UX strategy could look like:

UX Strategy (for Q4)

Our UX strategy is to focus on high-friction workflows for expert users, not casual usability improvements. Why? Because retention in this space is driven by power-user efficiency, and that aligns with our growth model.

To succeed, we’ll design workflow accelerators and decision-support tools that will reduce time-on-task. As a part of it, we’ll need to redesign legacy flows in the Crux system. We won’t prioritize UI refinements or onboarding tours, because it doesn’t move the needle in this context.

What I like most about this example is just how concise and clear it is. Getting to this level of clarity takes quite a bit of time, but it creates a very precise overview of what we do, what we don't do, what we focus on, and how we drive value.

Wrapping Up

The best path to make a strong case with senior leadership is to frame your UX work as a direct contributor to differentiation. This isn’t just about making things look different; it’s about enhancing the perceived value.

A good strategy ties UX improvements to measurable business outcomes. It doesn’t speak about design patterns, consistency, or neatly organized components. Instead, it speaks the language of product and business strategy: OKRs, costs, revenue, business metrics, and objectives.

Design can’t succeed without a strategy. In the wise words of Sun Tzu, strategy without tactics is the slowest route to victory. And tactics without strategy are the noise before defeat.

Meet “How To Measure UX And Design Impact”

You can find more details on UX Strategy in 🪴 Measure UX & Design Impact (8h), a practical guide for designers and UX leads to measure and show your UX impact on business. Use the code 🎟 IMPACT to save 20% off today. Jump to the details.

Video + UX Training

$ 495.00 $ 799.00 Get Video + UX Training

25 video lessons (8h) + Live UX Training.
100 days money-back-guarantee.

Video only

$ 250.00$ 395.00
Get the video course

25 video lessons (8h). Updated yearly.
Also available as a UX Bundle with 2 video courses.

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[How To Leverage Component Variants In Penpot]]> https://smashingmagazine.com/2025/11/how-leverage-component-variants-penpot/ https://smashingmagazine.com/2025/11/how-leverage-component-variants-penpot/ Tue, 04 Nov 2025 10:00:00 GMT This article is sponsored by Penpot

Since Brad Frost popularized the use of design systems in digital design way back in 2013, they’ve become an invaluable resource for organizations — and even individuals — that want to craft reusable design patterns that look and feel consistent.

But Brad didn’t just popularize design systems; he also gave us a framework for structuring them, and while we don’t have to follow that framework exactly (most people adapt it to their needs), a particularly important part of most design systems is the variants, which are variations of components. Component variants let us design components that are fundamentally the same as other components, yet differ in controlled ways, so users understand them immediately while each variant still provides clarity for its own context.

This makes component variants just as important as the components themselves. They ensure that we aren’t creating too many components that have to be individually managed, even if they’re only mildly different from other components, and since component variants are grouped together, they also ensure organization and visual consistency.

And now we can use them in Penpot, the web-based, open-source design tool where design is expressed as code. In this article, you’ll learn about variants, their place in design systems, and how to use them effectively in Penpot.

Step 1: Get Your Design Tokens In Order

For the most part, what separates one variant from another is the design tokens that it uses. But what is a design token exactly?

Imagine a brand color, let’s say a color value equal to hsl(270 100 42) in CSS. We save it as a “design token” called color.brand.default so that we can reuse it more easily without having to remember the more cumbersome hsl(270 100 42).

From there, we might also create a second design token called background.button.primary.default and set it to color.brand.default, thereby making them equal to the same color, but with different names to establish semantic separation between the two. Referencing the value of one token from another like this is often called an “alias”.

This setup gives us the flexibility to change the value of the color document-wide, change the color used in the component (maybe by switching to a different token alias), or create a variant of the component that uses a different color. Ultimately, the goal is to be able to make changes in many places at once rather than one-by-one, mostly by editing the design token values rather than the design itself, at specific scopes rather than limiting ourselves to all-or-nothing changes. This also enables us to scale our design system without constraints.

With that in mind, here’s a rough idea of just a few color-related design tokens for a primary button with hover and disabled states:

Token name Token value
color.brand.default hsl(270 100 42)
color.brand.lighter hsl(270 100 52)
color.brand.lightest hsl(270 100 95)
color.brand.muted hsl(270 5 50)
background.button.primary.default {color.brand.default}
background.button.primary.hover {color.brand.lighter}
background.button.primary.disabled {color.brand.muted}
text.button.primary.default {color.brand.lightest}
text.button.primary.hover {color.brand.lightest}
text.button.primary.disabled {color.brand.lightest}

To create a color token in Penpot, switch to the “Tokens” tab in the left panel, click on the plus (+) icon next to “Color”, then specify the name, value, and optional description.

For example:

  • Name: color.brand.default,
  • Value: hsl(270 100 42) (there’s a color picker if you need it).

It’s pretty much the same process for other types of design tokens.

Don’t worry, I’m not going to walk you through every design token, but I will show you how to create a design token alias. Simply repeat the steps above, but for the value, notice how I’ve just referenced another color token (make sure to include the curly braces):

  • Name: background.button.primary.default,
  • Value: {color.brand.default}

Now, if the value of the color changes, so will the background of the buttons. But also, if we want to decouple the color from the buttons, all we need to do is reference a different color token or value. Mikołaj Dobrucki goes into a lot more detail in another Smashing article, but it’s worth noting here that Penpot design tokens are platform-agnostic. They follow the standardized W3C DTCG format, which means that they’re compatible with other tools and easily export to all platforms, including web, iOS, and Android.
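
To give you a feel for that format, here is a rough, illustrative sketch of how a couple of the tokens above could be expressed in DTCG JSON. The exact shape of Penpot’s export may differ, and the hex value is only an approximation of the HSL color used earlier:

{
  "color": {
    "brand": {
      "default": {
        "$type": "color",
        "$value": "#6b00d6",
        "$description": "Illustrative only: roughly the hex equivalent of hsl(270 100 42)."
      }
    }
  },
  "background": {
    "button": {
      "primary": {
        "default": {
          "$type": "color",
          "$value": "{color.brand.default}",
          "$description": "An alias that points at the raw brand color."
        }
      }
    }
  }
}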

In the next couple of steps, we’ll create a button component and its variants while plugging different design tokens into different variants. You’ll see why doing this is so useful and how using design tokens in variants benefits design systems overall.

Step 2: Create The Component

You’ll need to create what’s called a “main” component, which is the one that you’ll update as needed going forward. Other components — the ones that you’ll actually insert into your designs — will be copies (or “instances”) of the main component, which is sort of the point, right? Update once, and the changes reflect everywhere.

Here’s one I made earlier, minus the colors:

To apply a design token, make sure that you’re on the “Tokens” tab and have the relevant layer selected, then select the design token that you want to apply to it:

It doesn’t matter which variant you create first, but you’ll probably want to go with the default one as a starting point, as I’ve done. Either way, to turn this button into a main component, select the button object via the canvas (or “Layers” tab), right-click on it, then choose the “Create component” option from the context menu (or just press Ctrl / ⌘ + K after selecting it).

Remember to name the component as well. You can do that by double-clicking on the name (also via the canvas or “Layers” tab).

Step 3: Create The Component Variants

To create a variant, select the main component and either hit the Ctrl / ⌘ + K keyboard shortcut, or click on the icon that reveals the “Create variant” tooltip (located in the “Design” tab in the right panel).

Next, while the variant is still selected, make the necessary design changes via the “Design” tab. Or, if you want to swap design tokens out for other design tokens, you can do that in the same way that you applied them to begin with, via the “Tokens” tab. Rinse and repeat until you have all of your variants on the canvas designed:

After that, as you might’ve guessed, you’ll want to name your variants. But avoid doing this via the “Layers” panel. Instead, select a variant and replace “Property 1” with a label that describes the differentiating property of each variant. Since my button variants in this example represent different states of the same button, I’ve named this “State”. This applies to all of the variants, so you only need to do this once.

Next to the property name, you’ll see “Value 1” or something similar. Edit that for each variant, for example, the name of the state. In my case, I’ve named them “Default”, “Hover”, and “Disabled”.

And yes, you can add more properties to a component. To do this, click on the nearby plus (+) icon. I’ll talk more about component variants at scale in a minute, though.

To see the component in action, switch to the “Assets” tab (located in the left panel) and drag the component onto the canvas to initialize one instance of it. Again, remember to choose the correct property value from the “Design” tab:

If you already have a Penpot design system, combining multiple components into one component with variants is not only easy and error-proof, but you might be good to go already if you’re using a robust property naming system that uses forward slashes (/). Penpot has put together a very straightforward guide, but the diagram below sums it up pretty well:

How Component Variants Work At Scale

Design tokens, components, and component variants — the triple-threat of design systems — work together, not just to create powerful yet flexible design systems, but sustainable design systems that scale. This is easier to accomplish when thinking ahead, starting with design tokens that separate the “what” from the “what for” using token aliases, despite how verbose that might seem at first.

For example, I used color.brand.lightest for the text color of every variant, but instead of plugging that color token in directly, I created aliases such as text.button.primary.default. This means that I can change the text color of any variant later without having to dive into the actual variant on the canvas, or force a change to color.brand.lightest that might impact a bunch of other components.

Because remember, while the component and its variants give us reusability of the button, the color tokens give us reusability of the colors, which might be used in dozens, if not hundreds, of other components. A design system is like a living, breathing ecosystem: some parts are connected, some aren’t, and some that aren’t connected today may need to be later, and we need to be ready for that.

The good news is that Penpot makes all of this pretty easy to manage as long as you do a little planning beforehand.

Consider the following:

  • The design tokens that you’ll reuse (e.g., colors, font sizes, and so on),
  • Where design token aliases will be reused (e.g., buttons, headings, and so on),
  • Organizing the design tokens into sets,
  • Organizing the sets into themes,
  • Organizing the themes into groups,
  • The different components that you’ll need, and
  • The different variants and variant properties that you’ll need for each component.

Even the buttons that I designed here today can be scaled far beyond what I’ve already mocked up. Think of all the possible variants that might come up, such as a secondary button color, a tertiary color, a confirmation color, a warning color, a cancelled color, different colors for light and dark mode, not to mention more properties for more states, such as active and focus states. What if we want a whole matrix of variants, like where buttons in a disabled state can be hovered and where all buttons can be focused upon? Or where some buttons have icons instead of text labels, or both?

Designs can get very complicated, but once you’ve organized them into design tokens, components, and component variants in Penpot, they’ll actually feel quite simple, especially once you’re able to see them on the canvas, and even more so once you’ve made a significant change in just a few seconds without breaking anything.

Conclusion

This is how we make component variants work at scale. We get the benefits of reusability while keeping the flexibility to fork any aspect of our design system, big or small, without breaking out of it. And design tools like Penpot make it possible to not only establish a design system, but also express its design tokens and styles as code.

]]>
hello@smashingmagazine.com (Daniel Schwarz)
<![CDATA[Fading Light And Falling Leaves (November 2025 Wallpapers Edition)]]> https://smashingmagazine.com/2025/10/desktop-wallpaper-calendars-november-2025/ https://smashingmagazine.com/2025/10/desktop-wallpaper-calendars-november-2025/ Fri, 31 Oct 2025 12:00:00 GMT November can feel a bit gray in many parts of the world, so what better way to brighten the days than with a splash of colorful inspiration? For this month’s wallpapers edition, artists and designers from around the globe once again tickled their creativity and designed unique and inspiring wallpapers that are sure to bring some good vibes to your desktops and home screens.

As always, the wallpapers in this post come in a variety of screen resolutions and can be downloaded for free — just as it has been a monthly tradition here at Smashing Magazine for more than 14 years already. And since so many beautiful designs have seen the light of day since we first embarked on this monthly creativity adventure, we’ve also added a selection of oldies but goodies from our archives to the collection. Maybe one of your almost-forgotten favorites will catch your eye again this month?

A huge thank you to all the talented creatives who contributed their designs — this post wouldn’t be possible without your support! By the way, if you, too, would like to get featured in one of our upcoming wallpapers posts, please don’t hesitate to submit your design. We can’t wait to see what you’ll come up with! Happy November!

  • You can click on every image to see a larger preview.
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t influenced by us in any way but rather designed from scratch by the artists themselves.
Falling Into November

“Celebrate the heart of fall with cozy colors, crisp leaves, and the gentle warmth that only November brings.” — Designed by Libra Fire from Serbia.

Crown Me

Designed by Ricardo Gimenes from Spain.

Fireside Stories Under The Stars

“A cozy autumn evening comes alive as friends gather around a warm bonfire, sharing stories beneath a starry night sky. The glow of the fire contrasts beautifully with the cool, serene landscape, capturing the magic of friendship, warmth, and the quiet beauty of November nights.” — Designed by PopArt Studio from Serbia.

Lunchtime

Designed by Ricardo Gimenes from Spain.

Where Innovation Meets Design

“This artwork blends technology and creativity in a clean, modern aesthetic. Soft pastel tones and fluid shapes frame a central smartphone, symbolizing the fusion of innovation and human intelligence in mobile app development.” — Designed by Zco Corporation from the United States.

Colorful Autumn

“Autumn can be dreary, especially in November, when rain starts pouring every day. We wanted to summon better days, so that’s how this colourful November calendar was created. Open your umbrella and let’s roll!” — Designed by PopArt Studio from Serbia.

The Secret Cave

Designed by Ricardo Gimenes from Spain.

Sunset Or Sunrise

“November is autumn in all its splendor. Earthy colors, falling leaves, and afternoons in the warmth of the home. But it is also adventurous and exciting and why not, different. We sit in Bali contemplating Pura Ulun Danu Bratan. We don’t know if it’s sunset or dusk, but… does that really matter?” — Designed by Veronica Valenzuela Jimenez from Spain.

A Jelly November

“Been looking for a mysterious, gloomy, yet beautiful desktop wallpaper for this winter season? We’ve got you, as this month’s calendar marks Jellyfish Day. On November 3rd, we celebrate these unique, bewildering, and stunning marine animals. Besides adorning your screen, we’ve got you covered with some jellyfish fun facts: they aren’t really fish, they need very little oxygen, eat a broad diet, and shrink in size when food is scarce. Now that’s some tenacity to look up to.” — Designed by PopArt Studio from Serbia.

Winter Is Here

Designed by Ricardo Gimenes from Spain.

Moonlight Bats

“I designed some Halloween characters and then this idea came to my mind — a bat family hanging around in the moonlight. A cute and scary mood is just perfect for autumn.” — Designed by Carmen Eisendle from Germany.

Time To Give Thanks

Designed by Glynnis Owen from Australia.

Anbani

“Anbani means alphabet in Georgian. The letters that grow on that tree are the Georgian alphabet. It’s very unique!” — Designed by Vlad Gerasimov from Georgia.

Me And The Key Three

Designed by Bart Bonte from Belgium.

Outer Space

“We were inspired by nature around us and the universe above us, so we created an out-of-this-world calendar. Now, let us all stop for a second and contemplate on preserving our forests, let us send birds of passage off to warmer places, and let us think to ourselves — if not on Earth, could we find a home somewhere else in outer space?” — Designed by PopArt Studio from Serbia.

Captain’s Home

Designed by Elise Vanoorbeek from Belgium.

Deer Fall, I Love You

Designed by Maria Porter from the United States.

Holiday Season Is Approaching

Designed by ActiveCollab from the United States.

International Civil Aviation Day

“On December 7, we mark International Civil Aviation Day, celebrating those who prove day by day that the sky really is the limit. As the engine of global connectivity, civil aviation is now, more than ever, a symbol of social and economic progress and a vehicle of international understanding. This monthly calendar is our sign of gratitude to those who dedicate their lives to enabling everyone to reach their dreams.” — Designed by PopArt Studio from Serbia.

Peanut Butter Jelly Time

“November is the Peanut Butter Month so I decided to make a wallpaper around that. As everyone knows peanut butter goes really well with some jelly, so I made two sandwiches, one with peanut butter and one with jelly. Together they make the best combination.” — Designed by Senne Mommens from Belgium.

A Gentleman’s November

Designed by Cedric Bloem from Belgium.

Bug

Designed by Ricardo Gimenes from Spain.

Go To Japan

“November is the perfect month to go to Japan. Autumn is beautiful with its brown colors. Let’s enjoy it!” — Designed by Veronica Valenzuela from Spain.

The Kind Soul

“Kindness drives humanity. Be kind. Be humble. Be humane. Be the best of yourself!” — Designed by Color Mean Creative Studio from Dubai.

Mushroom Season

“It is autumn! It is raining and thus… it is mushroom season! It is the perfect moment to go to the forest and get the best mushrooms to do the best recipe.” — Designed by Verónica Valenzuela from Spain.

Tempestuous November

“By the end of autumn, ferocious Poseidon will part from tinted clouds and timid breeze. After this uneven clash, the sky once more becomes pellucid just in time for imminent luminous snow.” — Designed by Ana Masnikosa from Belgrade, Serbia.

Cozy Autumn Cups And Cute Pumpkins

“Autumn coziness, which is created by fallen leaves, pumpkins, and cups of cocoa, inspired our designers for this wallpaper.” — Designed by MasterBundles from Ukraine.

November Nights On Mountains

“Those chill November nights when you see mountain tops covered with the first snow sparkling in the moonlight.” — Designed by Jovana Djokic from Serbia.

Coco Chanel

“Beauty begins the moment you decide to be yourself. (Coco Chanel)” — Designed by Tazi from Australia.

Stars

“I don’t know anyone who hasn’t enjoyed a cold night looking at the stars.” — Designed by Ema Rede from Portugal.

Welcome Home Dear Winter

“The smell of winter is lingering in the air. The time to be home! Winter reminds us of good food, of the warmth, the touch of a friendly hand, and a talk beside the fire. Keep calm and let us welcome winter.” — Designed by Acodez IT Solutions from India.

Happy Birthday C.S.Lewis!

“It’s C.S. Lewis’s birthday on November 29th, so I decided to create this ‘Chronicles of Narnia’ inspired wallpaper to honour this day.” — Designed by Safia Begum from the United Kingdom.

Autumn Choir

Designed by Hatchers from Ukraine / China.

Star Wars

Designed by Ricardo Gimenes from Spain.

Hello World, Happy November

“I often read messages at Smashing Magazine from the people in the southern hemisphere ‘it’s spring, not autumn!’ so I wanted to design a wallpaper for the northern and the southern hemispheres. Here it is, northerners and southerners, hope you like it!” — Designed by Agnes Swart from the Netherlands.

Get Featured Next Month

Feeling inspired? We’ll publish the December wallpapers on November 30, so if you’d like to be a part of the collection, please don’t hesitate to submit your design. We are already looking forward to it!

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[JavaScript For Everyone: Iterators]]> https://smashingmagazine.com/2025/10/javascript-for-everyone-iterators/ https://smashingmagazine.com/2025/10/javascript-for-everyone-iterators/ Mon, 27 Oct 2025 13:00:00 GMT Hey, I’m Mat, but “Wilto” works too — I’m here to teach you JavaScript. Well, not here-here; technically, I’m over at Piccalil.li’s JavaScript for Everyone course to teach you JavaScript. The following is an excerpt from the Iterables and Iterators module: the lesson on Iterators.

Iterators are one of JavaScript’s more linguistically confusing topics, sailing easily over what is already a pretty high bar. There are iterables — array, Set, Map, and string — all of which follow the iterable protocol. To follow said protocol, an object must implement the iterable interface. In practice, that means that the object needs to include a [Symbol.iterator]() method somewhere in its prototype chain. Iterable protocol is one of two iteration protocols. The other iteration protocol is the iterator protocol.

See what I mean about this being linguistically fraught? Iterables implement the iterable iteration interface, and iterators implement the iterator iteration interface! If you can say that five times fast, then you’ve pretty much got the gist of it; easy-peasy, right?

No, listen, by the time you reach the end of this lesson, I promise it won’t be half as confusing as it might sound, especially with the context you’ll have from the lessons that precede it.

An iterable object follows the iterable protocol, which just means that the object has a conventional method for making iterators. The elements that it contains can be looped over with for...of.

An iterator object follows the iterator protocol, and the elements it contains can be accessed sequentially, one at a time.

To reiterate — a play on words for which I do not forgive myself, nor expect you to forgive me — an iterator object follows iterator protocol, and the elements it contains can be accessed sequentially, one at a time. Iterator protocol defines a standard way to produce a sequence of values, and optionally return a value once all possible values have been generated.

In order to follow the iterator protocol, an object has to — you guessed it — implement the iterator interface. In practice, that once again means that a certain method has to be available somewhere on the object's prototype chain. In this case, it’s the next() method that advances through the elements it contains, one at a time, and returns an object each time that method is called.

In order to meet the iterator interface criteria, the returned object must contain two properties with specific keys: one with the key value, representing the value of the current element, and one with the key done, a Boolean value that tells us if the iterator has advanced beyond the final element in the data structure. That’s not an awkward phrasing the editorial team let slip through: the value of that done property is true only when a call to next() results in an attempt to access an element beyond the final element in the iterator, not upon accessing the final element in the iterator. Again, a lot in print, but it’ll make more sense when you see it in action.
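
Before we get to the built-in iterators, here is a minimal, hand-rolled example (an illustrative sketch, not part of the lesson itself) showing just how little the protocol actually asks for:

// Illustrative sketch only: a hand-rolled iterator that counts up to a limit.
// The iterator protocol just asks for a next() method that returns
// an object with `value` and `done` properties.
function makeCounter( limit ) {
  let current = 0;

  return {
    next() {
      current += 1;

      if ( current <= limit ) {
        return { value: current, done: false };
      }

      return { value: undefined, done: true };
    }
  };
}

const counter = makeCounter( 2 );

counter.next();
// Result: Object { value: 1, done: false }

counter.next();
// Result: Object { value: 2, done: false }

counter.next();
// Result: Object { value: undefined, done: true }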

You’ve seen an example of a built-in iterator before, albeit briefly:

const theMap = new Map([ [ "aKey", "A value." ] ]);

console.log( theMap.keys() );
// Result: Map Iterator { constructor: Iterator() }

That’s right: while a Map object itself is an iterable, Map’s built-in methods keys(), values(), and entries() all return Iterator objects. You’ll also remember that I looped through those using forEach (a relatively recent addition to the language). Used that way, an iterator is indistinguishable from an iterable:

const theMap = new Map([ [ "key", "value " ] ]);

theMap.keys().forEach( thing => {
  console.log( thing );
});
// Result: key

All iterators are iterable; they all implement the iterable interface:

const theMap = new Map([ [ "key", "value " ] ]);

theMap.keys()[ Symbol.iterator ];
// Result: function Symbol.iterator()

And if you’re angry about the increasing blurriness of the line between iterators and iterables, wait until you get a load of this “top ten anime betrayals” video candidate: I’m going to demonstrate how to interact with an iterator by using an array.

“BOO,” you surely cry, having been so betrayed by one of your oldest and most indexed friends. “Array is an iterable, not an iterator!” You are both right to yell at me in general, and right about array in specific — an array is an iterable, not an iterator. In fact, while all iterators are iterable, none of the built-in iterables are iterators.

However, when you call that [ Symbol.iterator ]() method — the one that defines an object as an iterable — it returns an iterator object created from an iterable data structure:

const theIterable = [ true, false ];
const theIterator = theIterable[ Symbol.iterator ]();

theIterable;
// Result: Array [ true, false ]

theIterator;
// Result: Array Iterator { constructor: Iterator() }

The same goes for Set, Map, and — yes — even strings:

const theIterable = "A string."
const theIterator = theIterable[ Symbol.iterator ]();

theIterator;
// Result: String Iterator { constructor: Iterator() }

What we’re doing here manually — creating an iterator from an iterable using %Symbol.iterator% — is precisely how iterable objects work internally, and why they have to implement %Symbol.iterator% in order to be iterables. Any time you loop through an array, you’re actually looping through an iterator created from that Array. All built-in iterators are iterable. All built-in iterables can be used to create iterators.
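
One quick way to see that in action (again, my own illustration rather than the lesson’s code) is to hand for...of an iterator instead of the array it came from:

// for...of expects an iterable, and an array iterator qualifies,
// because built-in iterators implement [ Symbol.iterator ] themselves.
const theIterator = [ "a", "b" ][ Symbol.iterator ]();

for ( const element of theIterator ) {
  console.log( element );
}
/* Result:
"a"
"b"
*/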

Alternately — preferably, even, since it doesn’t require you to graze up against %Symbol.iterator% directly — you can use the built-in Iterator.from() method to create an iterator object from any iterable:

const theIterator = Iterator.from([ true, false ]);

theIterator;
// Result: Array Iterator { constructor: Iterator() }

You remember how I mentioned that an iterator has to provide a next() method (that returns a very specific Object)? Calling that next() method steps through the elements that the iterator contains one at a time, with each call returning an instance of that Object:

const theIterator = Iterator.from([ 1, 2, 3 ]);

theIterator.next();
// Result: Object { value: 1, done: false }

theIterator.next();
// Result: Object { value: 2, done: false }

theIterator.next();
// Result: Object { value: 3, done: false }

theIterator.next();
// Result: Object { value: undefined, done: true }

You can think of this as a more controlled form of traversal than the traditional “wind it up and watch it go” for loops you’re probably used to — a method of accessing elements one step at a time, as-needed. Granted, you don’t have to step through an iterator in this way, since they have their very own Iterator.forEach method, which works exactly like you would expect — to a point:

const theIterator = Iterator.from([ true, false ]);

theIterator.forEach( element => console.log( element ) );
/* Result:
true
false
*/

But there’s another big difference between iterables and iterators that we haven’t touched on yet, and for my money, it actually goes a long way toward making linguistic sense of the two. You might need to humor me for a little bit here, though.

See, an iterable object is an object that is iterable. No, listen, stay with me: you can iterate over an Array, and when you’re done doing so, you can still iterate over that Array. It is, by definition, an object that can be iterated over; it is the essential nature of an iterable to be iterable:

const theIterable = [ 1, 2 ];

theIterable.forEach( el => {
  console.log( el );
});
/* Result:
1
2
*/

theIterable.forEach( el => {
  console.log( el );
});
/* Result:
1
2
*/

In a way, an iterator object represents the singular act of iteration. Internal to an iterable, it is the mechanism by which the iterable is iterated over, each time that iteration is performed. As a stand-alone iterator object — whether you step through it using the next method or loop over its elements using forEach — once iterated over, that iterator is past tense; it is iterated. Because they maintain an internal state, the essential nature of an iterator is to be iterated over, singular:

const theIterator = Iterator.from([ 1, 2 ]);

theIterator.next();
// Result: Object { value: 1, done: false }

theIterator.next();
// Result: Object { value: 2, done: false }

theIterator.next();
// Result: Object { value: undefined, done: true }

theIterator.forEach( el => console.log( el ) );
// Result: undefined

That makes for neat work when you're using the Iterator constructor’s built-in methods to, say, filter or extract part of an Iterator object:

const theIterator = Iterator.from([ "First", "Second", "Third" ]);

// Take the first two values from theIterator:
theIterator.take( 2 ).forEach( el => {
  console.log( el );
});
/* Result:
"First"
"Second"
*/

// theIterator now only contains anything left over after the above operation is complete:
theIterator.next();
// Result: Object { value: "Third", done: false }

Once you reach the end of an iterator, the act of iterating over it is complete. Iterated. Past-tense.

And so too is your time in this lesson, you might be relieved to hear. I know this was kind of a rough one, but the good news is: this course is iterable, not an iterator. This step in your iteration through it — this lesson — may be over, but the essential nature of this course is that you can iterate through it again. Don’t worry about committing all of this to memory right now — you can come back and revisit this lesson anytime.

Conclusion

I stand by what I wrote there, unsurprising as that probably is: this lesson is a tricky one, but listen, you got this. JavaScript for Everyone is designed to take you inside JavaScript’s head. Once you’ve started seeing how the gears mesh — seen the fingerprints left behind by the people who built the language, and the good, bad, and sometimes baffling decisions that went into that — no itera-, whether -ble or -tor will be able to stand in your way.

My goal is to teach you the deep magic — the how and the why of JavaScript, using the syntaxes you’re most likely to encounter in your day-to-day work, at your pace and on your terms. If you’re new to the language, you’ll walk away from this course with a foundational understanding of JavaScript worth hundreds of hours of trial-and-error. If you’re a junior developer, you’ll finish this course with a depth of knowledge to rival any senior.

I hope to see you there.

]]>
hello@smashingmagazine.com (Mat Marquis)
<![CDATA[Ambient Animations In Web Design: Practical Applications (Part 2)]]> https://smashingmagazine.com/2025/10/ambient-animations-web-design-practical-applications-part2/ https://smashingmagazine.com/2025/10/ambient-animations-web-design-practical-applications-part2/ Wed, 22 Oct 2025 13:00:00 GMT First, a recap:

Ambient animations are the kind of passive movements you might not notice at first. However, they bring a design to life in subtle ways. Elements might subtly transition between colours, move slowly, or gradually shift position. Elements can appear and disappear, change size, or they could rotate slowly, adding depth to a brand’s personality.

In Part 1, I illustrated the concept of ambient animations by recreating the cover of a Quick Draw McGraw comic book as a CSS/SVG animation. But I know not everyone needs to animate cartoon characters, so in Part 2, I’ll share how ambient animation works in three very different projects: Reuven Herman, Mike Worth, and EPD. Each demonstrates how motion can enhance brand identity, personality, and storytelling without dominating a page.

Reuven Herman

Los Angeles-based composer Reuven Herman didn’t just want a website to showcase his work. He wanted it to convey his personality and the experience clients have when working with him. Working with musicians is always creatively stimulating: they’re critical, engaged, and full of ideas.

Reuven’s classical and jazz background reminded me of the work of album cover designer Alex Steinweiss.

I was inspired by the depth and texture that Alex brought to his designs for over 2,500 unique covers, and I wanted to incorporate his techniques into my illustrations for Reuven.

To bring Reuven’s illustrations to life, I followed a few core ambient animation principles:

  • Keep animations slow and smooth.
  • Loop seamlessly and avoid abrupt changes.
  • Use layering to build complexity.
  • Avoid distractions.
  • Consider accessibility and performance.

The illustration’s stave lines have two states: a wavy state and a straight state.

The first step in my animation is to morph the stave lines between states. They’re made up of six paths with multi-coloured strokes. I started with the wavy lines:

<!-- Wavy state -->
<g fill="none" stroke-width="2" stroke-linecap="round">
<path id="p1" stroke="#D2AB99" d="[…]"/>
<path id="p2" stroke="#BDBEA9" d="[…]"/>
<path id="p3" stroke="#E0C852" d="[…]"/>
<path id="p4" stroke="#8DB38B" d="[…]"/>
<path id="p5" stroke="#43616F" d="[…]"/>
<path id="p6" stroke="#A13D63" d="[…]"/>
</g>

Although CSS now enables animation between path points, the number of points in each state needs to match. GSAP doesn’t have that limitation and can animate between states that have different numbers of points, making it ideal for this type of animation. I defined the new set of straight paths:

// Straight state
const Waves = {
  p1: "[…]",
  p2: "[…]",
  p3: "[…]",
  p4: "[…]",
  p5: "[…]",
  p6: "[…]"
};

Then, I created a GSAP timeline that repeats backwards and forwards over six seconds:

const waveTimeline = gsap.timeline({
  repeat: -1,
  yoyo: true,
  defaults: { duration: 6, ease: "sine.inOut" }
});

Object.entries(Waves).forEach(([id, d]) => {
  waveTimeline.to(`#${id}`, { morphSVG: d }, 0);
});
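
One practical note: path morphing is handled by GSAP’s MorphSVG plugin rather than the core library, so, assuming a typical setup, the plugin needs to be loaded and registered before the timeline runs:

// Assumes the MorphSVG plugin file is loaded alongside the GSAP core.
// Registering it once makes the morphSVG property available to tweens.
gsap.registerPlugin(MorphSVGPlugin);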

Another ambient animation principle is to use layering to build complexity. Think of it like building a sound mix. You want variation in rhythm, tone, and timing. In my animation, three rows of musical notes move at different speeds:

<path id="notes-row-1"/>
<path id="notes-row-2"/>
<path id="notes-row-3"/>

The duration of each row’s animation is also defined using GSAP, from 100 to 300 seconds, to give the overall animation a parallax-style effect:

const noteRows = [
  { id: "#notes-row-1", duration: 300, y: 100 }, // slowest
  { id: "#notes-row-2", duration: 200, y: 250 }, // medium
  { id: "#notes-row-3", duration: 100, y: 400 }  // fastest
];

[…]

The next layer contains a shadow cast by the piano keys, which slowly rotates around its centre:

gsap.to("shadow", {
  y: -10,
  rotation: -2,
  transformOrigin: "50% 50%",
  duration: 3,
  ease: "sine.inOut",
  yoyo: true,
  repeat: -1
});

And finally, the piano keys themselves, which rotate at the same time but in the opposite direction to the shadow:

gsap.to("#g3-keys", {
  y: 10,
  rotation: 2,
  transformOrigin: "50% 50%",
  duration: 3,
  ease: "sine.inOut",
  yoyo: true,
  repeat: -1
});

The complete animation can be viewed in my lab. By layering motion thoughtfully, the site feels alive without ever dominating the content, which is a perfect match for Reuven’s energy.

Mike Worth

As I mentioned earlier, not everyone needs to animate cartoon characters, but I do occasionally. Mike Worth is an Emmy award-winning film, video game, and TV composer who asked me to design his website. For the project, I created and illustrated the character of orangutan adventurer Orango Jones.

Orango proved to be the perfect subject for ambient animations and features on every page of Mike’s website. He takes the reader on an adventure, and along the way, they get to experience Mike’s music.

For Mike’s “About” page, I wanted to combine ambient animations with interactions. Orango is in a cave where he has found a stone tablet with faint markings that serve as a navigation aid to elsewhere on Mike’s website. The illustration contains a hidden feature, an easter egg, as when someone presses Orango’s magnifying glass, moving shafts of light stream into the cave and onto the tablet.

I also added an anchor around a hidden circle, which I positioned over Orango’s magnifying glass, as a large tap target to toggle the light shafts on and off by changing the data-lights value on the SVG:

<a href="javascript:void(0);" id="light-switch" title="Lights on/off">
  <circle cx="700" cy="1000" r="100" opacity="0" />
</a>
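
The script that flips that value isn’t shown here, but a minimal sketch (assuming the SVG root carries the data-lights attribute and the anchor keeps the id above) could look something like this:

// Minimal sketch of the toggle, not the original script:
// it flips data-lights on the SVG between "lights-on" and "lights-off".
const lightSwitch = document.querySelector("#light-switch");
const svg = document.querySelector("svg[data-lights]");

lightSwitch.addEventListener("click", () => {
  const current = svg.getAttribute("data-lights");
  svg.setAttribute("data-lights", current === "lights-on" ? "lights-off" : "lights-on");
});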

Then, I added two descendant selectors to my CSS, which adjust the opacity of the light shafts depending on the data-lights value:

[data-lights="lights-off"] .light-shaft {
  opacity: .05;
  transition: opacity .25s linear;
}

[data-lights="lights-on"] .light-shaft {
  opacity: .25;
  transition: opacity .25s linear;
}

A slow and subtle rotation adds natural movement to the light shafts:

@keyframes shaft-rotate {
  0% { rotate: 2deg; }
  50% { rotate: -2deg; }
  100% { rotate: 2deg; }
}

Which is only visible when the light toggle is active:

[data-lights="lights-on"] .light-shaft {
  animation: shaft-rotate 20s infinite;
  transform-origin: 100% 0;
}

When developing any ambient animation, considering performance is crucial, as even though CSS animations are lightweight, features like blur filters and drop shadows can still strain lower-powered devices. It’s also critical to consider accessibility, so respect someone’s prefers-reduced-motion preferences:

@media screen and (prefers-reduced-motion: reduce) {
  html {
    scroll-behavior: auto;
  }

  *,
  *::before,
  *::after {
    animation-duration: 1ms !important;
    animation-iteration-count: 1 !important;
    transition-duration: 1ms !important;
  }
}

When an animation feature is purely decorative, consider adding aria-hidden="true" to keep it from cluttering up the accessibility tree:

<a href="javascript:void(0);" id="light-switch" aria-hidden="true">
  […]
</a>

With Mike’s Orango Jones, ambient animation shifts from subtle atmosphere to playful storytelling. Light shafts and soft interactions weave narrative into the design without stealing focus, proving that animation can support both brand identity and user experience. See this animation in my lab.

EPD

Moving away from composers, EPD is a property investment company. They commissioned me to design creative concepts for a new website. A quick search for property investment companies will usually leave you feeling underwhelmed by their interchangeable website designs. They include full-width banners with faded stock photos of generic city skylines or ethnically diverse people shaking hands.

For EPD, I wanted to develop a distinctive visual style that the company could own, so I proposed graphic, stylised skylines that reflect both EPD’s brand and its global portfolio. I made them using various-sized circles that recall the company’s logo mark.

The point of an ambient animation is that it doesn’t dominate. It’s a background element and not a call to action. If someone’s eyes are drawn to it, it’s probably too much, so I dial back the animation until it feels like something you’d only catch if you’re really looking. I created three skyline designs, including Dubai, London, and Manchester.

In each of these ambient animations, the wheels rotate and the large circles change colour at random intervals.

Next, I exported a layer containing the circle elements whose colour I want to change.

<g id="banner-dots">
  <circle class="data-theme-fill" […]/>
  <circle class="data-theme-fill" […]/>
  <circle class="data-theme-fill" […]/>
  […]
</g>

Once again, I used GSAP to select groups of circles that flicker like lights across the skyline:

function animateRandomDots() {
  const circles = gsap.utils.toArray("#banner-dots circle")
  const numberToAnimate = gsap.utils.random(3, 6, 1)
  const selected = gsap.utils.shuffle(circles).slice(0, numberToAnimate)
}

Then the fill colour of those circles flashes to the teal accent and, after a two-second delay, settles back to the same off-white colour as the rest of my illustration:

gsap.to(selected, {
  fill: "color(display-p3 .439 .761 .733)",
  duration: 0.3,
  stagger: 0.05,
  onComplete: () => {
    gsap.to(selected, {
      fill: "color(display-p3 .949 .949 .949)",
      duration: 0.5,
      delay: 2
    })
  }
})

  gsap.delayedCall(gsap.utils.random(1, 3), animateRandomDots)
}

animateRandomDots()

The result is a skyline that gently flickers, as if the city itself is alive. Finally, I rotated the wheel. Here, there was no need to use GSAP as this is possible using CSS rotate alone:

<g id="banner-wheel">
  <path stroke="#F2F2F2" stroke-linecap="round" stroke-width="4" d="[…]"/>
  <path fill="#D8F76E" d="[…]"/>
</g>


#banner-wheel {
  transform-box: fill-box;
  transform-origin: 50% 50%;
  animation: rotateWheel 30s linear infinite;
}

@keyframes rotateWheel {
  to { transform: rotate(360deg); }
}

CSS animations are lightweight and ideal for simple, repetitive effects, like fades and rotations. They’re easy to implement and don’t require libraries. GSAP, on the other hand, offers far more control as it can handle path morphing and sequence timelines. The choice of which to use depends on whether I need the precision of GSAP or the simplicity of CSS.

By keeping the wheel turning and the circles glowing, the skyline animations stay in the background yet give the design a distinctive feel. They avoid stock photo clichés while reinforcing EPD’s brand identity and are proof that, even in a conservative sector like property investment, ambient animation can add atmosphere without detracting from the message.

Wrapping up

From Reuven’s musical textures to Mike’s narrative-driven Orango Jones and EPD’s glowing skylines, these projects show how ambient animation adapts to context. Sometimes it’s purely atmospheric, like drifting notes or rotating wheels; other times, it blends seamlessly with interaction, rewarding curiosity without getting in the way.

Whether it echoes a composer’s improvisation, serves as a playful narrative device, or adds subtle distinction to a conservative industry, the same principles hold true:

Keep motion slow, seamless, and purposeful so that it enhances, rather than distracts from, the design.

]]>
hello@smashingmagazine.com (Andy Clarke)
<![CDATA[AI In UX: Achieve More With Less]]> https://smashingmagazine.com/2025/10/ai-ux-achieve-more-with-less/ https://smashingmagazine.com/2025/10/ai-ux-achieve-more-with-less/ Fri, 17 Oct 2025 08:00:00 GMT I have made a lot of mistakes with AI over the past couple of years. I have wasted hours trying to get it to do things it simply cannot do. I have fed it terrible prompts and received terrible output. And I have definitely spent more time fighting with it than I care to admit.

But I have also discovered that when you stop treating AI like magic and start treating it like what it actually is (a very enthusiastic intern with zero life experience), things start to make more sense.

Let me share what I have learned from working with AI on real client projects across user research, design, development, and content creation.

How To Work With AI

Here is the mental model that has been most helpful for me. Treat AI like an intern with zero experience.

An intern fresh out of university has lots of enthusiasm and qualifications, but no real-world experience. You would not trust them to do anything unsupervised. You would explain tasks in detail. You would expect to review their work multiple times. You would give feedback and ask them to try again.

This is exactly how you should work with AI.

The Basics Of Prompting

I am not going to pretend to be an expert. I have just spent way too much time playing with this stuff because I like anything shiny and new. But here is what works for me.

  • Define the role.
    Start with something like “Act as a user researcher” or “Act as a copywriter.” This gives the AI context for how to respond.
  • Break it into steps.
    Do not just say “Analyze these interview transcripts.” Instead, say “I want you to complete the following steps. One, identify recurring themes. Two, look for questions users are trying to answer. Three, note any objections that come up. Four, output a summary of each.”
  • Define success.
    Tell it what good looks like. “I am looking for a report that gives a clear indication of recurring themes and questions in a format I can send to stakeholders. Do not use research terminology because they will not understand it.”
  • Make it think.
    Tell it to think deeply about its approach before responding. Get it to create a way to test for success (known as a rubric) and iterate on its work until it passes that test.

Here is a real prompt I use for online research:

Act as a user researcher. I would like you to carry out deep research online into [brand name]. In particular, I would like you to focus on what people are saying about the brand, what the overall sentiment is, what questions people have, and what objections people mention. The goal is to create a detailed report that helps me better understand the brand perception.

Think deeply about your approach before carrying out the research. Create a rubric for the report to ensure it is as useful as possible. Keep iterating until the report scores extremely high on the rubric. Only then, output the report.

That second paragraph (the bit about thinking deeply and creating a rubric), I basically copy and paste into everything now. It is a universal way to get better output.

Learn When To Trust It

You should never fully trust AI. Just like you would never fully trust an intern you have only just met.

To begin with, double-check absolutely everything. Over time, you will get a sense of when it is losing its way. You will spot the patterns. You will know when to start a fresh conversation because the current one has gone off the rails.

But even after months of working with it daily, I still check its work. I still challenge it. I still make it cite sources and explain its reasoning.

The key is that even with all that checking, it is still faster than doing it yourself. Much faster.

Using AI For User Research

This is where AI has genuinely transformed my work. I use it constantly for five main things.

Online Research

I love AI for this. I can ask it to go and research a brand online. What people are saying about it, what questions they have, what they like, and what frustrates them. Then do the same for competitors and compare.

This would have taken me days of trawling through social media and review sites. Now it takes minutes.

I recently did this for an e-commerce client. I wanted to understand what annoyed people about the brand and what they loved. I got detailed insights that shaped the entire conversion optimization strategy. All from one prompt.

Analyzing Interviews And Surveys

I used to avoid open-ended questions in surveys. They were such a pain to review. Now I use them all the time because AI can analyze hundreds of text responses in seconds.

For interviews, I upload the transcripts and ask it to identify recurring themes, questions, and requests. I always get it to quote directly from the transcripts so I can verify it is not making things up.

The quality is good. Really good. As long as you give it clear instructions about what you want.

Making Sense Of Data

I am terrible with spreadsheets. Put me in front of a person and I can understand them. Put me in front of data, and my eyes glaze over.

AI has changed that. I upload spreadsheets to ChatGPT and just ask questions. “What patterns do you see?” “Can you reformat this?” “Show me this data in a different way.”

Microsoft Clarity now has Copilot built in, so you can ask it questions about your analytics data. Triple Whale does the same for e-commerce sites. These tools are game changers if you struggle with data like I do.

Research Projects

This is probably my favorite technique. In ChatGPT and Claude, you can create projects. In other tools, they are called spaces. Think of them as self-contained folders where everything you put in is available to every conversation in that project.

When I start working with a new client, I create a project and throw everything in. Old user research. Personas. Survey results. Interview transcripts. Documentation. Background information. Site copy. Anything I can find.

Then I give it custom instructions. Here is one I use for my own business:

Act as a business consultant and marketing strategy expert with good copywriting skills. Your role is to help me define the future of my UX consultant business and better articulate it, especially via my website. When I ask for your help, ask questions to improve your answers and challenge my assumptions where appropriate.

I have even uploaded a virtual board of advisors (people I wish I had on my board) and asked AI to research how they think and respond as they would.

Now I have this project that knows everything about my business. I can ask it questions. Get it to review my work. Challenge my thinking. It is like having a co-worker who never gets tired and has a perfect memory.

I do this for every client project now. It is invaluable.

Creating Personas

AI has reinvigorated my interest in personas. I had lost heart in them a bit. They took too long to create, and clients always said they already had marketing personas and did not want to pay to do them again.

Now I can create what I call functional personas. Personas that are actually useful to people who work in UX. Not marketing fluff about what brands people like, but real information about what questions they have and what tasks they are trying to complete.

I upload all my research to a project and say:

Act as a user researcher. Create a persona for [audience type]. For this persona, research the following information: questions they have, tasks they want to complete, goals, states of mind, influences, and success metrics. It is vital that all six criteria are addressed in depth and with equal vigor.

The output is really good. Detailed. Useful. Based on actual data rather than pulled out of thin air.

Here is my challenge to anyone who thinks AI-generated personas are somehow fake. What makes you think your personas are so much better? Every persona is a story of a hypothetical user. You make judgment calls when you create personas, too. At least AI can process far more information than you can and is brilliant at pattern recognition.

My only concern is that relying too heavily on AI could disconnect us from real users. We still need to talk to people. We still need that empathy. But as a tool to synthesize research and create reference points? It is excellent.

Using AI For Design And Development

Let me start with a warning. AI is not production-ready. Not yet. Not for the kind of client work I do, anyway.

Three reasons why:

  1. It is slow if you want something specific or complicated.
  2. It can be frustrating because it gets close but not quite there.
  3. And the quality is often subpar. Unpolished code, questionable design choices, that kind of thing.

But that does not mean it is not useful. It absolutely is. Just not for final production work.

Functional Prototypes

If you are not too concerned with matching a specific design, AI can quickly prototype functionality in ways that are hard to match in Figma. Because Figma is terrible at prototyping functionality. You cannot even create an active form field in a Figma prototype. Filling in forms is the biggest thing people do online other than clicking links, and you cannot test it.

Tools like Relume and Bolt can create quick functional mockups that show roughly how things work. They are great for non-designers who just need to throw together a prototype quickly. For designers, they can be useful for showing developers how you want something to work.

But you can spend ages getting them to put a hamburger menu on the right side of the screen. So use them for quick iteration, not pixel-perfect design.

Small Coding Tasks

I use AI constantly for small, low-risk coding work. I am not a developer anymore. I used to be, back when dinosaurs roamed the earth, but not for years.

AI lets me create the little tools I need. A calculator that works out the ROI of my UX work. An app for running top task analysis. Bits of JavaScript for hiding elements on a page. WordPress plugins for updating dates automatically.

Just before running my workshop on this topic, I needed a tool to create calendar invites for multiple events. All the online services wanted £16 a month. I asked ChatGPT to build me one. One prompt. It worked. It looked rubbish, but I did not care. It did what I needed.

If you are a developer, you should absolutely be using tools like Cursor by now. They are invaluable for pair programming with AI. But if you are not a developer, just stick with Claude or Bolt for quick throwaway tools.

Reviewing Existing Services

There are some great tools for getting quick feedback on existing websites when budget and time are tight.

If you need to conduct a UX audit, Wevo Pulse is an excellent starting point. It automatically reviews a website based on personas and provides visual attention heatmaps, friction scores, and specific improvement recommendations. It generates insights in minutes rather than days.

Now, let me be clear. This does not replace having an experienced person conduct a proper UX audit. You still need that human expertise to understand context, make judgment calls, and spot issues that AI might miss. But as a starting point to identify obvious problems quickly? It is a great tool. Particularly when budget or time constraints mean a full audit is not on the table.

For e-commerce sites, Baymard has UX Ray, which analyzes flaws based on their massive database of user research.

Checking Your Designs

Attention Insight has taken thousands of hours of eye-tracking studies and trained AI on it to predict where people will look on a page. It has about 90 to 96 percent accuracy.

You upload a screenshot of your design, and it shows you where attention is going. Then you can play around with your imagery and layout to guide attention to the right place.

It is great for dealing with stakeholders who say, “People won’t see that.” You can prove they will. Or equally, when stakeholders try to crowd the interface with too much stuff, you can show them attention shooting everywhere.

I use this constantly. Here is a real example from a pet insurance company. They had photos of a dog, cat, and rabbit for different types of advice. The dog was far from the camera. The cat was looking directly at the camera, pulling all the attention. The rabbit was half off-frame. Most attention went to the cat’s face.

I redesigned it using AI-generated images, where I could control exactly where each animal looked. Dog looking at the camera. Cat looking right. Rabbit looking left. All the attention drawn into the center. Made a massive difference.

Creating The Perfect Image

I use AI all the time for creating images that do a specific job. My preferred tools are Midjourney and Gemini.

I like Midjourney because, visually, it creates stunning imagery. You can dial in the tone and style you want. The downside is that it is not great at following specific instructions.

So I produce an image in Midjourney that is close, then upload it to Gemini. Gemini is not as good at visual style, but it is much better at following instructions. “Make the guy reach here” or “Add glasses to this person.” I can get pretty much exactly what I want.

The other thing I love about Midjourney is that you can upload a photograph and say, “Replicate this style.” This keeps consistency across a website. I have a master image I use as a reference for all my site imagery to keep the style consistent.

Using AI For Content

Most clients give you terrible copy. Our job is to improve the user experience or conversion rate, and anything we do gets utterly undermined by bad copy.

I have completely stopped asking clients for copy since AI came along. Here is my process.

Build Everything Around Questions

Once I have my information architecture, I get AI to generate a massive list of questions users will ask. Then I run a top task analysis where people vote on which questions matter most.

I assign those questions to pages on the site. Every page gets a list of the questions it needs to answer.

Get Bullet Point Answers From Stakeholders

I spin up the content management system with a really basic theme. Just HTML with very basic formatting. I go through every page and assign the questions.

Then I go to my clients and say: “I do not want you to write copy. Just go through every page and bullet point answers to the questions. If the answer exists on the old site, copy and paste some text or link to it. But just bullet points.”

That is their job done. Pretty much.

Let AI Draft The Copy

Now I take control. I feed ChatGPT the questions and bullet points and say:

Act as an online copywriter. Write copy for a webpage that answers the question [question]. Use the following bullet points to answer that question: [bullet points]. Use the following guidelines: Aim for a ninth-grade reading level or below. Sentences should be short. Use plain language. Avoid jargon. Refer to the reader as you. Refer to the writer as us. Ensure the tone is friendly, approachable, and reassuring. The goal is to [goal]. Think deeply about your approach. Create a rubric and iterate until the copy is excellent. Only then, output it.

I often upload a full style guide as well, with details about how I want it to be written.

The output is genuinely good. As a first draft, it is excellent. Far better than what most stakeholders would give me.

Stakeholders Review And Provide Feedback

That goes into the website, and stakeholders can comment on it. Once I get their feedback, I take the original copy and all their comments back into ChatGPT and say, “Rewrite using these comments.”

Job done.

The great thing about this approach is that even if stakeholders make loads of changes, they are making changes to a good foundation. The overall quality still comes out better than if they started with a blank sheet.

It also makes things go smoother because you are not criticizing their content, where they get defensive. They are criticizing AI content.

Tools That Help

If your stakeholders are still giving you content, Hemingway Editor is brilliant. Copy and paste text in, and it tells you how readable and scannable it is. It highlights long sentences and jargon. You can use this to prove to clients that their content is not good web copy.

If you pay for the pro version, you get AI tools that will rewrite the copy to be more readable. It is excellent.

What This Means for You

Let me be clear about something. None of this is perfect. AI makes mistakes. It hallucinates. It produces bland output if you do not push it hard enough. It requires constant checking and challenging.

But here is what I know from two years of using this stuff daily. It has made me faster. It has made me better. It has freed me up to do more strategic thinking and less grunt work.

A report that would have taken me five days now takes three hours. That is not an exaggeration. That is real.

Overall, AI probably gives me a 25 to 33 percent increase in what I can do. That is significant.

Your value as a UX professional lies in your ideas, your questions, and your thinking. Not your ability to use Figma. Not your ability to manually review transcripts. Not your ability to write reports from scratch.

AI cannot innovate. It cannot make creative leaps. It cannot know whether its output is good. It cannot understand what it is like to be human.

That is where you come in. That is where you will always come in.

Start small. Do not try to learn everything at once. Just ask yourself throughout your day: Could I do this with AI? Try it. See what happens. Double-check everything. Learn what works and what does not.

Treat it like an enthusiastic intern with zero life experience. Give it clear instructions. Check its work. Make it try again. Challenge it. Push it further.

And remember, it is not going to take your job. It is going to change it. For the better, I think. As long as we learn to work with it rather than against it.

]]>
hello@smashingmagazine.com (Paul Boag)
<![CDATA[How To Make Your UX Research Hard To Ignore]]> https://smashingmagazine.com/2025/10/how-make-ux-research-hard-to-ignore/ https://smashingmagazine.com/2025/10/how-make-ux-research-hard-to-ignore/ Thu, 16 Oct 2025 13:00:00 GMT In the early days of my career, I believed that nothing wins an argument more effectively than strong and unbiased research. Surely facts speak for themselves, I thought.

If I just get enough data, just enough evidence, just enough clarity on where users struggle — well, once I have it all and I present it all, it alone will surely change people’s minds, hearts, and beliefs. And, most importantly, it will help everyone see, understand, and perhaps even appreciate and commit to what needs to be done.

Well, it’s not quite like that. In fact, the stronger and louder the data, the more likely it is to be questioned. And there is a good reason for that, which is often left between the lines.

Research Amplifies Internal Flaws

Throughout the years, I’ve often seen data speaking volumes about where the business is failing, where customers are struggling, where the team is faltering — and where an urgent turnaround is necessary. It was right there, in plain sight: clear, loud, and obvious.

But because it’s so clear, it reflects back, often amplifying all the sharp edges and all the cut corners in all the wrong places. It reflects internal flaws, wrong assumptions, and failing projects — some of them signed off years ago, with secured budgets, big promotions, and approved headcounts. Questioning them means questioning authority, and often it’s a tough path to take.

As it turns out, strong data is very, very good at raising uncomfortable truths that most companies don’t really want to acknowledge. That’s why, at times, research is deemed “unnecessary,” or why we don’t get access to users, or why loud voices always win big arguments.

So even if data is presented with a lot of eagerness, gravity, and passion in that big meeting, it will get questioned, doubted, and explained away. Not because of its flaws, but because of hope, reluctance to change, and layers of internal politics.

This shows up most vividly in situations when someone raises concerns about the validity and accuracy of research. Frankly, it’s not that somebody is wrong and somebody is right. Both parties just happen to be right in a different way.

What To Do When Data Disagrees

We’ve all heard that data always tells a story. However, it’s never just a single story. People are complex, and pointing out a specific truth about them just by looking at numbers is rarely enough.

When data disagrees, it doesn’t mean that either side is wrong. It’s just that different perspectives reveal different parts of a whole story that isn’t complete yet.

In digital products, most stories have two sides:

  • Quantitative data ← What/When: behavior patterns at scale.
  • Qualitative data ← Why/How: user needs and motivations.
  • ↳ Quant usually comes from analytics, surveys, and experiments.
  • ↳ Qual comes from tests, observations, and open-ended surveys.

Risk-averse teams overestimate the weight of big numbers in quantitative research. Users exaggerate the frequency and severity of issues that are critical for them. As Archana Shah noted, designers get carried away by users’ confident responses and often overestimate what people say and do.

And so, eventually, data coming from different teams paints a different picture. And when it happens, we need to reconcile and triangulate. With the former, we track what’s missing, omitted, or overlooked. With the latter, we cross-validate data — e.g., finding pairings of qual/quant streams of data, then clustering them together to see what’s there and what’s missing, and exploring from there.

And even with all of it in place and data conflicts resolved, we still need to do one more thing to make a strong argument: we need to tell a damn good story.

Facts Don’t Win Arguments, Stories Do

Research isn’t everything. Facts don’t win arguments; powerful stories do. But a story that starts with a spreadsheet isn’t always inspiring or effective. Perhaps it brings a problem into the spotlight, but it doesn’t lead to a resolution.

The very first thing I try to do in that big boardroom meeting is to emphasize what unites us — shared goals, principles, and commitments that are relevant to the topic at hand. Then, I show how new data confirms or confronts our commitments, with specific problems we believe we need to address.

When a question about the quality of data comes in, I need to show that it has been reconciled and triangulated already and discussed with other teams as well.

A good story has a poignant ending. People need to see an alternative future to trust and accept the data — and a clear and safe path forward to commit to it. So I always try to present options and solutions that we believe will drive change and explain our decision-making behind that.

They also need to believe that this distant future is within reach, and that they can pull it off, albeit under a tough timeline or with limited resources.

And: a good story also presents a viable, compelling, shared goal that people can rally around and commit to. Ideally, it’s something that has a direct benefit for them and their teams.

These are the ingredients of the story that I always try to keep in my mind when working on that big presentation. And in fact, data is a starting point, but it does need a story wrapped around it to be effective.

Wrapping Up

There is nothing more disappointing than finding a real problem that real people struggle with and facing the harsh reality of research not being trusted or valued.

We’ve all been there before. The best thing you can do is to be prepared: have strong data to back you up, include both quantitative and qualitative research — preferably with video clips from real customers — but also paint a viable future which seems within reach.

And sometimes nothing changes until something breaks. And at times, there isn’t much you can do about it unless you are prepared when it happens.

“Data doesn’t change minds, and facts don’t settle fights. Having answers isn’t the same as learning, and it for sure isn’t the same as making evidence-based decisions.”

— Erika Hall

Meet “How To Measure UX And Design Impact”

You can find more details on UX Research in Measure UX & Design Impact (8h), a practical guide for designers and UX leads to measure and show your UX impact on business. Use the code 🎟 IMPACT to save 20% off today. Jump to the details.

Video + UX Training

$ 495.00 $ 799.00
Get Video + UX Training

25 video lessons (8h) + Live UX Training.
100-day money-back guarantee.

Video only

$ 250.00 $ 395.00
Get the video course

25 video lessons (8h). Updated yearly.
Also available as a UX Bundle with 2 video courses.

Useful Resources

Useful Books

  • Just Enough Research, by Erika Hall
  • Designing Surveys That Work, by Caroline Jarrett
  • Designing Quality Survey Questions, by Sheila B. Robinson
]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[The Grayscale Problem]]> https://smashingmagazine.com/2025/10/the-grayscale-problem/ https://smashingmagazine.com/2025/10/the-grayscale-problem/ Mon, 13 Oct 2025 10:00:00 GMT Last year, a study found that cars are steadily getting less colourful. In the US, around 80% of cars are now black, white, gray, or silver, up from 60% in 2004. This trend has been attributed to cost savings and consumer preferences. Whatever the reasons, the result is hard to deny: a big part of daily life isn’t as colourful as it used to be.

The colourfulness of mass consumer products is hardly the bellwether for how vibrant life is as a whole, but the study captures a trend a lot of us recognise — offline and on. From colour to design to public discourse, a lot of life is getting less varied, more grayscale.

The web is caught in the same current. There is plenty right with it — it retains plenty of its founding principles — but its state is not healthy. From AI slop to shoddy service providers to enshittification, the digital world faces its own grayscale problem.

This bears talking about. One of life’s great fallacies is that things get better over time on their own. They can, but it’s certainly not a given. I don’t think the moral arc of the universe bends towards justice on its own; I think it bends wherever it is dragged, kicking and screaming, by those with the will and the means to do so.

Much of the modern web, and the forces of optimisation and standardisation that drive it, bear an uncanny resemblance to the trend of car colours. Processes like market research and A/B testing — the process by which two options are compared to see which ‘performs’ better on clickthrough, engagement, etc. — have their value, but they don’t lend themselves to particularly stimulating design choices.

The spirit of free expression that made the formative years of the internet so exciting — think GeoCities, personal blogging, and so on — is on the slide.

The ongoing transition to a more decentralised, privacy-aware Web3 holds some promise. Two-thirds of the world’s population now has online access — though that still leaves plenty of work to do — with a wealth of platforms allowing billions of people to connect. The dream of a digital world that is open, connected, and flat endures, but is tainted.

Monopolies

One of the main sources of concern for me is that although more people are online than ever, they are concentrating on fewer and fewer sites. A study published in 2021 found that activity is concentrated in a handful of websites. Think Google, Amazon, Facebook, Instagram, and, more recently, ChatGPT:

“So, while there is still growth in the functions, features, and applications offered on the web, the number of entities providing these functions is shrinking. [...] The authority, influence, and visibility of the top 1,000 global websites (as measured by network centrality or PageRank) is growing every month, at the expense of all other sites.”

Monopolies by nature reduce variance, both through their domination of the market and (understandably in fairness) internal preferences for consistency. And, let’s be frank, they have a vested interest in crushing any potential upstarts.

Dominant websites often fall victim to what I like to call Internet Explorer Syndrome, where their dominance breeds a certain amount of complacency. Why improve your quality when you’re sitting on 90% market share? No wonder the likes of Google are getting worse.

The most immediate sign of this is obviously how sites are designed and how they look. A lot of the big players look an awful lot like each other. Even personal websites are built atop third-party website builders. Millions of people wind up using the same handful of templates, and that’s if they have their own website at all. On social media, we are little more than a profile picture and a pithy tagline. The rest is boilerplate.

Should there be sleek, minimalist, ‘grayscale’ design systems and websites? Absolutely. But there should be colourful, kooky ones too, and if anything, they’re fading away. Do we really want to spend our online lives in the digital equivalent of Levittowns? Even logos are contriving to be less eye-catching. It feels like a matter of time before every major logo is a circle in a pastel colour.

The arrival of Artificial Intelligence into our everyday lives (and a decent chunk of the digital services we use) has put all of this into overdrive. Amalgamating — and hallucinating from — content that was already trending towards a perfect average, it is grayscale in its purest form.

Mix all the colours together, and what do you get? A muddy gray gloop.

I’m not railing against best practice. A lot of conventions have become the standard for good reason. One could just as easily shake their fist at the sky and wonder why all newspapers look the same, or all books. I hope the difference here is clear, though.

The web is a flexible enough domain that I think it belongs in the realm of architecture. A city where all buildings look alike has a soul-crushing quality about it. The same is true, I think, of the web.

In the Oscar Wilde play Lady Windermere’s Fan, a character quips that a cynic “knows the price of everything and the value of nothing.” In fairness, another quips back that a sentimentalist “sees an absurd value in everything, and doesn’t know the market price of any single thing.”

The sweet spot is somewhere in between. Structure goes a long way, but life needs a bit of variety too.

So, how do we go about bringing that variety? We probably shouldn’t hold our breath on big players to lead the way. They have the most to lose, after all. Why risk being colourful or dynamic if it impacts the bottom line?

We, the citizens of the web, have more power than we realise. This is the web, remember, a place where if you can imagine it, odds are you can make it. And at zero cost. No materials to buy and ship, no shareholders to appease. A place as flexible — and limitless — as the web has no business being boring.

There are plenty of ways, big and small, of keeping this place colourful. Whether our digital footprints are on third-party websites or ones we build ourselves, we needn’t toe the line.

Colour seems an appropriate place to start. When given the choice, try something audacious rather than safe. The worst that can happen is that it doesn’t work. It’s not like the sunk cost of painting a room; if you don’t like the palette, you simply change the hex codes. The same is true of fonts, icons, and other building blocks of the web.

As an example, a couple of friends and I listen to and review albums occasionally as a hobby. On the website, the palette of each review page reflects the album artwork:

I couldn’t tell you if reviews ‘perform’ better or worse than if they had a grayscale palette, because I don’t care. I think it’s a lot nicer to look at. And for those wondering, yes, I have tried to make every page meet AA Web Accessibility standards. Vibrant and accessible aren’t mutually exclusive.
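If you are curious how a per-page palette like that might be wired up, here is a minimal sketch using CSS custom properties. The class names and colour values are purely illustrative, not the ones used on that site:

/* A default palette, overridden per review page. */
:root {
  --page-bg: #faf7f2;
  --page-accent: #c0392b;
  --page-text: #2b2b2b;
}

/* Each review page sets its own palette, e.g. via a class on <body>. */
.review--midnight-album {
  --page-bg: #0b1d3a;
  --page-accent: #f2c14e;
  --page-text: #f5f5f5;
}

body {
  background-color: var(--page-bg);
  color: var(--page-text);
}

a {
  color: var(--page-accent);
}

Swapping a palette then really is just a matter of changing a handful of hex codes in one place.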

Another great way of bringing vibrancy to the web is a degree of randomisation. Bruno Simon of Three.js Journey and awesome portfolio fame weaves random generation into a lot of his projects, and the results are gorgeous. What’s more, they feel familiar, natural, because life is full of wildcards.

This needn’t be in fancy 3D models. You could lightly rotate images to create a more informal, photo album mood, or chuck in the occasional random link in a list of recommended articles, just to shake things up.
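As a rough illustration of the photo-album idea, a few lines of CSS can give images a slightly irregular tilt. The selectors and angles below are one possible sketch of the approach, not a prescription:

/* Alternate small rotations so a grid of photos feels hand-placed. */
.photo-album img {
  transform: rotate(-1.5deg);
  transition: transform 0.2s ease;
}

.photo-album img:nth-child(2n) {
  transform: rotate(1deg);
}

.photo-album img:nth-child(3n) {
  transform: rotate(2deg);
}

/* Straighten a photo when someone hovers or focuses on it. */
.photo-album img:hover,
.photo-album img:focus {
  transform: rotate(0deg);
}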

In a lot of ways, it boils down to an attitude of just trying stuff out. Make your own font, give the site a sepia filter, and add that easter egg you keep thinking about. Just because someone, somewhere has already done it doesn’t mean you can’t do it your own way. And who knows, maybe your way stumbles onto someplace wholly new.

I’m wary of being too prescriptive. I don’t have the keys to a colourful web. No one person does. A vibrant community is the sum total of its people. What keeps things interesting is individuals trying wacky ideas and putting them out there. Expression for expression’s sake. Experimentation for experimentation’s sake. Tinkering for tinkering’s sake.

As users, there’s also plenty of room to be adventurous and try out open source alternatives to the software monopolies that shape so much of today’s Web. Being active in the communities that shape those tools helps to sustain a more open, collaborative digital world.

Although there are lessons to be taken from it, we won’t get a more colourful web by idealising the past or pining to get back to the ‘90s. Nor is there any point in resisting new technologies. AI is here; the choice is whether we use it or it uses us. We must have the courage to carry forward what still holds true, drop what doesn’t, and explore new ideas with a spirit of play.

Here are a few more Smashing articles in that spirit:

I do think there’s a broader discussion to be had about the extent to which A/B tests, bottom lines, and focus groups seem to dictate much of how the modern web looks and feels. With sites being squeezed tighter and tighter by dwindling advertising revenues, and AI answers muscling in on search traffic, the corporate entities behind larger websites can’t justify doing anything other than what is safe and proven, for fear of shrinking their slice of the pie.

Lest we forget, though, most of the web isn’t beholden to those types of pressure. From pet projects to wikis to forums to community news outlets to all manner of other things, there are countless reasons for websites to exist, and they needn’t take design cues from the handful of sites slugging it out at the top.

Connected with this is the dire need for digital literacy (PDF) — ‘the confident and critical use of a full range of digital technologies for information, communication and basic problem-solving in all aspects of life.’ For as long as using third-party platforms is a necessity rather than a choice, the needle’s only going to move so much.

There’s a reason why Minecraft is the world’s best-selling game. People are creative. When given the tools — and the opportunity — that creativity will manifest in weird and wonderful ways. That game is a lot of things, but gray ain’t one of them.

The web has all of that flexibility and more. It is a manifestation of imagination. Imagination trends towards colour, not grayness. It doesn’t always feel like it, but where the internet goes is decided by its citizens. The internet is ours. If we want to, we can make it technicolor.

]]>
hello@smashingmagazine.com (Frederick O’Brien)
<![CDATA[Smashing Animations Part 5: Building Adaptive SVGs With `<symbol>`, `<use>`, And CSS Media Queries]]> https://smashingmagazine.com/2025/10/smashing-animations-part-5-building-adaptive-svgs/ https://smashingmagazine.com/2025/10/smashing-animations-part-5-building-adaptive-svgs/ Mon, 06 Oct 2025 13:00:00 GMT I’ve written quite a lot recently about how I prepare and optimise SVG code to use as static graphics or in animations. I love working with SVG, but there’s always been something about them that bugs me.

To illustrate how I build adaptive SVGs, I’ve selected an episode of The Quick Draw McGraw Show called “Bow Wow Bandit,” first broadcast in 1959.

In it, Quick Draw McGraw enlists his bloodhound Snuffles to rescue his sidekick Baba Looey. Like most Hanna-Barbera title cards of the period, the artwork was made by Lawrence (Art) Goble.

Let’s say I’ve designed an SVG scene based on that Bow Wow Bandit title card, with a 16:9 aspect ratio and a viewBox size of 1920×1080. This SVG scales up and down (the clue’s in the name), so it looks sharp when it’s gigantic and when it’s minute.

But on small screens, the 16:9 aspect ratio (live demo) might not be the best format, and the image loses its impact. Sometimes, a portrait orientation, like 3:4, would suit the screen size better.

But, herein lies the problem, as it’s not easy to reposition internal elements for different screen sizes using just viewBox. That’s because in SVG, internal element positions are locked to the coordinate system from the original viewBox, so you can’t easily change their layout between, say, desktop and mobile. This is a problem because animations and interactivity often rely on element positions, which break when the viewBox changes.

My challenge was to serve a 1080×1440 version of Bow Wow Bandit to smaller screens and a different one to larger ones. I wanted the position and size of internal elements — like Quick Draw McGraw and his dawg Snuffles — to change to best fit these two layouts. To solve this, I experimented with several alternatives.

Note: Why not just use <picture> with external SVGs? The <picture> element is brilliant for responsive images, but it serves every source as an image, whether that’s a raster format like JPEG or WebP or an external SVG file. Because an external SVG loaded this way is treated as a single image, you can’t animate or style its internal elements using CSS.
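For comparison, this is roughly what the <picture> approach would look like with two external SVG files (the file names here are placeholders). It swaps artwork by viewport width, but the SVGs arrive as opaque images, so nothing inside them can be targeted with CSS:

<picture>
  <!-- Large-screen artwork, 16:9 -->
  <source media="(min-width: 64rem)" srcset="bow-wow-bandit-large.svg">
  <!-- Small-screen artwork, 3:4, also the fallback -->
  <img src="bow-wow-bandit-small.svg" alt="Quick Draw McGraw and Snuffles in Bow Wow Bandit" width="1080" height="1440">
</picture>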

Showing And Hiding SVG

The most obvious choice was to include two different SVGs in my markup, one for small screens, the other for larger ones, then show or hide them using CSS and Media Queries:

<svg id="svg-small" viewBox="0 0 1080 1440">
  <!-- ... -->
</svg>

<svg id="svg-large" viewBox="0 0 1920 1080">
  <!--... -->
</svg>


#svg-small { display: block; }
#svg-large { display: none; }

@media (min-width: 64rem) {
  #svg-small { display: none; }
  #svg-large { display: block; }
}

But using this method, both SVG versions are loaded, which, when the graphics are complex, means downloading lots and lots and lots of unnecessary code.

Replacing SVGs Using JavaScript

I thought about using JavaScript to swap in the larger SVG at a specified breakpoint:

// Assumes svgContainer, desktopSVG, and mobileSVG have been defined elsewhere.
if (window.matchMedia('(min-width: 64rem)').matches) {
  svgContainer.innerHTML = desktopSVG; // inject the large-screen artwork
} else {
  svgContainer.innerHTML = mobileSVG; // inject the small-screen artwork
}

Leaving aside the fact that JavaScript would now be critical to how the design is displayed, both SVGs would usually be loaded anyway, which adds DOM complexity and unnecessary weight. Plus, maintenance becomes a problem as there are now two versions of the artwork to maintain, doubling the time it would take to update something as small as the shape of Quick Draw’s tail.

The Solution: One SVG Symbol Library And Multiple Uses

Remember, my goal is to:

  • Serve one version of Bow Wow Bandit to smaller screens,
  • Serve a different version to larger screens,
  • Define my artwork just once (DRY), and
  • Be able to resize and reposition elements.

I don’t read about it enough, but the <symbol> element lets you define reusable SVG elements that can be hidden and reused to improve maintainability and reduce code bloat. They’re like components for SVG: create once and use wherever you need them:

<svg xmlns="http://www.w3.org/2000/svg" style="display: none;">
  <symbol id="quick-draw-body" viewBox="0 0 620 700">
    <g class="quick-draw-body">[…]</g>
  </symbol>
  <!-- ... -->
</svg>

<use href="#quick-draw-body" />

A <symbol> is like storing a character in a library. I can reference it as many times as I need, to keep my code consistent and lightweight. Using <use> elements, I can insert the same symbol multiple times, at different positions or sizes, and even in different SVGs.

Each <symbol> must have its own viewBox, which defines its internal coordinate system. That means paying special attention to how SVG elements are exported from apps like Sketch.

Exporting For Individual Viewboxes

I wrote before about how I export elements in layers to make working with them easier. That process is a little different when creating symbols.

Ordinarily, I would export all my elements using the same viewBox size. But when I’m creating a symbol, I need it to have its own specific viewBox.

So I export each element as an individually sized SVG, which gives me the dimensions I need to convert its content into a symbol. Let’s take the SVG of Quick Draw McGraw’s hat, which has a viewBox size of 294×182:

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 294 182">
  <!-- ... -->
</svg>

I swap the SVG tags for <symbol> and add its artwork to my SVG library:

<svg xmlns="http://www.w3.org/2000/svg" style="display: none;">
  <symbol id="quick-draw-hat" viewBox="0 0 294 182">
    <g class="quick-draw-hat">[…]</g>
  </symbol>
</svg>

Then, I repeat the process for all the remaining elements in my artwork. Now, if I ever need to update any of my symbols, the changes will be automatically applied to every instance it’s used.

Using A <symbol> In Multiple SVGs

I wanted my elements to appear in both versions of Bow Wow Bandit, one arrangement for smaller screens and an alternative arrangement for larger ones. So, I create both SVGs:

<svg class="svg-small" viewBox="0 0 1080 1440">
  <!-- ... -->
</svg>

<svg class="svg-large" viewBox="0 0 1920 1080">
  <!-- ... -->
</svg>

…and insert links to my symbols in both:

<svg class="svg-small" viewBox="0 0 1080 1440">
  <use href="#quick-draw-hat" />
</svg>

<svg class="svg-large" viewBox="0 0 1920 1080">
  <use href="#quick-draw-hat" />
</svg>

Positioning Symbols

Once I’ve placed symbols into my layout using <use>, my next step is to position them, which is especially important if I want alternative layouts for different screen sizes. Symbols behave like <g> groups, so I can scale and move them using attributes like width, height, and transform:

<svg class="svg-small" viewBox="0 0 1080 1440">
  <use href="#quick-draw-hat" width="294" height="182" transform="translate(-30,610)"/>
</svg>

<svg class="svg-large" viewBox="0 0 1920 1080">
  <use href="#quick-draw-hat" width="294" height="182" transform="translate(350,270)"/>
</svg>

I can place each <use> element independently using transform. This is powerful because rather than repositioning elements inside my SVGs, I move the <use> references. My internal layout stays clean, and the file size remains small because I’m not duplicating artwork. A browser only loads it once, which reduces bandwidth and speeds up page rendering. And because I’m always referencing the same symbol, their appearance stays consistent, whatever the screen size.

Animating <use> Elements

Here’s where things got tricky. I wanted to animate parts of my characters — like Quick Draw’s hat tilting and his legs kicking. But when I added CSS animations targeting internal elements inside a <symbol>, nothing happened.

Tip: You can animate the <use> element itself, but not elements inside the <symbol>. If you want individual parts to move, make them their own symbols and animate each <use>.

Turns out, you can’t style or animate a <symbol>, because <use> creates shadow DOM clones that aren’t easily targetable. So, I had to get sneaky. Inside each <symbol> in my library SVG, I added a <g> element around the part I wanted to animate:

<symbol id="quick-draw-hat" viewBox="0 0 294 182">
  <g class="quick-draw-hat">
    <!-- ... -->
  </g>
</symbol>

…and animated it using an attribute selector that targets the href attribute of the <use> element:

use[href="#quick-draw-hat"] {
  animation-delay: 0.5s;
  animation-direction: alternate;
  animation-duration: 1s;
  animation-iteration-count: infinite;
  animation-name: hat-rock;
  animation-timing-function: ease-in-out;
  transform-origin: center bottom;
}

@keyframes hat-rock {
  from { transform: rotate(-2deg); }
  to { transform: rotate(2deg); }
}

Media Queries For Display Control

Once I’ve created my two visible SVGs — one for small screens and one for larger ones — the final step is deciding which version to show at which screen size. I use CSS Media Queries to hide one SVG and show the other. I start by showing the small-screen SVG by default:

.svg-small { display: block; }
.svg-large { display: none; }

Then I use a min-width media query to switch to the large-screen SVG at 64rem and above:

@media (min-width: 64rem) {
  .svg-small { display: none; }
  .svg-large { display: block; }
}

This ensures there’s only ever one SVG visible at a time, keeping my layout simple and the DOM free from unnecessary clutter. And because both visible SVGs reference the same hidden <symbol> library, the browser only downloads the artwork once, regardless of how many <use> elements appear across the two layouts.
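Pulling the pieces together, the overall pattern looks something like this (heavily abridged; the real artwork and the remaining symbols are omitted):

<!-- Hidden symbol library: artwork defined once -->
<svg xmlns="http://www.w3.org/2000/svg" style="display: none;">
  <symbol id="quick-draw-hat" viewBox="0 0 294 182">
    <g class="quick-draw-hat"><!-- ... --></g>
  </symbol>
</svg>

<!-- Two visible layouts referencing the same symbols -->
<svg class="svg-small" viewBox="0 0 1080 1440">
  <use href="#quick-draw-hat" width="294" height="182" transform="translate(-30,610)"/>
</svg>

<svg class="svg-large" viewBox="0 0 1920 1080">
  <use href="#quick-draw-hat" width="294" height="182" transform="translate(350,270)"/>
</svg>

<style>
  .svg-small { display: block; }
  .svg-large { display: none; }

  @media (min-width: 64rem) {
    .svg-small { display: none; }
    .svg-large { display: block; }
  }
</style>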

Wrapping Up

By combining <symbol>, <use>, CSS Media Queries, and specific transforms, I can build adaptive SVGs that reposition their elements without duplicating content, loading extra assets, or relying on JavaScript. I need to define each graphic only once in a hidden symbol library. Then I can reuse those graphics, as needed, inside several visible SVGs. With CSS doing the layout switching, the result is fast and flexible.

It’s a reminder that some of the most powerful techniques on the web don’t need big frameworks or complex tooling — just a bit of SVG know-how and a clever use of the basics.

]]>
hello@smashingmagazine.com (Andy Clarke)
<![CDATA[Intent Prototyping: A Practical Guide To Building With Clarity (Part 2)]]> https://smashingmagazine.com/2025/10/intent-prototyping-practical-guide-building-clarity/ https://smashingmagazine.com/2025/10/intent-prototyping-practical-guide-building-clarity/ Fri, 03 Oct 2025 10:00:00 GMT In Part 1 of this series, we explored the “lopsided horse” problem born from mockup-centric design and demonstrated how the seductive promise of vibe coding often leads to structural flaws. The main question remains:

How might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one, without getting caught in the ambiguity trap?

In other words, we need a way to build prototypes that are both fast to create and founded on a clear, unambiguous blueprint.

The answer is a more disciplined process I call Intent Prototyping (kudos to Marco Kotrotsos, who coined Intent-Oriented Programming). This method embraces the power of AI-assisted coding but rejects ambiguity, putting the designer’s explicit intent at the very center of the process. It receives a holistic expression of intent (sketches for screen layouts, conceptual model description, boxes-and-arrows for user flows) and uses it to generate a live, testable prototype.

This method solves the concerns we’ve discussed in Part 1 in the best way possible:

  • Unlike static mockups, the prototype is fully interactive and can be easily populated with a large amount of realistic data. This lets us test the system’s underlying logic as well as its surface.
  • Unlike a vibe-coded prototype, it is built from a stable, unambiguous specification. This prevents the conceptual model failures and design debt that happen when things are unclear. The engineering team doesn’t need to reverse-engineer a black box or become “code archaeologists” to guess at the designer’s vision, as they receive not only a live prototype but also a clearly documented design intent behind it.

This combination makes the method especially suited for designing complex enterprise applications. It allows us to test the system’s most critical point of failure, its underlying structure, at a speed and flexibility that was previously impossible. Furthermore, the process is built for iteration. You can explore as many directions as you want simply by changing the intent and evolving the design based on what you learn from user testing.

My Workflow

To illustrate this process in action, let’s walk through a case study. It’s the very same example I’ve used to illustrate the vibe coding trap: a simple tool to track tests to validate product ideas. You can find the complete project, including all the source code and documentation files discussed below, in this GitHub repository.

Step 1: Expressing An Intent

Imagine we’ve already done proper research, and having mused on the defined problem, I begin to form a vague idea of what the solution might look like. I need to capture this idea immediately, so I quickly sketch it out:

In this example, I used Excalidraw, but the tool doesn’t really matter. Note that we deliberately keep it rough, as visual details are not something we need to focus on at this stage. And we are not going to be stuck here: we want to make a leap from this initial sketch directly to a live prototype that we can put in front of potential users. Polishing those sketches would not bring us any closer to achieving our goal.

What we need to move forward is to add to those sketches just enough details so that they may serve as a sufficient input for a junior frontend developer (or, in our case, an AI assistant). This requires explaining the following:

  • Navigational paths (clicking here takes you to).
  • Interaction details that can’t be shown in a static picture (e.g., non-scrollable areas, adaptive layout, drag-and-drop behavior).
  • What parts might make sense to build as reusable components.
  • Which components from the design system (I’m using Ant Design Library) should be used.
  • Any other comments that help understand how this thing should work (while sketches illustrate how it should look).

Having added all those details, we end up with such an annotated sketch:

As you see, this sketch covers both the Visualization and Flow aspects. You may ask, what about the Conceptual Model? Without that part, the expression of our intent will not be complete. One way would be to add it somewhere in the margins of the sketch (for example, as a UML Class Diagram), and I would do so in the case of a more complex application, where the model cannot be simply derived from the UI. But in our case, we can save effort and ask an LLM to generate a comprehensive description of the conceptual model based on the sketch.

For tasks of this sort, the LLM of my choice is Gemini 2.5 Pro. What is important is that this is a multimodal model that can accept not only text but also images as input (GPT-5 and Claude-4 also fit that criteria). I use Google AI Studio, as it gives me enough control and visibility into what’s happening:

Note: All the prompts that I use here and below can be found in the Appendices. The prompts are not custom-tailored to any particular project; they are supposed to be reused as they are.

As a result, Gemini gives us a description and the following diagram:

The diagram might look technical, but I believe that a clear understanding of all objects, their attributes, and relationships between them is key to good design. That’s why I consider the Conceptual Model to be an essential part of expressing intent, along with the Flow and Visualization.
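For readers unfamiliar with the format, here is a deliberately simplified, hypothetical example of what such a Mermaid class diagram can look like for an idea-validation tracker. The entities, attributes, and relationship are illustrative only, not the model Gemini actually produced:

classDiagram
  class Idea {
    string id
    string name
    string createdAt
    string updatedAt
  }
  class Test {
    string id
    string hypothesis
    string status
    string createdAt
    string updatedAt
  }
  Idea "1" --> "0..*" Test : is validated by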

As a result of this step, our intent is fully expressed in two files: Sketch.png and Model.md. This will be our durable source of truth.

Step 2: Preparing A Spec And A Plan

The purpose of this step is to create a comprehensive technical specification and a step-by-step plan. Most of the work here is done by AI; you just need to keep an eye on it.

I separate the Data Access Layer and the UI layer, and create specifications for them using two different prompts (see Appendices 2 and 3). The output of the first prompt (the Data Access Layer spec) serves as an input for the second one. Note that, as an additional input, we give the guidelines tailored for prototyping needs (see Appendices 8, 9, and 10). They are not specific to this project. The technical approach encoded in those guidelines is out of the scope of this article.

As a result, Gemini provides us with content for DAL.md and UI.md. Although in most cases the result is reliable, you might still want to scrutinize the output. You don’t need to be a professional programmer to make sense of it, but some level of programming literacy helps. Even if you don’t have such skills, don’t get discouraged: if you don’t understand something, you always know who to ask. Do it in Google AI Studio before refreshing the context window. If you believe you’ve spotted a problem, let Gemini know, and it will either fix it or explain why the suggested approach is actually better.
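To give a feel for what such a Data Access Layer spec typically describes, here is a minimal TypeScript sketch of a Zustand store following that pattern, assuming a hypothetical Idea entity. It is illustrative only, not the code the agent actually produces:

import { create } from 'zustand';
import { persist } from 'zustand/middleware';

interface Idea {
  id: string;
  name: string;
  createdAt: string; // ISO 8601
  updatedAt: string; // ISO 8601
}

interface IdeaState {
  ideas: Record<string, Idea>; // dictionary for O(1) access
  addIdea: (idea: Idea) => void;
  removeIdea: (id: string) => void;
}

export const useIdeaStore = create<IdeaState>()(
  persist(
    (set) => ({
      ideas: {},
      addIdea: (idea) =>
        set((state) => ({ ideas: { ...state.ideas, [idea.id]: idea } })),
      removeIdea: (id) =>
        set((state) => {
          const { [id]: _removed, ...rest } = state.ideas;
          return { ideas: rest };
        }),
    }),
    { name: 'idea-storage' } // persistence key in localStorage
  )
);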

It’s important to remember that by their nature, LLMs are not deterministic and, to put it simply, can be forgetful about small details, especially when it comes to details in sketches. Fortunately, you don’t have to be an expert to notice that the “Delete” button, which is in the upper right corner of the sketch, is not mentioned in the spec.

Don’t get me wrong: Gemini does a stellar job most of the time, but there are still times when it slips up. Just let it know about the problems you’ve spotted, and everything will be fixed.

Once we have Sketch.png, Model.md, DAL.md, UI.md, and we have reviewed the specs, we can grab a coffee. We deserve it: our technical design documentation is complete. It will serve as a stable foundation for building the actual thing, without deviating from our original intent, and ensuring that all components fit together perfectly, and all layers are stacked correctly.

One last thing we can do before moving on to the next steps is to prepare a step-by-step plan. We split that plan into two parts: one for the Data Access Layer and another for the UI. You can find prompts I use to create such a plan in Appendices 4 and 5.

Step 3: Executing The Plan

To start building the actual thing, we need to switch to another category of AI tools. Up until this point, we have relied on Generative AI. It excels at creating new content (in our case, specifications and plans) based on a single prompt. I’m using Google Gemini 2.5 Pro in Google AI Studio, but other similar tools may also fit such one-off tasks: ChatGPT, Claude, Grok, and DeepSeek.

However, at this step, this wouldn’t be enough. Building a prototype based on specs and according to a plan requires an AI that can read context from multiple files, execute a sequence of tasks, and maintain coherence. A simple generative AI can’t do this. It would be like asking a person to build a house by only ever showing them a single brick. What we need is an agentic AI that can be given the full house blueprint and a project plan, and then get to work building the foundation, framing the walls, and adding the roof in the correct sequence.

My coding agent of choice is Google Gemini CLI, simply because Gemini 2.5 Pro serves me well, and I don’t think we need any middleman like Cursor or Windsurf (which would use Claude, Gemini, or GPT under the hood anyway). If I used Claude, my choice would be Claude Code, but since I’m sticking with Gemini, Gemini CLI it is. But if you prefer Cursor or Windsurf, I believe you can apply the same process with your favourite tool.

Before tasking the agent, we need to create a basic template for our React application. I won’t go into this here. You can find plenty of tutorials on how to scaffold an empty React project using Vite.

Then we put all our files into that project:

Once the basic template with all our files is ready, we open Terminal, go to the folder where our project resides, and type “gemini”:

And we send the prompt to build the Data Access Layer (see Appendix 6). That prompt implies step-by-step execution, so upon completion of each step, I send the following:

Thank you! Now, please move to the next task.
Remember that you must not make assumptions based on common patterns; always verify them with the actual data from the spec. 
After each task, stop so that I can test it. Don’t move to the next task before I tell you to do so.

As the last task in the plan, the agent builds a special page where we can test all the capabilities of our Data Access Layer, so that we can manually test it. It may look like this:

It doesn’t look fancy, to say the least, but it allows us to ensure that the Data Access Layer works correctly before we proceed with building the final UI.

And finally, we clear the Gemini CLI context window to give it more headspace and send the prompt to build the UI (see Appendix 7). This prompt also implies step-by-step execution. Upon completion of each step, we test how it works and how it looks, following the “Manual Testing Plan” from UI-plan.md. I have to say that despite the fact that the sketch has been uploaded to the model context and, in general, Gemini tries to follow it, attention to visual detail is not one of its strengths (yet). Usually, a few additional nudges are needed at each step to improve the look and feel:

Once I’m happy with the result of a step, I ask Gemini to move on:

Thank you! Now, please move to the next task.
Make sure you build the UI according to the sketch; this is very important. Remember that you must not make assumptions based on common patterns; always verify them with the actual data from the spec and the sketch.
After each task, stop so that I can test it. Don’t move to the next task before I tell you to do so.

Before long, the result looks like this, and in every detail it works exactly as we intended:

The prototype is up and running and looking nice. Does it mean that we are done with our work? Surely not, the most fascinating part is just beginning.

Step 4: Learning And Iterating

It’s time to put the prototype in front of potential users and learn more about whether this solution relieves their pain or not.

And as soon as we learn something new, we iterate. We adjust or extend the sketches and the conceptual model, based on that new input, we update the specifications, create plans to make changes according to the new specifications, and execute those plans. In other words, for every iteration, we repeat the steps I’ve just walked you through.

Is This Workflow Too Heavy?

This four-step workflow may create an impression of a somewhat heavy process that requires too much thinking upfront and doesn’t really facilitate creativity. But before jumping to that conclusion, consider the following:

  • In practice, only the first step requires real effort, as well as learning in the last step. AI does most of the work in between; you just need to keep an eye on it.
  • Individual iterations don’t need to be big. You can start with a Walking Skeleton: the bare minimum implementation of the thing you have in mind, and add more substance in subsequent iterations. You are welcome to change your mind about the overall direction in between iterations.
  • And last but not least, maybe the idea of “think before you do” is not something you need to run away from. A clear and unambiguous statement of intent can prevent many unnecessary mistakes and save a lot of effort down the road.

Intent Prototyping Vs. Other Methods

There is no method that fits all situations, and Intent Prototyping is not an exception. Like any specialized tool, it has a specific purpose. The most effective teams are not those who master a single method, but those who understand which approach to use to mitigate the most significant risk at each stage. The comparison below makes this choice clearer. It puts Intent Prototyping next to other common methods and tools and explains each one in terms of the primary goal it helps achieve and the specific risks it is best suited to mitigate.

Intent Prototyping
  • Goal: To rapidly iterate on the fundamental architecture of a data-heavy application with a complex conceptual model, sophisticated business logic, and non-linear user flows.
  • Risks it is best suited to mitigate: Building a system with a flawed or incoherent conceptual model, leading to critical bugs and costly refactoring.
  • Examples: A CRM (Customer Relationship Management system); a resource management tool; a no-code integration platform (admin’s UI).
  • Why: It enforces conceptual clarity. This not only de-risks the core structure but also produces a clear, documented blueprint that serves as a superior specification for the engineering handoff.

Vibe Coding (Conversational)
  • Goal: To rapidly explore interactive ideas through improvisation.
  • Risks it is best suited to mitigate: Losing momentum because of analysis paralysis.
  • Examples: An interactive data table with live sorting/filtering; a novel navigation concept; a proof-of-concept for a single, complex component.
  • Why: It has the smallest loop between an idea conveyed in natural language and an interactive outcome.

Axure
  • Goal: To test complicated conditional logic within a specific user journey, without having to worry about how the whole system works.
  • Risks it is best suited to mitigate: Designing flows that break when users don’t follow the “happy path.”
  • Examples: A multi-step e-commerce checkout; a software configuration wizard; a dynamic form with dependent fields.
  • Why: It’s made to create complex if-then logic and manage variables visually. This lets you test complicated paths and edge cases in a user journey without writing any code.

Figma
  • Goal: To make sure that the user interface looks good, aligns with the brand, and has a clear information architecture.
  • Risks it is best suited to mitigate: Making a product that looks bad, doesn’t fit with the brand, or has a layout that is hard to understand.
  • Examples: A marketing landing page; a user onboarding flow; presenting a new visual identity.
  • Why: It excels at high-fidelity visual design and provides simple, fast tools for linking static screens.

ProtoPie, Framer
  • Goal: To make high-fidelity micro-interactions feel just right.
  • Risks it is best suited to mitigate: Shipping an application that feels cumbersome and unpleasant to use because of poorly executed interactions.
  • Examples: A custom pull-to-refresh animation; a fluid drag-and-drop interface; an animated chart or data visualization.
  • Why: These tools let you manipulate animation timelines, physics, and device sensor inputs in great detail. Designers can carefully work on and test the small things that make an interface feel really polished and fun to use.

Low-code / No-code Tools (e.g., Bubble, Retool)
  • Goal: To create a working, data-driven app as quickly as possible.
  • Risks it is best suited to mitigate: The application will never be built because traditional development is too expensive.
  • Examples: An internal inventory tracker; a customer support dashboard; a simple directory website.
  • Why: They put a UI builder, a database, and hosting all in one place. The goal is not merely to make a prototype of an idea, but to make and release an actual, working product. This is the last step for many internal tools or MVPs.


The key takeaway is that each method is a specialized tool for mitigating a specific type of risk. For example, Figma de-risks the visual presentation. ProtoPie de-risks the feel of an interaction. Intent Prototyping is in a unique position to tackle the most foundational risk in complex applications: building on a flawed or incoherent conceptual model.

Bringing It All Together

The era of the “lopsided horse” design, sleek on the surface but structurally unsound, is a direct result of the trade-off between fidelity and flexibility. This trade-off has led to a process filled with redundant effort and misplaced focus. Intent Prototyping, powered by modern AI, eliminates that conflict. It’s not just a shortcut to building faster — it’s a fundamental shift in how we design. By putting a clear, unambiguous intent at the heart of the process, it lets us get rid of the redundant work and focus on architecting a sound and robust system.

There are three major benefits to this renewed focus. First, by going straight to live, interactive prototypes, we shift our validation efforts from the surface to the deep, testing the system’s actual logic with users from day one. Second, the very act of documenting the design intent makes us clear about our ideas, ensuring that we fully understand the system’s underlying logic. Finally, this documented intent becomes a durable source of truth, eliminating the ambiguous handoffs and the redundant, error-prone work of having engineers reverse-engineer a designer’s vision from a black box.

Ultimately, Intent Prototyping changes the object of our work. It allows us to move beyond creating pictures of a product and empowers us to become architects of blueprints for a system. With the help of AI, we can finally make the live prototype the primary canvas for ideation, not just a high-effort afterthought.

Appendices

You can find the full Intent Prototyping Starter Kit, which includes all those prompts and guidelines, as well as the example from this article and a minimal boilerplate project, in this GitHub repository.

Appendix 1: Sketch to UML Class Diagram

You are an expert Senior Software Architect specializing in Domain-Driven Design. You are tasked with defining a conceptual model for an app based on information from a UI sketch.

## Workflow

Follow these steps precisely:

**Step 1:** Analyze the sketch carefully. There should be no ambiguity about what we are building.

**Step 2:** Generate the conceptual model description in the Mermaid format using a UML class diagram.

## Ground Rules

- Every entity must have the following attributes:
    - id (string)
    - createdAt (string, ISO 8601 format)
    - updatedAt (string, ISO 8601 format)
- Include all attributes shown in the UI: If a piece of data is visually represented as a field for an entity, include it in the model, even if it's calculated from other attributes.
- Do not add any speculative entities, attributes, or relationships ("just in case"). The model should serve the current sketch's requirements only. 
- Pay special attention to cardinality definitions (e.g., if a relationship is optional on both sides, it cannot be "1" -- "0..*", it must be "0..1" -- "0..*").
- Use only valid syntax in the Mermaid diagram.
- Do not include enumerations in the Mermaid diagram.
- Add comments explaining the purpose of every entity, attribute, and relationship, and their expected behavior (not as a part of the diagram, in the Markdown file).

## Naming Conventions

- Names should reveal intent and purpose.
- Use PascalCase for entity names.
- Use camelCase for attributes and relationships.
- Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).

## Final Instructions

- **No Assumptions:** Base every detail on visual evidence in the sketch, not on common design patterns. 
- **Double-Check:** After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification. 
- **Do not add redundant empty lines between items.** 

Your final output should be the complete, raw markdown content for Model.md.

Appendix 2: Sketch to DAL Spec

You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with creating a comprehensive technical specification for the development team in a structured markdown document, based on a UI sketch and a conceptual model description. 

## Workflow

Follow these steps precisely:

**Step 1:** Analyze the documentation carefully:

- Model.md: the conceptual model
- Sketch.png: the UI sketch

There should be no ambiguity about what we are building.

**Step 2:** Check out the guidelines:

- TS-guidelines.md: TypeScript Best Practices
- React-guidelines.md: React Best Practices
- Zustand-guidelines.md: Zustand Best Practices

**Step 3:** Create a Markdown specification for the stores and entity-specific hook that implements all the logic and provides all required operations.

---

## Markdown Output Structure

Use this template for the entire document.


# Data Access Layer Specification

This document outlines the specification for the data access layer of the application, following the principles defined in `docs/guidelines/Zustand-guidelines.md`.

## 1. Type Definitions

Location: `src/types/entities.ts`

### 1.1. `BaseEntity`

A shared interface that all entities should extend.

[TypeScript interface definition]

### 1.2. `[Entity Name]`

The interface for the [Entity Name] entity.

[TypeScript interface definition]

## 2. Zustand Stores

### 2.1. Store for `[Entity Name]`

**Location:** `src/stores/[Entity Name (plural)].ts`

The Zustand store will manage the state of all [Entity Name] items.

**Store State (`[Entity Name]State`):**

[TypeScript interface definition]

**Store Implementation (`use[Entity Name]Store`):**

- The store will be created using `create<[Entity Name]State>()(...)`.
- It will use the `persist` middleware from `zustand/middleware` to save state to `localStorage`. The persistence key will be `[entity-storage-key]`.
- `[Entity Name (plural, camelCase)]` will be a dictionary (`Record<string, [Entity]>`) for O(1) access.

**Actions:**

- **`add[Entity Name]`**:
    [Define the operation behavior based on entity requirements]
- **`update[Entity Name]`**:
    [Define the operation behavior based on entity requirements]
- **`remove[Entity Name]`**:
    [Define the operation behavior based on entity requirements]
- **`doSomethingElseWith[Entity Name]`**:
    [Define the operation behavior based on entity requirements]

## 3. Custom Hooks

### 3.1. `use[Entity Name (plural)]`

**Location:** `src/hooks/use[Entity Name (plural)].ts`

The hook will be the primary interface for UI components to interact with [Entity Name] data.

**Hook Return Value:**

[TypeScript interface definition]

**Hook Implementation:**

[List all properties and methods returned by this hook, and briefly explain the logic behind them, including data transformations, memoization. Do not write the actual code here.]

--- 

## Final Instructions

- **No Assumptions:** Base every detail in the specification on the conceptual model or visual evidence in the sketch, not on common design patterns. 
- **Double-Check:** After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification. 
- **Do not add redundant empty lines between items.** 

Your final output should be the complete, raw markdown content for DAL.md.

Appendix 3: Sketch to UI Spec

You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with creating a comprehensive technical specification by translating a UI sketch into a structured markdown document for the development team.

## Workflow

Follow these steps precisely:

**Step 1:** Analyze the documentation carefully: 

- Sketch.png: the UI sketch
  - Note that red lines, red arrows, and red text within the sketch are annotations for you and should not be part of the final UI design. They provide hints and clarification. Never translate them to UI elements directly.
- Model.md: the conceptual model
- DAL.md: the Data Access Layer spec

There should be no ambiguity about what we are building.

**Step 2:** Check out the guidelines:

- TS-guidelines.md: TypeScript Best Practices
- React-guidelines.md: React Best Practices

**Step 3:** Generate the complete markdown content for a new file, UI.md.

---

## Markdown Output Structure

Use this template for the entire document.


# UI Layer Specification

This document specifies the UI layer of the application, breaking it down into pages and reusable components based on the provided sketches. All components will adhere to Ant Design's principles and utilize the data access patterns defined in `docs/guidelines/Zustand-guidelines.md`.

## 1. High-Level Structure

The application is a single-page application (SPA). It will be composed of a main layout, one primary page, and several reusable components. 

### 1.1. `App` Component

The root component that sets up routing and global providers.

-   **Location**: `src/App.tsx`
-   **Purpose**: To provide global context, including Ant Design's `ConfigProvider` and `App` contexts for message notifications, and to render the main page.
-   **Composition**:
    -   Wraps the application with `ConfigProvider` and `App as AntApp` from 'antd' to enable global message notifications as per `simple-ice/antd-messages.mdc`.
    -   Renders `[Page Name]`.

## 2. Pages

### 2.1. `[Page Name]`

-   **Location:** `src/pages/PageName.tsx`
-   **Purpose:** [Briefly describe the main goal and function of this page]
-   **Data Access:**
    [List the specific hooks and functions this component uses to fetch or manage its data]
-   **Internal State:**
    [Describe any state managed internally by this page using `useState`]
-   **Composition:**
    [Briefly describe the content of this page]
-   **User Interactions:**
    [Describe how the user interacts with this page]
-   **Logic:**
    [If applicable, provide additional comments on how this page should work]

## 3. Components

### 3.1. `[Component Name]`

-   **Location:** `src/components/ComponentName.tsx`
-   **Purpose:** [Explain what this component does and where it's used]
-   **Props:**
    [TypeScript interface definition for the component's props. Props should be minimal. Avoid prop drilling by using hooks for data access.]
-   **Data Access:**
    [List the specific hooks and functions this component uses to fetch or manage its data]
-   **Internal State:**
    [Describe any state managed internally by this component using `useState`]
-   **Composition:**
    [Briefly describe the content of this component]
-   **User Interactions:**
    [Describe how the user interacts with the component]
-   **Logic:**
    [If applicable, provide additional comments on how this component should work]

--- 

## Final Instructions

- **No Assumptions:** Base every detail on the visual evidence in the sketch, not on common design patterns. 
- **Double-Check:** After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification. 
- **Do not add redundant empty lines between items.** 

Your final output should be the complete, raw markdown content for UI.md.

Appendix 4: DAL Spec to Plan

You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with creating a plan to build a Data Access Layer for an application based on a spec.

## Workflow

Follow these steps precisely:

**Step 1:** Analyze the documentation carefully:

- DAL.md: The full technical specification for the Data Access Layer of the application. Follow it carefully and to the letter.

There should be no ambiguity about what we are building.

**Step 2:** Check out the guidelines:

- TS-guidelines.md: TypeScript Best Practices
- React-guidelines.md: React Best Practices
- Zustand-guidelines.md: Zustand Best Practices

**Step 3:** Create a step-by-step plan to build a Data Access Layer according to the spec. 

Each task should:

- Focus on one concern
- Be reasonably small
- Have a clear start + end
- Contain clearly defined Objectives and Acceptance Criteria

The last step of the plan should include creating a page to test all the capabilities of our Data Access Layer, and making it the start page of this application, so that I can manually check if it works properly. 

I will hand this plan over to an engineering LLM that will be told to complete one task at a time, allowing me to review results in between.

## Final Instructions

- Note that we are not starting from scratch; the basic template has already been created using Vite.
- Do not add redundant empty lines between items.

Your final output should be the complete, raw markdown content for DAL-plan.md.

Appendix 5: UI Spec to Plan

You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with creating a plan to build a UI layer for an application based on a spec and a sketch.

## Workflow

Follow these steps precisely:

**Step 1:** Analyze the documentation carefully:

- UI.md: The full technical specification for the UI layer of the application. Follow it carefully and to the letter.
- Sketch.png: Contains important information about the layout and style, complements the UI Layer Specification. The final UI must be as close to this sketch as possible.

There should be no ambiguity about what we are building.

**Step 2:** Check out the guidelines:

- TS-guidelines.md: TypeScript Best Practices
- React-guidelines.md: React Best Practices

**Step 3:** Create a step-by-step plan to build a UI layer according to the spec and the sketch. 

Each task must:

- Focus on one concern.
- Be reasonably small.
- Have a clear start + end.
- Result in a verifiable increment of the application. Each increment should be manually testable to allow for functional review and approval before proceeding.
- Contain clearly defined Objectives, Acceptance Criteria, and Manual Testing Plan.

I will hand this plan over to an engineering LLM that will be told to complete one task at a time, allowing me to test in between.

## Final Instructions

- Note that we are not starting from scratch; the basic template has already been created using Vite, and the Data Access Layer has been built successfully.
- For every task, describe how components should be integrated for verification. You must use the provided hooks to connect to the live Zustand store data—do not use mock data (note that the Data Access Layer has already been built successfully).
- The Manual Testing Plan should read like a user guide. It must only contain actions a user can perform in the browser and must never reference any code files or programming tasks.
- Do not add redundant empty lines between items.

Your final output should be the complete, raw markdown content for UI-plan.md.


Appendix 6: DAL Plan to Code

You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with building a Data Access Layer for an application based on a spec.

## Workflow

Follow these steps precisely:

**Step 1:** Analyze the documentation carefully:

- @docs/specs/DAL.md: The full technical specification for the Data Access Layer of the application. Follow it carefully and to the letter. 

There should be no ambiguity about what we are building.

**Step 2:** Check out the guidelines:

- @docs/guidelines/TS-guidelines.md: TypeScript Best Practices
- @docs/guidelines/React-guidelines.md: React Best Practices
- @docs/guidelines/Zustand-guidelines.md: Zustand Best Practices

**Step 3:** Read the plan:

- @docs/plans/DAL-plan.md: The step-by-step plan to build the Data Access Layer of the application.

**Step 4:** Build a Data Access Layer for this application according to the spec and following the plan. 

- Complete one task from the plan at a time. 
- After each task, stop, so that I can test it. Don’t move to the next task before I tell you to do so. 
- Do not do anything else. At this point, we are focused on building the Data Access Layer.

## Final Instructions

- Do not make assumptions based on common patterns; always verify them with the actual data from the spec. 
- Do not start the development server; I'll do it myself.

Appendix 7: UI Plan to Code

You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with building a UI layer for an application based on a spec and a sketch.

## Workflow

Follow these steps precisely:

**Step 1:** Analyze the documentation carefully:

- @docs/specs/UI.md: The full technical specification for the UI layer of the application. Follow it carefully and to the letter.
- @docs/intent/Sketch.png: Contains important information about the layout and style, complements the UI Layer Specification. The final UI must be as close to this sketch as possible.
- @docs/specs/DAL.md: The full technical specification for the Data Access Layer of the application. That layer is already ready. Use this spec to understand how to work with it. 

There should be no ambiguity about what we are building.

**Step 2:** Check out the guidelines:

- @docs/guidelines/TS-guidelines.md: TypeScript Best Practices
- @docs/guidelines/React-guidelines.md: React Best Practices

**Step 3:** Read the plan:

- @docs/plans/UI-plan.md: The step-by-step plan to build the UI layer of the application.

**Step 4:** Build a UI layer for this application according to the spec and the sketch, following the step-by-step plan: 

- Complete one task from the plan at a time. 
- Make sure you build the UI according to the sketch; this is very important.
- After each task, stop, so that I can test it. Don’t move to the next task before I tell you to do so. 

## Final Instructions

- Do not make assumptions based on common patterns; always verify them with the actual data from the spec and the sketch. 
- Follow Ant Design's default styles and components. 
- Do not touch the data access layer: it's ready and it's perfect. 
- Do not start the development server; I'll do it myself.

Appendix 8: TS-guidelines.md

# Guidelines: TypeScript Best Practices

## Type System & Type Safety

- Use TypeScript for all code and enable strict mode.
- Ensure complete type safety throughout stores, hooks, and component interfaces.
- Prefer interfaces over types for object definitions; use types for unions, intersections, and mapped types.
- Entity interfaces should extend common patterns while maintaining their specific properties.
- Use TypeScript type guards in filtering operations for relationship safety.
- Avoid the 'any' type; prefer 'unknown' when necessary.
- Use generics to create reusable components and functions.
- Utilize TypeScript's features to enforce type safety.
- Use type-only imports (import type { MyType } from './types') when importing types, because verbatimModuleSyntax is enabled.
- Avoid enums; use maps instead (see the sketch after this list).
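
For illustration, here is a minimal sketch that combines several of these rules in one place. The `Task` entity, the `TASK_STATUS` map, and the `isDefined` guard are hypothetical names, and the `./types` module is assumed to exist per the Project Organization rules below.

```typescript
// Illustrative sketch; Task, TASK_STATUS, and isDefined are hypothetical names.
// Type-only import, as required when verbatimModuleSyntax is enabled.
import type { BaseEntity } from './types';

// Interface for an object shape, extending a common base entity.
export interface Task extends BaseEntity {
  title: string;
  status: TaskStatus;
}

// A const map instead of an enum; the union type is derived from it.
export const TASK_STATUS = {
  open: 'open',
  done: 'done',
} as const;
export type TaskStatus = (typeof TASK_STATUS)[keyof typeof TASK_STATUS];

// A type guard, useful in filtering operations for relationship safety.
export function isDefined<T>(value: T | undefined): value is T {
  return value !== undefined;
}
```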

## Naming Conventions

- Names should reveal intent and purpose.
- Use PascalCase for component names and types/interfaces.
- Suffix interfaces for React props with 'Props' (e.g., ButtonProps).
- Use camelCase for variables and functions.
- Use UPPER_CASE for constants.
- Use lowercase with dashes for directories, and PascalCase for files with components (e.g., components/auth-wizard/AuthForm.tsx).
- Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
- Favor named exports for components.

## Code Structure & Patterns

- Write concise, technical TypeScript code with accurate examples.
- Use functional and declarative programming patterns; avoid classes.
- Prefer iteration and modularization over code duplication.
- Use the "function" keyword for pure functions.
- Use curly braces for all conditionals for consistency and clarity.
- Structure files appropriately based on their purpose.
- Keep related code together and encapsulate implementation details.

## Performance & Error Handling

- Use immutable and efficient data structures and algorithms.
- Create custom error types for domain-specific errors (see the sketch after this list).
- Use try-catch blocks with typed catch clauses.
- Handle Promise rejections and async errors properly.
- Log errors appropriately and handle edge cases gracefully.
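
A brief sketch of custom error types and typed catch clauses, using hypothetical names (`NotFoundError`, `loadTask`); error classes extending `Error` are the usual exception to the avoid-classes rule above.

```typescript
// Illustrative sketch; NotFoundError and loadTask are hypothetical names.
export class NotFoundError extends Error {
  constructor(entity: string, id: string) {
    super(`${entity} with id "${id}" was not found`);
    this.name = 'NotFoundError';
  }
}

export async function loadTask(id: string): Promise<void> {
  try {
    // Stand-in for a real lookup that may fail.
    throw new NotFoundError('Task', id);
  } catch (error: unknown) {
    // Narrow the unknown error before handling it.
    if (error instanceof NotFoundError) {
      console.warn(error.message);
      return;
    }
    throw error; // rethrow anything we don't know how to handle
  }
}
```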

## Project Organization

- Place shared types in a types directory.
- Use barrel exports (index.ts) for organizing exports.
- Structure files and directories based on their purpose.

## Other Rules

- Use comments to explain complex logic or non-obvious decisions.
- Follow the single responsibility principle: each function should do exactly one thing.
- Follow the DRY (Don't Repeat Yourself) principle.
- Do not implement placeholder functions, empty methods, or "just in case" logic. Code should serve the current specification's requirements only.
- Use 2 spaces for indentation (no tabs).

Appendix 9: React-guidelines.md

# Guidelines: React Best Practices

## Component Structure

- Use functional components over class components
- Keep components small and focused
- Extract reusable logic into custom hooks
- Use composition over inheritance
- Implement proper prop types with TypeScript
- Structure React files: exported component, subcomponents, helpers, static content, types
- Use declarative TSX for React components
- Ensure that UI components use custom hooks for data fetching and operations rather than receiving data via props, except for the simplest components (see the sketch after this list)
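
A minimal sketch of that last point, assuming a hypothetical `useTasks` entity hook and a hypothetical `Task` entity; only a filter parameter is passed as a prop, while data and actions come from the hook.

```tsx
import { useTasks } from '../hooks/useTasks'; // hypothetical entity hook

interface TaskListProps {
  // Props stay minimal; data is fetched via the hook, not passed down.
  categoryId?: string;
}

export function TaskList({ categoryId }: TaskListProps) {
  const { tasks, remove } = useTasks({ categoryId });

  if (tasks.length === 0) {
    return <p>No tasks yet.</p>;
  }

  return (
    <ul>
      {tasks.map((task) => (
        <li key={task.id}>
          {task.title}
          <button onClick={() => remove({ id: task.id })}>Delete</button>
        </li>
      ))}
    </ul>
  );
}
```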

## React Patterns

- Utilize useState and useEffect hooks for state and side effects
- Use React.memo for performance optimization when needed
- Utilize React.lazy and Suspense for code-splitting (see the sketch after this list)
- Implement error boundaries for robust error handling
- Keep styles close to components
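
A small sketch of code-splitting and memoization together; `HeavyChart` and `Price` are hypothetical components, and `./HeavyChart` is an assumed module path.

```tsx
import { lazy, memo, Suspense } from 'react';

// Loaded on demand via code-splitting (assumes a default-exported component).
const HeavyChart = lazy(() => import('./HeavyChart'));

// Memoized so it only re-renders when its props change.
const Price = memo(function Price({ value }: { value: number }) {
  return <span>{value.toFixed(2)}</span>;
});

export function Dashboard() {
  return (
    <Suspense fallback={<p>Loading chart…</p>}>
      <HeavyChart />
      <Price value={42} />
    </Suspense>
  );
}
```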

## React Performance

- Avoid unnecessary re-renders
- Lazy load components and images when possible
- Implement efficient state management
- Optimize rendering strategies
- Optimize network requests
- Employ memoization techniques (e.g., React.memo, useMemo, useCallback)

## React Project Structure

/src
- /components - UI components (every component in a separate file)
- /hooks - public-facing custom hooks (every hook in a separate file)
- /providers - React context providers (every provider in a separate file)
- /pages - page components (every page in a separate file)
- /stores - entity-specific Zustand stores (every store in a separate file)
- /styles - global styles (if needed)
- /types - shared TypeScript types and interfaces

Appendix 10: Zustand-guidelines.md

# Guidelines: Zustand Best Practices

## Core Principles

- **Implement a data layer** for this React application following this specification carefully and to the letter.
- **Complete separation of concerns**: All data operations should be accessible in UI components through simple and clean entity-specific hooks, ensuring state management logic is fully separated from UI logic.
- **Shared state architecture**: Different UI components should work with the same shared state, despite using entity-specific hooks separately.

## Technology Stack

- **State management**: Use Zustand for state management with automatic localStorage persistence via the persist middleware.

## Store Architecture

- **Base entity:** Implement a BaseEntity interface with common properties that all entities extend:

  ```typescript
  export interface BaseEntity {
    id: string;
    createdAt: string; // ISO 8601 format
    updatedAt: string; // ISO 8601 format
  }
  ```
- **Entity-specific stores**: Create separate Zustand stores for each entity type (a sketch of one such store follows this list).
- **Dictionary-based storage**: Use dictionary/map structures (Record<string, Entity>) rather than arrays for O(1) access by ID.
- **Handle relationships**: Implement cross-entity relationships (like cascade deletes) within the stores where appropriate.
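
To make the architecture concrete, here is a minimal sketch of one entity store for a hypothetical `Task` entity; the store name, fields, and file paths are illustrative, not prescribed. It also shows the persistence, ID generation, existence checks, and return-type conventions described elsewhere in this document.

```typescript
import { create } from 'zustand';
import { persist } from 'zustand/middleware';
import type { BaseEntity } from '../types';

// Illustrative sketch; Task and useTaskStore are hypothetical names.
export interface Task extends BaseEntity {
  title: string;
  categoryIds: string[];
}

interface TaskStore {
  // Dictionary-based storage for O(1) access by ID.
  tasks: Record<string, Task>;
  add: (params: { title: string; categoryIds?: string[] }) => string;
  update: (params: { id: string; title: string }) => void;
  remove: (params: { id: string }) => void;
}

export const useTaskStore = create<TaskStore>()(
  persist(
    (set, get) => ({
      tasks: {},
      add: ({ title, categoryIds }) => {
        const id = crypto.randomUUID();
        const now = new Date().toISOString();
        set((state) => ({
          tasks: {
            ...state.tasks,
            [id]: { id, createdAt: now, updatedAt: now, title, categoryIds: categoryIds ?? [] },
          },
        }));
        return id; // add returns the generated ID
      },
      update: ({ id, title }) => {
        if (!get().tasks[id]) return; // existence check before updating
        set((state) => ({
          tasks: {
            ...state.tasks,
            [id]: { ...state.tasks[id], title, updatedAt: new Date().toISOString() },
          },
        }));
      },
      remove: ({ id }) => {
        if (!get().tasks[id]) return; // existence check before removing
        set((state) => {
          const rest = { ...state.tasks };
          delete rest[id];
          return { tasks: rest };
        });
      },
    }),
    { name: 'task-store' } // automatic localStorage persistence
  )
);
```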

## Hook Layer

The hook layer is the exclusive interface between UI components and the Zustand stores. It is designed to be simple, predictable, and follow a consistent pattern across all entities.

### Core Principles

1.  **One Hook Per Entity**: There will be a single, comprehensive custom hook for each entity (e.g., useBlogPosts, useCategories). This hook is the sole entry point for all data and operations related to that entity. Separate hooks for single-item access will not be created.
2.  **Return reactive data, not getter functions**: To prevent stale data, hooks must return the state itself, not a function that retrieves state. Parameterize hooks to accept filters and return the derived data directly. A component calling a getter function will not update when the underlying data changes.
3.  **Expose Dictionaries for O(1) Access**: To provide simple and direct access to data, every hook will return a dictionary (Record<string, Entity>) of the relevant items.

### The Standard Hook Pattern

Every entity hook will follow this implementation pattern (a sketch follows the list):

1.  **Subscribe** to the entire dictionary of entities from the corresponding Zustand store. This ensures the hook is reactive to any change in the data.
2.  **Filter** the data based on the parameters passed into the hook. This logic will be memoized with useMemo for efficiency. If no parameters are provided, the hook will operate on the entire dataset.
3.  **Return a Consistent Shape**: The hook will always return an object containing:
    *   A **filtered and sorted array** (e.g., blogPosts) for rendering lists.
    *   A **filtered dictionary** (e.g., blogPostsDict) for convenient O(1) lookup within the component.
    *   All necessary **action functions** (add, update, remove) and **relationship operations**.
    *   All necessary **helper functions** and **derived data objects**. Helper functions are suitable for pure, stateless logic (e.g., calculators). Derived data objects are memoized values that provide aggregated or summarized information from the state (e.g., an object containing status counts). They must be derived directly from the reactive state to ensure they update automatically when the underlying data changes.
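
A sketch of this pattern for a hypothetical `useTasks` hook built on top of an entity store like the one sketched above; all names and import paths are illustrative.

```typescript
import { useMemo } from 'react';
import { useTaskStore } from '../stores/taskStore'; // hypothetical store module
import type { Task } from '../stores/taskStore';

interface UseTasksParams {
  categoryId?: string;
}

export function useTasks({ categoryId }: UseTasksParams = {}) {
  // 1. Subscribe to the entire dictionary so the hook reacts to any change.
  const tasks = useTaskStore((state) => state.tasks);
  const add = useTaskStore((state) => state.add);
  const update = useTaskStore((state) => state.update);
  const remove = useTaskStore((state) => state.remove);

  // 2. Filter, memoized; with no parameters, operate on the whole dataset.
  const filtered = useMemo(() => {
    const all = Object.values(tasks);
    const filterId = categoryId; // local const so narrowing holds inside the callback
    return filterId ? all.filter((task) => task.categoryIds.includes(filterId)) : all;
  }, [tasks, categoryId]);

  // 3. Return a consistent shape: sorted array, dictionary, and actions.
  return useMemo(() => {
    const sorted = [...filtered].sort((a, b) => a.createdAt.localeCompare(b.createdAt));
    const tasksDict: Record<string, Task> = {};
    for (const task of sorted) {
      tasksDict[task.id] = task;
    }
    return { tasks: sorted, tasksDict, add, update, remove };
  }, [filtered, add, update, remove]);
}
```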

## API Design Standards

- **Object Parameters**: Use object parameters instead of multiple direct parameters for better extensibility:

  ```typescript
  // ✅ Preferred
  add({ title, categoryIds })

  // ❌ Avoid
  add(title, categoryIds)
  ```
- **Internal Methods**: Use underscore-prefixed methods for cross-store operations to maintain clean separation.

## State Validation Standards

- **Existence checks**: All update and remove operations should validate entity existence before proceeding.
- **Relationship validation**: Verify both entities exist before establishing relationships between them.

## Error Handling Patterns

- **Operation failures**: Define behavior when operations fail (e.g., updating non-existent entities).
- **Graceful degradation**: How to handle missing related entities in helper functions.

## Other Standards

- **Secure ID generation**: Use crypto.randomUUID() for entity ID generation instead of custom implementations for better uniqueness guarantees and security.
- **Return type consistency**: add operations return generated IDs for component workflows requiring immediate entity access, while update and remove operations return void to maintain clean modification APIs.


]]>
hello@smashingmagazine.com (Yegor Gilyov)
<![CDATA[Shades Of October (2025 Wallpapers Edition)]]> https://smashingmagazine.com/2025/09/desktop-wallpaper-calendars-october-2025/ https://smashingmagazine.com/2025/09/desktop-wallpaper-calendars-october-2025/ Tue, 30 Sep 2025 11:00:00 GMT As September comes to a close and October takes over, we are in the midst of a time of transition. The air in the morning feels crisper, the leaves are changing colors, and winding down with a warm cup of tea regains its almost-forgotten appeal after a busy summer. When we look closely, October is full of little moments that have the power to inspire, and whatever your secret to finding new inspiration might be, our monthly wallpapers series is bound to give you a little inspiration boost, too.

For this October edition, artists and designers from across the globe once again challenged their creative skills and designed wallpapers to spark your imagination. You'll find them compiled below, along with a selection of timeless October treasures from our wallpapers archives that are just too good to gather dust.

A huge thank you to everyone who shared their designs with us this month — this post wouldn’t exist without your creativity and kind support! Happy October!

  • You can click on every image to see a larger preview.
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t influenced by us in any way but rather designed from scratch by the artists themselves.
  • Submit your wallpaper design! 👩‍🎨
    Feeling inspired? We are always looking for creative talent and would love to feature your desktop wallpaper in one of our upcoming posts. Join in ↬
Midnight Mischief

Designed by Libra Fire from Serbia.

AI

Designed by Ricardo Gimenes from Spain.

Glowing Pumpkin Lanterns

“I was inspired by the classic orange and purple colors of October and Halloween, and wanted to combine those two themes to create a fun pumpkin lantern background.” — Designed by Melissa Bostjancic from New Jersey, United States.

Halloween 2040

Designed by Ricardo Gimenes from Spain.

When The Mind Opens

“In October, we observe World Mental Health Day. The open window in the head symbolizes light and fresh thoughts, the plant represents quiet inner growth and resilience, and the bird brings freedom and connection with the world. Together, they create an image of a mind that breathes, grows, and remains open to new beginnings.” — Designed by Ginger IT Solutions from Serbia.

Enter The Factory

“I took this photo while visiting an old factory. The red light was astonishing.” — Designed by Philippe Brouard from France.

The Crow And The Ghosts

“If my heart were a season, it would be autumn.” — Designed by Lívia Lénárt from Hungary.

The Night Drive

Designed by Vlad Gerasimov from Georgia.

Spooky Town

Designed by Xenia Latii from Germany.

Bird Migration Portal

“When I was young, I had a bird’s nest not so far from my room window. I watched the birds almost every day; because those swallows always left their nests in October. As a child, I dreamt that they all flew together to a nicer place, where they were not so cold.” — Designed by Eline Claeys from Belgium.

Hanlu

“The term ‘Hanlu’ literally translates as ‘Cold Dew.’ The cold dew brings brisk mornings and evenings. Eventually the briskness will turn cold, as winter is coming soon. And chrysanthemum is the iconic flower of Cold Dew.” — Designed by Hong, ZI-Qing from Taiwan.

Autumn’s Splendor

“The transition to autumn brings forth a rich visual tapestry of warm colors and falling leaves, making it a natural choice for a wallpaper theme.” — Designed by Farhan Srambiyan from India.

Ghostbusters

Designed by Ricardo Gimenes from Spain.

Hello Autumn

“Did you know that squirrels don’t just eat nuts? They really like to eat fruit, too. Since apples are the seasonal fruit of October, I decided to combine both things into a beautiful image.” — Designed by Erin Troch from Belgium.

Discovering The Universe

“Autumn is the best moment for discovering the universe. I am looking for a new galaxy or maybe… a UFO!” — Designed by Verónica Valenzuela from Spain.

The Return Of The Living Dead

Designed by Ricardo Gimenes from Spain.

Goddess Makosh

“At the end of the kolodar, as everything begins to ripen, the village sets out to harvesting. Together with the farmers goes Makosh, the Goddess of fields and crops, ensuring a prosperous harvest. What she gave her life and health all year round is now mature and rich, thus, as a sign of gratitude, the girls bring her bread and wine. The beautiful game of the goddess makes the hard harvest easier, while the song of the farmer permeates the field.” — Designed by PopArt Studio from Serbia.

Strange October Journey

“October makes the leaves fall to cover the land with lovely auburn colors and brings out all types of weird with them.” — Designed by Mi Ni Studio from Serbia.

Autumn Deer

Designed by Amy Hamilton from Canada.

Transitions

“To me, October is a transitional month. We gradually slide from summer to autumn. That’s why I chose to use a lot of gradients. I also wanted to work with simple shapes, because I think of October as the ‘back to nature/back to basics month’.” — Designed by Jelle Denturck from Belgium.

Happy Fall!

“Fall is my favorite season!” — Designed by Thuy Truong from the United States.

Roger That Rogue Rover

“The story is a mash-up of retro science fiction and zombie infection. What would happen if a Mars rover came into contact with an unknown Martian material and got infected with a virus? What if it reversed its intended purpose of research and exploration? Instead choosing a life of chaos and evil. What if they all ran rogue on Mars? Would humans ever dare to voyage to the red planet?” Designed by Frank Candamil from the United States.

Turtles In Space

“Finished September, with October comes the month of routines. This year we share it with turtles that explore space.” — Designed by Veronica Valenzuela from Spain.

First Scarf And The Beach

“When I was little, my parents always took me and my sister for a walk at the beach in Nieuwpoort. We didn't really do those beach walks in the summer but always when the sky started to turn gray and the days became colder. My sister and I always took out our warmest scarfs and played in the sand while my parents walked behind us. I really loved those Saturday or Sunday mornings where we were all together. I think October (when it’s not raining) is the perfect month to go to the beach for ‘uitwaaien’ (to blow out), to walk in the wind and take a break and clear your head, relieve the stress or forget one’s problems.” — Designed by Gwen Bogaert from Belgium.

Shades Of Gold

“We are about to experience the magical imagery of nature, with all the yellows, ochers, oranges, and reds coming our way this fall. With all the subtle sunrises and the burning sunsets before us, we feel so joyful that we are going to shout it out to the world from the top of the mountains.” — Designed by PopArt Studio from Serbia.

Autumn Vibes

“Autumn has come, the time of long walks in the rain, weekends spent with loved ones, with hot drinks, and a lot of tenderness. Enjoy.” — Designed by LibraFire from Serbia.

Game Night And Hot Chocolate

“To me, October is all about cozy evenings with hot chocolate, freshly baked cookies, and a game night with friends or family.” — Designed by Lieselot Geirnaert from Belgium.

Haunted House

“Love all the Halloween costumes and decorations!” — Designed by Tazi from Australia.

Say Bye To Summer

“And hello to autumn! The summer heat and high season is over. It’s time to pack our backpacks and head for the mountains — there are many treasures waiting to be discovered!” Designed by Agnes Sobon from Poland.

Tea And Cookies

“As it gets colder outside, all I want to do is stay inside with a big pot of tea, eat cookies and read or watch a movie, wrapped in a blanket. Is it just me?” — Designed by Miruna Sfia from Romania.

The Return

Designed by Ricardo Gimenes from Spain.

Boo!

Designed by Mad Fish Digital from Portland, OR.

Trick Or Treat

“Have you ever wondered if all the little creatures of the animal kingdom celebrate Halloween as humans do? My answer is definitely ‘YES! They do!’ They use acorns as baskets to collect all the treats, pastry brushes as brooms for the spookiest witches and hats made from the tips set of your pastry bag. So, if you happen to miss something from your kitchen or from your tool box, it may be one of them, trying to get ready for All Hallows’ Eve.” — Designed by Carla Dipasquale from Italy.

Dope Code

“October is the month when the weather in Poland starts to get colder, and it gets very rainy, too. You can’t always spend your free time outside, so it’s the perfect opportunity to get some hot coffee and work on your next cool web project!” — Designed by Robert Brodziak from Poland.

Happy Halloween

Designed by Ricardo Gimenes from Spain.

Ghostober

Designed by Ricardo Delgado from Mexico City.

Get Featured Next Month

Would you like to get featured in our next wallpapers post? We’ll publish the November wallpapers on October 31, so if you’d like to be a part of the collection, please don’t hesitate to submit your design. We can’t wait to see what you’ll come up with!

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[From Prompt To Partner: Designing Your Custom AI Assistant]]> https://smashingmagazine.com/2025/09/from-prompt-to-partner-designing-custom-ai-assistant/ https://smashingmagazine.com/2025/09/from-prompt-to-partner-designing-custom-ai-assistant/ Fri, 26 Sep 2025 10:00:00 GMT In “A Week In The Life Of An AI-Augmented Designer”, Kate stumbled her way through an AI-augmented sprint (coffee was chugged, mistakes were made). In “Prompting Is A Design Act”, we introduced WIRE+FRAME, a framework to structure prompts like designers structure creative briefs. Now we’ll take the next step: packaging those structured prompts into AI assistants you can design, reuse, and share.

AI assistants go by different names: CustomGPTs (ChatGPT), Agents (Copilot), and Gems (Gemini). But they all serve the same function — allowing you to customize the default AI model for your unique needs. If we carry over our smart intern analogy, think of these as interns trained to assist you with specific tasks, eliminating the need for repeated instructions or information, and who can support not just you, but your entire team.

Why Build Your Own Assistant?

If you’ve ever copied and pasted the same mega-prompt for the nth time, you’ve experienced the pain. An AI assistant turns a one-off “great prompt” into a dependable teammate. And if you’ve used any of the publicly available AI Assistants, you’ve realized quickly that they’re usually generic and not tailored for your use.

Public AI assistants are great for inspiration, but nothing beats an assistant that solves a repeated problem for you and your team, in your voice, with your context and constraints baked in. Instead of reinventing the wheel by writing new prompts each time, or repeatedly copy-pasting your structured prompts every time, or spending cycles trying to make a public AI Assistant work the way you need it to, your own AI Assistant allows you and others to easily get better, repeatable, consistent results faster.

Benefits Of Reusing Prompts, Even Your Own

Some of the benefits of building your own AI Assistant over writing or reusing your prompts include:

  • Focused on a real repeating problem
    A good AI Assistant isn’t a general-purpose “do everything” bot that you need to keep tweaking. It focuses on a single, recurring problem that takes a long time to complete manually and often results in varying quality depending on who’s doing it (e.g., analyzing customer feedback).
  • Customized for your context
    Most large language models (LLMs, such as ChatGPT) are designed to be everything to everyone. An AI Assistant changes that by allowing you to customize it to automatically work like you want it to, instead of a generic AI.
  • Consistency at scale
    You can use the WIRE+FRAME prompt framework to create structured, reusable prompts. An AI Assistant is the next logical step: instead of copy-pasting that fine-tuned prompt and sharing contextual information and examples each time, you can bake it into the assistant itself, allowing you and others to achieve the same consistent results every time.
  • Codifying expertise
    Every time you turn a great prompt into an AI Assistant, you’re essentially bottling your expertise. Your assistant becomes a living design guide that outlasts projects (and even job changes).
  • Faster ramp-up for teammates
    Instead of new designers starting from a blank slate, they can use pre-tuned assistants. Think of it as knowledge transfer without the long onboarding lecture.

Reasons For Your Own AI Assistant Instead Of Public AI Assistants

Public AI assistants are like stock templates. While they serve a specific purpose compared to the generic AI platform, and are useful starting points, if you want something tailored to your needs and team, you should really build your own.

A few reasons for building your AI Assistant instead of using a public assistant someone else created include:

  • Fit: Public assistants are built for the masses. Your work has quirks, tone, and processes they’ll never quite match.
  • Trust & Security: You don’t control what instructions or hidden guardrails someone else baked in. With your own assistant, you know exactly what it will (and won’t) do.
  • Evolution: An AI Assistant you design and build can grow with your team. You can update files, tweak prompts, and maintain a changelog — things a public bot won’t do for you.

Your own AI Assistants allow you to take your successful ways of interacting with AI and make them repeatable and shareable. And while they are tailored to your and your team’s way of working, remember that they are still based on generic AI models, so the usual AI disclaimers apply:

Don’t share anything you wouldn’t want screenshotted in the next company all-hands. Keep it safe, private, and user-respecting. A shared AI Assistant can potentially reveal its inner workings or data.

Note: We will be building an AI assistant using ChatGPT, aka a CustomGPT, but you can try the same process with any decent LLM sidekick. As of publication, a paid account is required to create CustomGPTs, but once created, they can be shared and used by anyone, regardless of whether they have a paid or free account. Similar limitations apply to the other platforms. Just remember that outputs can vary depending on the LLM model used, the model’s training, mood, and flair for creative hallucinations.

When Not to Build An AI Assistant (Yet)

An AI Assistant is great when the same audience has the same problem often. When the fit isn’t there, the risk is high; you should skip building an AI Assistant for now, as explained below:

  • One-off or rare tasks
    If it won’t be reused at least monthly, I’d recommend keeping it as a saved WIRE+FRAME prompt. For example, something for a one-time audit or creating placeholder content for a specific screen.
  • Sensitive or regulated data
    If you need to build in personally identifiable information (PII), health, finance, legal, or trade secrets, err on the side of not building an AI Assistant. Even if the AI platform promises not to use your data, I’d strongly suggest using redaction or an approved enterprise tool with necessary safeguards in place (company-approved enterprise versions of Microsoft Copilot, for instance).
  • Heavy orchestration or logic
    Multi-step workflows, API calls, database writes, and approvals go beyond the scope of an AI Assistant into Agentic territory (as of now). I’d recommend not trying to build an AI Assistant for these cases.
  • Real-time information
    AI Assistants may not be able to access real-time data like prices, live metrics, or breaking news. If you need these, you can upload near-real-time data (as we do below) or connect with data sources that you or your company controls, rather than relying on the open web.
  • High-stakes outputs
    For cases related to compliance, legal, medical, or any other area requiring auditability, consider implementing process guardrails and training to keep humans in the loop for proper review and accountability.
  • No measurable win
    If you can’t name a success metric (such as time saved, first-draft quality, or fewer re-dos), I’d recommend keeping it as a saved WIRE+FRAME prompt.

Just because these are signs that you should not build your AI Assistant now doesn’t mean you shouldn’t ever. Revisit this decision when you notice that you’re starting to use the same prompt weekly, multiple teammates ask for it, or the manual time spent copy-pasting and refining starts exceeding ~15 minutes. Those are some signs that an AI Assistant will pay back quickly.

In a nutshell, build an AI Assistant when you can name the problem, the audience, frequency, and the win. The rest of this article shows how to turn your successful WIRE+FRAME prompt into a CustomGPT that you and your team can actually use. No advanced knowledge, coding skills, or hacks needed.

As Always, Start with the User

This should go without saying to UX professionals, but it’s worth a reminder: if you’re building an AI assistant for anyone besides yourself, start with the user and their needs before you build anything.

  • Who will use this assistant?
  • What’s the specific pain or task they struggle with today?
  • What language, tone, and examples will feel natural to them?

Building without doing this first is a sure way to end up with clever assistants nobody actually wants to use. Think of it like any other product: before you build features, you understand your audience. The same rule applies here, even more so, because AI assistants are only as helpful as they are useful and usable.

From Prompt To Assistant

You’ve already done the heavy lifting with WIRE+FRAME. Now you’re just turning that refined and reliable prompt into a CustomGPT you can reuse and share. You can use MATCH as a checklist to go from a great prompt to a useful AI assistant.

  • M: Map your prompt
    Port your successful WIRE+FRAME prompt into the AI assistant.
  • A: Add knowledge and training
    Ground the assistant in your world. Upload knowledge files, examples, or guides that make it uniquely yours.
  • T: Tailor for audience
    Make it feel natural to the people who will use it. Give it the right capabilities, but also adjust its settings, tone, examples, and conversation starters so they land with your audience.
  • C: Check, test, and refine
    Test the preview with different inputs and refine until you get the results you expect.
  • H: Hand off and maintain
    Set sharing options and permissions, share the link, and maintain it.

A few weeks ago, we invited readers to share their ideas for AI assistants they wished they had. The top contenders were:

  • Prototype Prodigy: Transform rough ideas into prototypes and export them into Figma to refine.
  • Critique Coach: Review wireframes or mockups and point out accessibility and usability gaps.

But the favorite was an AI assistant to turn tons of customer feedback into actionable insights. Readers replied with variations of: “An assistant that can quickly sort through piles of survey responses, app reviews, or open-ended comments and turn them into themes we can act on.”

And that’s the one we will build in this article — say hello to Insight Interpreter.

Walkthrough: Insight Interpreter

Having lots of customer feedback is a nice problem to have. Companies actively seek out customer feedback through surveys and studies (solicited), but also receive feedback that may not have been asked for through social media or public reviews (unsolicited). This is a goldmine of information, but it can be messy and overwhelming trying to make sense of it all, and it’s nobody’s idea of fun. Here’s where an AI assistant like the Insight Interpreter can help. We’ll turn the example prompt created using the WIRE+FRAME framework in Prompting Is A Design Act into a CustomGPT.

When you start building a CustomGPT by visiting https://chat.openai.com/gpts/editor, you’ll see two paths:

  • Conversational interface
    Vibe-chat your way — it’s easy and quick, but similar to unstructured prompts, your inputs get baked in a little messily, so you may end up with vague or inconsistent instructions.
  • Configure interface
    The structured form where you type instructions, upload files, and toggle capabilities. Less instant gratification, less winging it, but more control. This is the option you’ll want for assistants you plan to share or depend on regularly.

The good news is that MATCH works for both. In conversational mode, you can use it as a mental checklist, and we’ll walk through using it in configure mode as a more formal checklist in this article.

M: Map Your Prompt

Paste your full WIRE+FRAME prompt into the Instructions section exactly as written. As a refresher, I’ve included the mapping and snippets of the detailed prompt from before:

  • Who & What: The AI persona and the core deliverable (“…senior UX researcher and customer insights analyst… specialize in synthesizing qualitative data from diverse sources…”).
  • Input Context: Background or data scope to frame the task (“…analyzing customer feedback uploaded from sources such as…”).
  • Rules & Constraints: Boundaries (“…do not fabricate pain points, representative quotes, journey stages, or patterns…”).
  • Expected Output: Format and fields of the deliverable (“…a structured list of themes. For each theme, include…”).
  • Flow: Explicit, ordered sub-tasks (“Recommended flow of tasks: Step 1…”).
  • Reference Voice: Tone, mood, or reference (“…concise, pattern-driven, and objective…”).
  • Ask for Clarification: Ask questions if unclear (“…if data is missing or unclear, ask before continuing…”).
  • Memory: Memory to recall earlier definitions (“Unless explicitly instructed otherwise, keep using this process…”).
  • Evaluate & Iterate: Have the AI self-critique outputs (“…critically evaluate…suggest improvements…”).

If you’re building Copilot Agents or Gemini Gems instead of CustomGPTs, you still paste your WIRE+FRAME prompt into their respective Instructions sections.

A: Add Knowledge And Training

In the knowledge section, upload up to 20 files, clearly labeled, that will help the CustomGPT respond effectively. Keep files small and versioned: reviews_Q2_2025.csv beats latestfile_final2.csv. For this prompt for analyzing customer feedback, generating themes organized by customer journey, rating them by severity and effort, files could include:

  • Taxonomy of themes;
  • Instructions on parsing uploaded data;
  • Examples of real UX research reports using this structure;
  • Scoring guidelines for severity and effort, e.g., what makes something a 3 vs. a 5 in severity;
  • Customer journey map stages;
  • Customer feedback file templates (not actual data).

An example of a file to help it parse uploaded data is shown below:

T: Tailor For Audience

  • Audience tailoring
    If you are building this for others, your prompt should have addressed tone in the “Reference Voice” section. If you didn’t, do it now, so the CustomGPT can be tailored to the tone and expertise level of users who will use it. In addition, use the Conversation starters section to add a few examples or common prompts for users to start using the CustomGPT, again, worded for your users. For instance, we could use “Analyze feedback from the attached file” for our Insights Interpreter to make it more self-explanatory for anyone, instead of “Analyze data,” which may be good enough if you were using it alone. For my Designerly Curiosity GPT, assuming that users may not know what it could do, I use “What are the types of curiosity?” and “Give me a micro-practice to spark curiosity”.
  • Functional tailoring
    Fill in the CustomGPT name, icon, description, and capabilities.
    • Name: Pick one that will make it clear what the CustomGPT does. Let’s use “Insights Interpreter — Customer Feedback Analyzer”. If needed, you can also add a version number. This name will show up in the sidebar when people use it or pin it, so make the first part memorable and easily identifiable.
    • Icon: Upload an image or generate one. Keep it simple so it can be easily recognized in a smaller dimension when people pin it in their sidebar.
    • Description: A brief, yet clear description of what the CustomGPT can do. If you plan to list it in the GPT store, this will help people decide if they should pick yours over something similar.
    • Recommended Model: If your CustomGPT needs the capabilities of a particular model (e.g., needs GPT-5 thinking for detailed analysis), select it. In most cases, you can safely leave it up to the user or select the most common model.
    • Capabilities: Turn off anything you won’t need. We’ll turn off “Web Search” to allow the CustomGPT to focus only on uploaded data, without expanding the search online, and we will turn on “Code Interpreter & Data Analysis” to allow it to understand and process uploaded files. “Canvas” allows users to work on a shared canvas with the GPT to edit writing tasks, and “Image generation” is only needed if the CustomGPT has to create images.
    • Actions: Makes third-party APIs available to the CustomGPT; this is advanced functionality we don’t need here.
    • Additional Settings: This option is sneakily hidden and opted in by default; I opt out of having my data used to train OpenAI’s models.

C: Check, Test & Refine

Do one last visual check to make sure you’ve filled in all applicable fields and the basics are in place: is the concept sharp and clear (not a do-everything bot)? Are the roles, goals, and tone clear? Do we have the right assets (docs, guides) to support it? Is the flow simple enough that others can get started easily? Once those boxes are checked, move into testing.

Use the Preview panel to verify that your CustomGPT performs as well, or better, than your original WIRE+FRAME prompt, and that it works for your intended audience. Try a few representative inputs and compare the results to what you expected. If something worked before but doesn’t now, check whether new instructions or knowledge files are overriding it.

When things don’t look right, here are quick debugging fixes:

  • Generic answers?
    Tighten Input Context or update the knowledge files.
  • Hallucinations?
    Revisit your Rules section. Turn off web browsing if you don’t need external data.
  • Wrong tone?
    Strengthen Reference Voice or swap in clearer examples.
  • Inconsistent?
    Test across models in preview and set the most reliable one as “Recommended.”

H: Hand Off And Maintain

When your CustomGPT is ready, you can publish it via the “Create” option. Select the appropriate access option:

  • Only me: Private use. Perfect if you’re still experimenting or keeping it personal.
  • Anyone with the link: Exactly what it means. Shareable but not searchable. Great for pilots with a team or small group. Just remember that links can be reshared, so treat them as semi-public.
  • GPT Store: Fully public. Your assistant is listed and findable by anyone browsing the store. (This is the option we’ll use.)
  • Business workspace (if you’re on GPT Business): Share with others within your business account only — the easiest way to keep it in-house and controlled.

But handoff doesn’t end with hitting publish; you should maintain the assistant to keep it relevant and useful:

  • Collect feedback: Ask teammates what worked, what didn’t, and what they had to fix manually.
  • Iterate: Apply changes directly or duplicate the GPT if you want multiple versions in play. You can find all your CustomGPTs at: https://chatgpt.com/gpts/mine
  • Track changes: Keep a simple changelog (date, version, updates) for traceability.
  • Refresh knowledge: Update knowledge files and examples on a regular cadence so answers don’t go stale.

And that’s it! Our Insights Interpreter is now live!

Since we used the WIRE+FRAME prompt from the previous article to create the Insights Interpreter CustomGPT, I compared the outputs:

The results are similar, with slight differences, and that’s expected. If you compare the results carefully, the themes, issues, journey stages, frequency, severity, and estimated effort match, with some differences in the wording of the theme, issue summary, and problem statement. The opportunities and quotes have more visible differences. Most of this comes from the CustomGPT’s knowledge and training files (instructions, examples, and guardrails), which now act as always-on guidance.

Keep in mind that in reality, Generative AI is by nature generative, so outputs will vary. Even with the same data, you won’t get identical wording every time. In addition, underlying models and their capabilities rapidly change. If you want to keep things as consistent as possible, recommend a model (though people can change it), track versions of your data, and compare for structure, priorities, and evidence rather than exact wording.

While I’d love for you to use Insights Interpreter, I strongly recommend taking 15 minutes to follow the steps above and create your own. That way, it will be tailored to exactly what you or your team needs, including the tone, context, and output formats, and you’ll get the real AI Assistant you need!

Inspiration For Other AI Assistants

We just built the Insight Interpreter and mentioned two contenders: Critique Coach and Prototype Prodigy. Here are a few other realistic uses that can spark ideas for your own AI Assistant:

  • Workshop Wizard: Generates workshop agendas, produces icebreaker questions, and drafts follow-up surveys.
  • Research Roundup Buddy: Summarizes raw transcripts into key themes, then creates highlight reels (quotes + visuals) for team share-outs.
  • Persona Refresher: Updates stale personas with the latest customer feedback, then rewrites them in different tones (boardroom formal vs. design-team casual).
  • Content Checker: Proofs copy for tone, accessibility, and reading level before it ever hits your site.
  • Trend Tamer: Scans competitor reviews and identifies emerging patterns you can act on before they reach your roadmap.
  • Microcopy Provocateur: Tests alternate copy options by injecting different tones (sassy, calm, ironic, nurturing) and role-playing how users might react, especially useful for error states or calls to action.
  • Ethical UX Debater: Challenges your design decisions and deceptive designs by simulating the voice of an ethics board or concerned user.

The best AI Assistants come from carefully inspecting your workflow and looking for areas where AI can augment your work regularly and repetitively. Then follow the steps above to build a team of customized AI assistants.

Ask Me Anything About Assistants
  • What are some limitations of a CustomGPT?
    Right now, the best parallel for AI is a very smart intern with access to a lot of information. CustomGPTs still run on LLMs that are basically trained on a lot of information and programmed to predictively generate responses based on that data, including possible bias, misinformation, or incomplete information. Keeping that in mind, you can make that intern provide better and more relevant results by using your uploads as onboarding docs, your guardrails as a job description, and your updates as retraining.
  • Can I copy someone else’s public CustomGPT and tweak it?
    Not directly, but if you get inspired by another CustomGPT, you can look at how it’s framed and rebuild your own using WIRE+FRAME & MATCH. That way, you make it your own and have full control of the instructions, files, and updates. You can, however, do that with Google’s equivalent, Gemini Gems. Shared Gems behave similarly to shared Google Docs, so once shared, any Gem instructions and files that you have uploaded can be viewed by any user with access to the Gem. Any user with edit access to the Gem can also update and delete the Gem.
  • How private are my uploaded files?
    The files you upload are stored and used to answer prompts to your CustomGPT. If your CustomGPT is not private or you didn’t disable the hidden setting to allow CustomGPT conversations to improve the model, that data could be referenced. Don’t upload sensitive, confidential, or personal data you wouldn’t want circulating. Enterprise accounts do have some protections, so check with your company.
  • How many files can I upload, and does size matter?
    Limits vary by platform, but smaller, specific files usually perform better than giant docs. Think “chapter” instead of “entire book.” At the time of publishing, CustomGPTs allow up to 20 files, Copilot Agents up to 200 (if you need anywhere near that many, chances are your agent is not focused enough), and Gemini Gems up to 10.
  • What’s the difference between a CustomGPT and a Project?
    A CustomGPT is a focused assistant, like an intern trained to do one role well (like “Insight Interpreter”). A Project is more like a workspace where you can group multiple prompts, files, and conversations together for a broader effort. CustomGPTs are specialists. Projects are containers. If you want something reusable, shareable, and role-specific, go to CustomGPT. If you want to organize broader work with multiple tools and outputs, and shared knowledge, Projects are the better fit.
From Reading To Building

In this AI x Design series, we’ve gone from messy prompting (“A Week In The Life Of An AI-Augmented Designer”) to a structured prompt framework, WIRE+FRAME (“Prompting Is A Design Act”). And now, in this article, your very own reusable AI sidekick.

CustomGPTs don’t replace designers but augment them. The real magic isn’t in the tool itself, but in how you design and manage it. You can use public CustomGPTs for inspiration, but the ones that truly fit your workflow are the ones you design yourself. They extend your craft, codify your expertise, and give your team leverage that generic AI models can’t.

Build one this week. Even better, today. Train it, share it, stress-test it, and refine it into an AI assistant that can augment your team.

]]>
hello@smashingmagazine.com (Lyndon Cerejo)
<![CDATA[Intent Prototyping: The Allure And Danger Of Pure Vibe Coding In Enterprise UX (Part 1)]]> https://smashingmagazine.com/2025/09/intent-prototyping-pure-vibe-coding-enterprise-ux/ https://smashingmagazine.com/2025/09/intent-prototyping-pure-vibe-coding-enterprise-ux/ Wed, 24 Sep 2025 17:00:00 GMT There is a spectrum of opinions on how dramatically all creative professions will be changed by the coming wave of agentic AI, from the very skeptical to the wildly optimistic and even apocalyptic. I think that even if you are on the “skeptical” end of the spectrum, it makes sense to explore ways this new technology can help with your everyday work. As for my everyday work, I’ve been doing UX and product design for about 25 years now, and I’m always keen to learn new tricks and share them with colleagues. Right now, I’m interested in AI-assisted prototyping, and I’m here to share my thoughts on how it can change the process of designing digital products.

To set your expectations up front: this exploration focuses on a specific part of the product design lifecycle. Many people know about the Double Diamond framework, which shows the path from problem to solution. However, I think it’s the Triple Diamond model that makes an important point for our needs. It explicitly separates the solution space into two phases: Solution Discovery (ideating and validating the right concept) and Solution Delivery (engineering the validated concept into a final product). This article is focused squarely on that middle diamond: Solution Discovery.

How AI can help with the preceding (Problem Discovery) and the following (Solution Delivery) stages is out of the scope of this article. Problem Discovery is less about prototyping and more about research, and while I believe AI can revolutionize the research process as well, I’ll leave that to people more knowledgeable in the field. As for Solution Delivery, it is more about engineering optimization. There’s no doubt that software engineering in the AI era is undergoing dramatic changes, but I’m not an engineer — I’m a designer, so let me focus on my “sweet spot”.

And my “sweet spot” has a specific flavor: designing enterprise applications. In this world, the main challenge is taming complexity: dealing with complicated data models and guiding users through non-linear workflows. This background has had a big impact on my approach to design, putting a lot of emphasis on the underlying logic and structure. This article explores the potential of AI through this lens.

I’ll start by outlining the typical artifacts designers create during Solution Discovery. Then, I’ll examine the problems with how this part of the process often plays out in practice. Finally, we’ll explore whether AI-powered prototyping can offer a better approach, and if so, whether it aligns with what people call “vibe coding,” or calls for a more deliberate and disciplined way of working.

What We Create During Solution Discovery

The Solution Discovery phase begins with the key output from the preceding research: a well-defined problem and a core hypothesis for a solution. This is our starting point. The artifacts we create from here are all aimed at turning that initial hypothesis into a tangible, testable concept.

Traditionally, at this stage, designers can produce artifacts of different kinds, progressively increasing fidelity: from napkin sketches, boxes-and-arrows, and conceptual diagrams to hi-fi mockups, then to interactive prototypes, and in some cases even live prototypes. Artifacts of lower fidelity allow fast iteration and enable the exploration of many alternatives, while artifacts of higher fidelity help to understand, explain, and validate the concept in all its details.

It’s important to think holistically, considering different aspects of the solution. I would highlight three dimensions:

  1. Conceptual model: Objects, relations, attributes, actions;
  2. Visualization: Screens, from rough sketches to hi-fi mockups;
  3. Flow: From the very high-level user journeys to more detailed ones.

One can argue that those are layers rather than dimensions, and each of them builds on the previous ones (for example, according to Semantic IxD by Daniel Rosenberg), but I see them more as different facets of the same thing, so the design process through them is not necessarily linear: you may need to switch from one perspective to another many times.

This is how different types of design artifacts map to these dimensions:

As Solution Discovery progresses, designers move from the left part of this map to the right, from low-fidelity to high-fidelity, from ideating to validating, from diverging to converging.

Note that at the beginning of the process, different dimensions are supported by artifacts of different types (boxes-and-arrows, sketches, class diagrams, etc.), and only closer to the end can you build a live prototype that encompasses all three dimensions: conceptual model, visualization, and flow.

This progression shows a classic trade-off, like the difference between a pencil drawing and an oil painting. The drawing lets you explore ideas in the most flexible way, whereas the painting has a lot of detail and overall looks much more realistic, but is hard to adjust. Similarly, as we go towards artifacts that integrate all three dimensions at higher fidelity, our ability to iterate quickly and explore divergent ideas goes down. This inverse relationship has long been an accepted, almost unchallenged, limitation of the design process.

The Problem With The Mockup-Centric Approach

Faced with this difficult trade-off, often teams opt for the easiest way out. On the one hand, they need to show that they are making progress and create things that appear detailed. On the other hand, they rarely can afford to build interactive or live prototypes. This leads them to over-invest in one type of artifact that seems to offer the best of both worlds. As a result, the neatly organized “bento box” of design artifacts we saw previously gets shrunk down to just one compartment: creating static high-fidelity mockups.

This choice is understandable, as several forces push designers in this direction. Stakeholders are always eager to see nice pictures, while artifacts representing user flows and conceptual models receive much less attention and priority. They are too high-level and hardly usable for validation, and usually, not everyone can understand them.

On the other side of the fidelity spectrum, interactive prototypes require too much effort to create and maintain, and creating live prototypes in code used to require special skills (and again, effort). And even when teams make this investment, they do so at the end of Solution Discovery, during the convergence stage, when it is often too late to experiment with fundamentally different ideas. With so much effort already sunk, there is little appetite to go back to the drawing board.

It’s no surprise, then, that many teams default to the perceived safety of static mockups, seeing them as a middle ground between the roughness of the sketches and the overwhelming complexity and fragility that prototypes can have.

As a result, validation with users doesn’t provide enough confidence that the solution will actually solve the problem, and teams are forced to make a leap of faith to start building. To make matters worse, they do so without a clear understanding of the conceptual model, the user flows, and the interactions, because from the very beginning, designers’ attention has been heavily skewed toward visualization.

The result is often a design artifact that resembles the famous “horse drawing” meme: beautifully rendered in the parts everyone sees first (the mockups), but dangerously underdeveloped in its underlying structure (the conceptual model and flows).

While this is a familiar problem across the industry, its severity depends on the nature of the project. If your core challenge is to optimize a well-understood, linear flow (like many B2C products), a mockup-centric approach can be perfectly adequate. The risks are contained, and the “lopsided horse” problem is unlikely to be fatal.

However, it’s different for the systems I specialize in: complex applications defined by intricate data models and non-linear, interconnected user flows. Here, the biggest risks are not on the surface but in the underlying structure, and a lack of attention to the latter would be a recipe for disaster.

Transforming The Design Process

This situation makes me wonder:

How might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one?

If we were able to answer this question, we would:

  • Learn faster.
    By going straight from intent to a testable artifact, we cut the feedback loop from weeks to days.
  • Gain more confidence.
    Users interact with real logic, which gives us more proof that the idea works.
  • Enforce conceptual clarity.
    A live prototype cannot hide a flawed or ambiguous conceptual model.
  • Establish a clear and lasting source of truth.
    A live prototype, combined with a clearly documented design intent, provides the engineering team with an unambiguous specification.

Of course, the desire for such a process is not new. This vision of a truly prototype-driven workflow is especially compelling for enterprise applications, where the benefits of faster learning and forced conceptual clarity are the best defense against costly structural flaws. But this ideal remained out of reach because prototyping in code demanded too much effort and too many specialized skills. Now, the rise of powerful AI coding assistants changes that equation in a big way.

The Seductive Promise Of “Vibe Coding”

And the answer seems to be obvious: vibe coding!

“Vibe coding is an artificial intelligence-assisted software development style popularized by Andrej Karpathy in early 2025. It describes a fast, improvisational, collaborative approach to creating software where the developer and a large language model (LLM) tuned for coding is acting rather like pair programmers in a conversational loop.”

Wikipedia

The original tweet by Andrej Karpathy:

The allure of this approach is undeniable. If you are not a developer, you are bound to feel awe when you describe a solution in plain language, and moments later, you can interact with it. This seems to be the ultimate fulfillment of our goal: a direct, frictionless path from an idea to a live prototype. But is this method reliable enough to build our new design process around it?

The Trap: A Process Without A Blueprint

Vibe coding mixes up a description of the UI with a description of the system itself, resulting in a prototype based on changing assumptions rather than a clear, solid model.

The pitfall of vibe coding is that it encourages us to express our intent in the most ambiguous way possible: by having a conversation.

This is like hiring a builder and telling them what to do one sentence at a time, without ever showing them a blueprint. They might build a wall that looks great, but you can’t be sure it will bear any load.

I’ll give you one example of the problems you may face if you try to leap the chasm between an idea and a live prototype by relying on pure vibe coding in the spirit of Andrej Karpathy’s tweet. Imagine I want to prototype a solution to keep track of tests to validate product ideas. I open my vibe coding tool of choice (I intentionally don’t disclose its name, as I believe they are all awesome yet prone to similar pitfalls) and start with the following prompt:

I need an app to track tests. For every test, I need to fill out the following data:
- Hypothesis (we believe that...) 
- Experiment (to verify that, we will...)
- When (a single date, or a period) 
- Status (New/Planned/In Progress/Proven/Disproven)

And in a minute or so, I get a working prototype:

Inspired by success, I go further:

Please add the ability to specify a product idea for every test. Also, I want to filter tests by product ideas and see how many tests each product idea has in each status.

And the result is still pretty good:

But then I want to extend the functionality related to product ideas:

Okay, one more thing. For every product idea, I want to assess the impact score, the confidence score, and the ease score, and get the overall ICE score. Perhaps I need a separate page focused on the product idea, with all the relevant information and related tests.

And from this point on, the results are getting more and more confusing.

The flow of creating tests hasn’t changed much. I can still create a bunch of tests, and they seem to be organized by product ideas. But when I click “Product Ideas” in the top navigation, I see nothing:

I need to create my ideas from scratch, and they are not connected to the tests I created before:

Moreover, when I go back to “Tests”, I see that they are all gone. Clearly something went wrong, and my AI assistant confirms that:

No, this is not expected behavior — it’s a bug! The issue is that tests are being stored in two separate places (local state in the Index page and App state), so tests created on the main page don’t sync with the product ideas page.

Sure, eventually it fixed that bug, but note that we encountered this just on the third step, when we asked to slightly extend the functionality of a very simple app. The more layers of complexity we add, the more roadblocks of this sort we are bound to face.

Also note that this specific problem, a not fully thought-out relationship between two entities (product ideas and tests), is not merely a technical issue, and therefore it didn’t go away once the technical bug was fixed. The underlying conceptual model is still broken, and it manifests in the UI as well.

For example, you can still create “orphan” tests that are not connected to any item from the “Product Ideas” page. As a result, you may end up with different numbers of ideas and tests on different pages of the app:

Let’s diagnose what really happened here. The AI’s response that this is a “bug” is only half the story. The true root cause is a conceptual model failure. My prompts never explicitly defined the relationship between product ideas and tests. The AI was forced to guess, which led to the broken experience. For a simple demo, this might be a fixable annoyance. But for a data-heavy enterprise application, this kind of structural ambiguity is fatal. It demonstrates the fundamental weakness of building without a blueprint, which is precisely what vibe coding encourages.
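For illustration only, here is a hypothetical rewording of that third prompt that states the relationship up front. The phrasing is mine, and a sentence or two like this would not cure every structural problem, but it leaves far less room for the AI to guess:

Okay, one more thing. Every test must belong to exactly one product idea, and a product idea can have many tests.
Tests should only be created in the context of a product idea, so "orphan" tests are impossible.
For every product idea, I want to assess the impact score, the confidence score, and the ease score, and get the overall ICE score. I also need a separate page for each product idea showing all the relevant information and related tests.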

Don’t take this as a criticism of vibe coding tools. They are creating real magic. However, the fundamental truth about “garbage in, garbage out” is still valid. If you don’t express your intent clearly enough, chances are the result won’t fulfill your expectations.

Another problem worth mentioning is that even if you wrestle it into a state that works, the artifact is a black box that can hardly serve as reliable specifications for the final product. The initial meaning is lost in the conversation, and all that’s left is the end result. This makes the development team “code archaeologists,” who have to figure out what the designer was thinking by reverse-engineering the AI’s code, which is frequently very complicated. Any speed gained at the start is lost right away because of this friction and uncertainty.

From Fast Magic To A Solid Foundation

Pure vibe coding, for all its allure, encourages building without a blueprint. As we’ve seen, this results in structural ambiguity, which is not acceptable when designing complex applications. We are left with a seemingly quick but fragile process that creates a black box that is difficult to iterate on and even more so to hand off.

This leads us back to our main question: how might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one, without getting caught in the ambiguity trap? The answer lies in a more methodical, disciplined, and therefore trustworthy process.

In Part 2 of this series, “A Practical Guide to Building with Clarity”, I will outline the entire workflow for Intent Prototyping. This method places the explicit intent of the designer at the forefront of the process while embracing the potential of AI-assisted coding.

Thank you for reading, and I look forward to seeing you in Part 2.

]]>
hello@smashingmagazine.com (Yegor Gilyov)
<![CDATA[Ambient Animations In Web Design: Principles And Implementation (Part 1)]]> https://smashingmagazine.com/2025/09/ambient-animations-web-design-principles-implementation/ https://smashingmagazine.com/2025/09/ambient-animations-web-design-principles-implementation/ Mon, 22 Sep 2025 13:00:00 GMT Unlike timeline-based animations, which tell stories across a sequence of events, or interaction animations that are triggered when someone touches something, ambient animations are the kind of passive movements you might not notice at first. But, they make a design look alive in subtle ways.

In an ambient animation, elements might subtly transition between colours, move slowly, or gradually shift position. Elements can appear and disappear, change size, or they could rotate slowly.

Ambient animations aren’t intrusive; they don’t demand attention, aren’t distracting, and don’t interfere with what someone’s trying to achieve when they use a product or website. They can be playful, too, making someone smile when they catch sight of them. That way, ambient animations add depth to a brand’s personality.

To illustrate the concept of ambient animations, I’ve recreated the cover of a Quick Draw McGraw comic book (PDF) as a CSS/SVG animation. The comic was published by Charlton Comics in 1971, and, being printed, these characters didn’t move, making them ideal candidates to transform into ambient animations.

FYI: Original cover artist Ray Dirgo was best known for his work drawing Hanna-Barbera characters for Charlton Comics during the 1970s. Ray passed away in 2000 at the age of 92. He outlived Charlton Comics, which went out of business in 1986, and DC Comics acquired its characters.

Tip: You can view the complete ambient animation code on CodePen.

Choosing Elements To Animate

Not everything on a page or in a graphic needs to move, and part of designing an ambient animation is knowing when to stop. The trick is to pick elements that lend themselves naturally to subtle movement, rather than forcing motion into places where it doesn’t belong.

Natural Motion Cues

When I’m deciding what to animate, I look for natural motion cues and think about when something would move naturally in the real world. I ask myself: “Does this thing have weight?”, “Is it flexible?”, and “Would it move in real life?” If the answer’s “yes,” it’ll probably feel right if it moves. There are several motion cues in Ray Dirgo’s cover artwork.

For example, the peace pipe Quick Draw’s puffing on has two feathers hanging from it. They swing slightly left and right by three degrees as the pipe moves, just like real feathers would.

#quick-draw-pipe {
  animation: quick-draw-pipe-rotate 6s ease-in-out infinite alternate;
}

@keyframes quick-draw-pipe-rotate {
  0% { transform: rotate(3deg); }
  100% { transform: rotate(-3deg); }
}

#quick-draw-feather-1 {
  animation: quick-draw-feather-1-rotate 3s ease-in-out infinite alternate;
}

#quick-draw-feather-2 {
  animation: quick-draw-feather-2-rotate 3s ease-in-out infinite alternate;
}

@keyframes quick-draw-feather-1-rotate {
  0% { transform: rotate(3deg); }
  100% { transform: rotate(-3deg); }
}

@keyframes quick-draw-feather-2-rotate {
  0% { transform: rotate(-3deg); }
  100% { transform: rotate(3deg); }
}

Atmosphere, Not Action

I often choose elements or decorative details that add to the vibe but don’t fight for attention.

Ambient animations aren’t about signalling to someone where they should look; they’re about creating a mood.

Here, the chief slowly and subtly rises and falls as he puffs on his pipe.

#chief {
  animation: chief-rise-fall 3s ease-in-out infinite alternate;
}

@keyframes chief-rise-fall {
  0% { transform: translateY(0); }
  100% { transform: translateY(-20px); }
}

For added effect, the feather on his head also moves in time with his rise and fall:

#chief-feather-1 {
  animation: chief-feather-1-rotate 3s ease-in-out infinite alternate;
}

#chief-feather-2 {
  animation: chief-feather-2-rotate 3s ease-in-out infinite alternate;
}

@keyframes chief-feather-1-rotate {
  0% { transform: rotate(0deg); }
  100% { transform: rotate(-9deg); }
}

@keyframes chief-feather-2-rotate {
  0% { transform: rotate(0deg); }
  100% { transform: rotate(9deg); }
}

Playfulness And Fun

One of the things I love most about ambient animations is how they bring fun into a design. They’re an opportunity to demonstrate personality through playful details that make people smile when they notice them.

Take a closer look at the chief, and you might spot his eyebrows raising and his eyes crossing as he puffs hard on his pipe. Quick Draw’s eyebrows also bounce at what look like random intervals.

#quick-draw-eyebrow {
  animation: quick-draw-eyebrow-raise 5s ease-in-out infinite;
}

@keyframes quick-draw-eyebrow-raise {
  0%, 20%, 60%, 100% { transform: translateY(0); }
  10%, 50%, 80% { transform: translateY(-10px); }
}

Keep Hierarchy In Mind

Motion draws the eye, and even subtle movements have a visual weight. So, I reserve the most obvious animations for elements that I need to create the biggest impact.

Smoking his pipe clearly has a big effect on Quick Draw McGraw, so to demonstrate this, I wrapped his elements — including his pipe and its feathers — within a new SVG group, and then I made that wobble.

#quick-draw-group {
  animation: quick-draw-group-wobble 6s ease-in-out infinite;
}

@keyframes quick-draw-group-wobble {
  0% { transform: rotate(0deg); }
  15% { transform: rotate(2deg); }
  30% { transform: rotate(-2deg); }
  45% { transform: rotate(1deg); }
  60% { transform: rotate(-1deg); }
  75% { transform: rotate(0.5deg); }
  100% { transform: rotate(0deg); }
}

Then, to emphasise this motion, I mirrored those values to wobble his shadow:

#quick-draw-shadow {
  animation: quick-draw-shadow-wobble 6s ease-in-out infinite;
}

@keyframes quick-draw-shadow-wobble {
  0% { transform: rotate(0deg); }
  15% { transform: rotate(-2deg); }
  30% { transform: rotate(2deg); }
  45% { transform: rotate(-1deg); }
  60% { transform: rotate(1deg); }
  75% { transform: rotate(-0.5deg); }
  100% { transform: rotate(0deg); }
}

Apply Restraint

Just because something can be animated doesn’t mean it should be. When creating an ambient animation, I study the image and note the elements where subtle motion might add life. I keep in mind the questions: “What’s the story I’m telling? Where does movement help, and when might it become distracting?”

Remember, restraint isn’t just about doing less; it’s about doing the right things less often.

Layering SVGs For Export

In “Smashing Animations Part 4: Optimising SVGs,” I wrote about the process I rely on to “prepare, optimise, and structure SVGs for animation.” When elements are crammed into a single SVG file, they can be a nightmare to navigate. Locating a specific path or group can feel like searching for a needle in a haystack.

That’s why I develop my SVGs in layers, exporting and optimising one set of elements at a time — always in the order they’ll appear in the final file. This lets me build the master SVG gradually by pasting in each cleaned-up section.

I start by exporting background elements, optimising them, adding class and ID attributes, and pasting their code into my SVG file.

Then, I export elements that often stay static or move as groups, like the chief and Quick Draw McGraw.

Before finally exporting, naming, and adding details, like Quick Draw’s pipe, eyes, and his stoned sparkles.

Since I export each layer from the same-sized artboard, I don’t need to worry about alignment or positioning issues as they all slot into place automatically.

Implementing Ambient Animations

You don’t need an animation framework or library to add ambient animations to a project. Most of the time, all you’ll need is a well-prepared SVG and some thoughtful CSS.

But, let’s start with the SVG. The key is to group elements logically and give them meaningful class or ID attributes, which act as animation hooks in the CSS. For this animation, I gave every moving part its own identifier like #quick-draw-tail or #chief-smoke-2. That way, I could target exactly what I needed without digging through the DOM like a raccoon in a trash can.

Once the SVG is set up, CSS does most of the work. I can use @keyframes for more expressive movement, or animation-delay to simulate randomness and stagger timings. The trick is to keep everything subtle and remember I’m not animating for attention, I’m animating for atmosphere.
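For example, a handful of sparkles can share one set of keyframes, with animation-delay offsetting each copy so they never twinkle in unison. The selectors and timings below are illustrative rather than taken from my final file:

.sparkle {
  animation: sparkle-twinkle 4s ease-in-out infinite;
}

/* Offsetting each copy breaks the synchronisation and feels more random */
#sparkle-2 { animation-delay: 0.7s; }
#sparkle-3 { animation-delay: 1.6s; }

@keyframes sparkle-twinkle {
  0%, 100% { opacity: 0.2; }
  50% { opacity: 1; }
}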

Remember that most ambient animations loop continuously, so they should be lightweight and performance-friendly. And of course, it’s good practice to respect users who’ve asked for less motion. You can wrap your animations in an @media prefers-reduced-motion query so they only run when they’re welcome.

@media (prefers-reduced-motion: no-preference) {
  #quick-draw-shadow {
    animation: quick-draw-shadow-wobble 6s ease-in-out infinite;
  }
}

It’s a small touch that’s easy to implement, and it makes your designs more inclusive.

Ambient Animation Design Principles

If you want your animations to feel ambient, more like atmosphere than action, it helps to follow a few principles. These aren’t hard and fast rules, but rather things I’ve learned while animating smoke, sparkles, eyeballs, and eyebrows.

Keep Animations Slow And Smooth

Ambient animations should feel relaxed, so use longer durations and choose easing curves that feel organic. I often use ease-in-out, but cubic Bézier curves can also be helpful when you want a more relaxed feel and the kind of movements you might find in nature.
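For example, swapping the easing keyword for a custom curve on one of the feathers might look like this. The cubic-bezier() values are only a starting point to adjust by eye:

#quick-draw-feather-1 {
  /* A gentler curve than ease-in-out, closer to how a feather settles */
  animation: quick-draw-feather-1-rotate 3s cubic-bezier(0.37, 0, 0.63, 1) infinite alternate;
}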

Loop Seamlessly And Avoid Abrupt Changes

Hard resets or sudden jumps can ruin the mood, so if an animation loops, ensure it cycles smoothly. You can do this by matching the start and end keyframes, or by setting the animation-direction value to alternate so the animation plays forward, then back.
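Both techniques look roughly like this; the smoke selector and keyframe names here are illustrative:

/* Option 1: the first and last keyframes match, so the loop joins without a seam */
@keyframes smoke-drift {
  0%, 100% { transform: translateY(0); }
  50% { transform: translateY(-12px); }
}

/* Option 2: animate one way and let alternate play it back in reverse */
#chief-smoke-1 {
  animation: smoke-rise 4s ease-in-out infinite alternate;
}

@keyframes smoke-rise {
  from { transform: translateY(0); }
  to { transform: translateY(-12px); }
}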

Use Layering To Build Complexity

A single animation might be boring. Five subtle animations, each on separate layers, can feel rich and alive. Think of it like building a sound mix — you want variation in rhythm, tone, and timing. In my animation, sparkles twinkle at varying intervals, smoke curls upward, feathers sway, and eyes boggle. Nothing dominates, and each motion plays its small part in the scene.

Avoid Distractions

The point of an ambient animation is that it doesn’t dominate. It’s a background element and not a call to action. If someone’s eyes are drawn to a raised eyebrow, it’s probably too much, so dial back the animation until it feels like something you’d only catch if you’re really looking.

Consider Accessibility And Performance

Check prefers-reduced-motion, and don’t assume everyone’s device can handle complex animations. SVG and CSS are light, but blur filters, drop shadows, and complex CSS animations can still tax lower-powered devices. When an animation is purely decorative, consider adding aria-hidden="true" to keep it from cluttering up the accessibility tree.
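When performance is a concern, one safe pattern is to animate only transform and opacity, which browsers can usually composite cheaply, rather than animating expensive properties like filter: blur(). A minimal sketch, wrapped in the reduced-motion query and using a hypothetical haze element:

@media (prefers-reduced-motion: no-preference) {
  #background-haze {
    /* Compositor-friendly: only transform and opacity change, no filters */
    animation: haze-fade 10s ease-in-out infinite alternate;
  }
}

@keyframes haze-fade {
  from { opacity: 0.3; transform: translateY(0); }
  to { opacity: 0.6; transform: translateY(-8px); }
}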

Quick On The Draw

Ambient animation is like seasoning on a great dish. It’s the pinch of salt you barely notice, but you’d miss when it’s gone. It doesn’t shout, it whispers. It doesn’t lead, it lingers. It’s floating smoke, swaying feathers, and sparkles you catch in the corner of your eye. And when it’s done well, ambient animation adds personality to a design without asking for applause.

Now, I realise that not everyone needs to animate cartoon characters. So, in part two, I’ll share how I created animations for several recent client projects. Until next time, if you’re crafting an illustration or working with SVG, ask yourself: What would move if this were real? Then animate just that. Make it slow and soft. Keep it ambient.

You can view the complete ambient animation code on CodePen.

]]>
hello@smashingmagazine.com (Andy Clarke)
<![CDATA[The Psychology Of Trust In AI: A Guide To Measuring And Designing For User Confidence]]> https://smashingmagazine.com/2025/09/psychology-trust-ai-guide-measuring-designing-user-confidence/ https://smashingmagazine.com/2025/09/psychology-trust-ai-guide-measuring-designing-user-confidence/ Fri, 19 Sep 2025 10:00:00 GMT Misuse and misplaced trust of AI is becoming an unfortunate common event. For example, lawyers trying to leverage the power of generative AI for research submit court filings citing multiple compelling legal precedents. The problem? The AI had confidently, eloquently, and completely fabricated the cases cited. The resulting sanctions and public embarrassment can become a viral cautionary tale, shared across social media as a stark example of AI’s fallibility.

This goes beyond a technical glitch; it’s a catastrophic failure of trust in AI tools in an industry where accuracy and trust are critical. The trust issue here is twofold — the law firms are submitting briefs in which they have blindly over-trusted the AI tool to return accurate information. The subsequent fallout can lead to a strong distrust in AI tools, to the point where platforms featuring AI might not be considered for use until trust is reestablished.

Issues with trusting AI aren’t limited to the legal field. We are seeing the impact of fictional AI-generated information in critical fields such as healthcare and education. On a more personal scale, many of us have had the experience of asking Siri or Alexa to perform a task, only to have it done incorrectly or not at all, for no apparent reason. I’m guilty of sending more than one out-of-context hands-free text to an unsuspecting contact after Siri mistakenly pulls up a completely different name than the one I’d requested.

With digital products moving to incorporate generative and agentic AI at an increasingly frequent rate, trust has become the invisible user interface. When it works, our interactions are seamless and powerful. When it breaks, the entire experience collapses, with potentially devastating consequences. As UX professionals, we’re on the front lines of a new twist on a common challenge. How do we build products that users can rely on? And how do we even begin to measure something as ephemeral as trust in AI?

Trust isn’t a mystical quality. It is a psychological construct built on predictable factors. I won’t dive deep into academic literature on trust in this article. However, it is important to understand that trust is a concept that can be understood, measured, and designed for. This article will provide a practical guide for UX researchers and designers. We will briefly explore the psychological anatomy of trust, offer concrete methods for measuring it, and provide actionable strategies for designing more trustworthy and ethical AI systems.

The Anatomy of Trust: A Psychological Framework for AI

To build trust, we must first understand its components. Think of trust like a four-legged stool. If any one leg is weak, the whole thing becomes unstable. Based on classic psychological models, we can adapt these “legs” for the AI context.

1. Ability (or Competence)

This is the most straightforward pillar: Does the AI have the skills to perform its function accurately and effectively? If a weather app is consistently wrong, you stop trusting it. If an AI legal assistant creates fictitious cases, it has failed the basic test of ability. This is the functional, foundational layer of trust.

2. Benevolence

This moves from function to intent. Does the user believe the AI is acting in their best interest? A GPS that suggests a toll-free route even if it’s a few minutes longer might be perceived as benevolent. Conversely, an AI that aggressively pushes sponsored products feels self-serving, eroding this sense of benevolence. This is where user fears, such as concerns about job displacement, directly challenge trust—the user starts to believe the AI is not on their side.

3. Integrity

Does the AI operate on predictable and ethical principles? This is about transparency, fairness, and honesty. An AI that clearly states how it uses personal data demonstrates integrity. A system that quietly changes its terms of service or uses dark patterns to get users to agree to something violates integrity. So does an AI job recruiting tool whose algorithm carries subtle yet extremely harmful social biases.

4. Predictability & Reliability

Can the user form a stable and accurate mental model of how the AI will behave? Unpredictability, even if the outcomes are occasionally good, creates anxiety. A user needs to know, roughly, what to expect. An AI that gives a radically different answer to the same question asked twice is unpredictable and, therefore, hard to trust.

The Trust Spectrum: The Goal of a Well-Calibrated Relationship

Our goal as UX professionals shouldn’t be to maximize trust at all costs. An employee who blindly trusts every email they receive is a security risk. Likewise, a user who blindly trusts every AI output can be led into dangerous situations, such as the legal briefs referenced at the beginning of this article. The goal is well-calibrated trust.

Think of it as a spectrum where the upper-mid level is the ideal state for a truly trustworthy product to achieve:

  • Active Distrust
    The user believes the AI is incompetent or malicious. They will avoid it or actively work against it.
  • Suspicion & Scrutiny
    The user interacts cautiously, constantly verifying the AI’s outputs. This is a common and often healthy state for users of new AI.
  • Calibrated Trust (The Ideal State)
    This is the sweet spot. The user has an accurate understanding of the AI’s capabilities—its strengths and, crucially, its weaknesses. They know when to rely on it and when to be skeptical.
  • Over-trust & Automation Bias
    The user unquestioningly accepts the AI’s outputs. This is where users follow flawed AI navigation into a field or accept a fictional legal brief as fact.

Our job is to design experiences that guide users away from the dangerous poles of Active Distrust and Over-trust and toward that healthy, realistic middle ground of Calibrated Trust.

The Researcher’s Toolkit: How to Measure Trust In AI

Trust feels abstract, but it leaves measurable fingerprints. Academics in the social sciences have done much to define both what trust looks like and how it might be measured. As researchers, we can capture these signals through a mix of qualitative, quantitative, and behavioral methods.

Qualitative Probes: Listening For The Language Of Trust

During interviews and usability tests, go beyond “Was that easy to use?” and listen for the underlying psychology. Here are some questions you can start using tomorrow:

  • To measure Ability:
    “Tell me about a time this tool’s performance surprised you, either positively or negatively.”
  • To measure Benevolence:
    “Do you feel this system is on your side? What gives you that impression?”
  • To measure Integrity:
    “If this AI made a mistake, how would you expect it to handle it? What would be a fair response?”
  • To measure Predictability:
    “Before you clicked that button, what did you expect the AI to do? How closely did it match your expectation?”

Investigating Existential Fears (The Job Displacement Scenario)

One of the most potent challenges to an AI’s Benevolence is the fear of job displacement. When a participant expresses this, it is a critical research finding. It requires a specific, ethical probing technique.

Imagine a participant says, “Wow, it does that part of my job pretty well. I guess I should be worried.”

An untrained researcher might get defensive or dismiss the comment. An ethical, trained researcher validates and explores:

“Thank you for sharing that; it’s a vital perspective, and it’s exactly the kind of feedback we need to hear. Can you tell me more about what aspects of this tool make you feel that way? In an ideal world, how would a tool like this work with you to make your job better, not to replace it?”

This approach respects the participant, validates their concern, and reframes the feedback into an actionable insight about designing a collaborative, augmenting tool rather than a replacement. Similarly, your findings should reflect the concern users expressed about replacement. We shouldn’t pretend this fear doesn’t exist, nor should we pretend that every AI feature is being implemented with pure intention. Users know better than that, and we should be prepared to argue on their behalf for how the technology might best co-exist within their roles.

Quantitative Measures: Putting A Number On Confidence

You can quantify trust without needing a data science degree. After a user completes a task with an AI, supplement your standard usability questions with a few simple Likert-scale items:

  • “The AI’s suggestion was reliable.” (1-7, Strongly Disagree to Strongly Agree)
  • “I am confident in the AI’s output.” (1-7)
  • “I understood why the AI made that recommendation.” (1-7)
  • “The AI responded in a way that I expected.” (1-7)
  • “The AI provided consistent responses over time.” (1-7)

Over time, these metrics can track how trust is changing as your product evolves.

Note: If you want to go beyond these simple questions that I’ve made up, there are numerous scales (measurements) of trust in technology that exist in academic literature. It might be an interesting endeavor to measure some relevant psychographic and demographic characteristics of your users and see how that correlates with trust in AI/your product. Table 1 at the end of the article contains four examples of current scales you might consider using to measure trust. You can decide which is best for your application, or you might pull some of the items from any of the scales if you aren’t looking to publish your findings in an academic journal, yet want to use items that have been subjected to some level of empirical scrutiny.

Behavioral Metrics: Observing What Users Do, Not Just What They Say

People’s true feelings are often revealed in their actions. You can use behaviors that reflect the specific context of use for your product. Here are a few general metrics that might apply to most AI tools that give insight into users’ behavior and the trust they place in your tool.

  • Correction Rate
    How often do users manually edit, undo, or ignore the AI’s output? A high correction rate is a powerful signal of low trust in its Ability.
  • Verification Behavior
    Do users switch to Google or open another application to double-check the AI’s work? This indicates they don’t trust it as a standalone source of truth. Early on, though, it can also be a positive sign that users are calibrating their trust in the system.
  • Disengagement
    Do users turn the AI feature off? Do they stop using it entirely after one bad experience? This is the ultimate behavioral vote of no confidence.

Designing For Trust: From Principles to Pixels

Once you’ve researched and measured trust, you can begin to design for it. This means translating psychological principles into tangible interface elements and user flows.

Designing for Competence and Predictability

  • Set Clear Expectations
    Use onboarding, tooltips, and empty states to honestly communicate what the AI is good at and where it might struggle. A simple “I’m still learning about [topic X], so please double-check my answers” can work wonders.
  • Show Confidence Levels
    Instead of just giving an answer, have the AI signal its own uncertainty. A weather app that says “70% chance of rain” is more trustworthy than one that just says “It will rain” and is wrong. An AI could say, “I’m 85% confident in this summary,” or highlight sentences it’s less sure about.

The Role of Explainability (XAI) and Transparency

Explainability isn’t about showing users the code. It’s about providing a useful, human-understandable rationale for a decision.

Instead of:
“Here is your recommendation.”

Try:
“Because you frequently read articles about UX research methods, I’m recommending this new piece on measuring trust in AI.”

This addition transforms AI from an opaque oracle to a transparent logical partner.

Many of the popular AI tools (e.g., ChatGPT and Gemini) show the thinking that went into the response they provide to a user. Figure 3 shows the steps Gemini went through to provide me with a non-response when I asked it to help me generate the masterpiece displayed above in Figure 2. While this might be more information than most users care to see, it provides a useful resource for a user to audit how the response came to be, and it has provided me with instructions on how I might proceed to address my task.

Figure 4 shows an example of a scorecard OpenAI makes available as an attempt to increase users’ trust. These scorecards are available for each ChatGPT model and go into the specifics of how the models perform as it relates to key areas such as hallucinations, health-based conversations, and much more. In reading the scorecards closely, you will see that no AI model is perfect in any area. The user must remain in a trust but verify mode to make the relationship between human reality and AI work in a way that avoids potential catastrophe. There should never be blind trust in an LLM.

Designing For Trust Repair (Graceful Error Handling) And Not Knowing an Answer

Your AI will make mistakes.

Trust is not determined by the absence of errors, but by how those errors are handled.

  • Acknowledge Errors Humbly.
    When the AI is wrong, it should be able to state that clearly. “My apologies, I misunderstood that request. Could you please rephrase it?” is far better than silence or a nonsensical answer.
  • Provide an Easy Path to Correction.
    Make feedback mechanisms (like thumbs up/down or a correction box) obvious. More importantly, show that the feedback is being used. A “Thank you, I’m learning from your correction” can help rebuild trust after a failure, as long as it is actually true.

Likewise, your AI can’t know everything. You should acknowledge this to your users.

UX practitioners should work with the product team to ensure that honesty about limitations is a core product principle.

This can include the following:

  • Establish User-Centric Metrics: Instead of only measuring engagement or task completion, UXers can work with product managers to define and track metrics like:
    • Hallucination Rate: The frequency with which the AI provides verifiably false information.
    • Successful Fallback Rate: How often the AI correctly identifies its inability to answer and provides a helpful, honest alternative.
  • Prioritize the “I Don’t Know” Experience: UXers should frame the “I don’t know” response not as an error state, but as a critical feature. They must lobby for the engineering and content resources needed to design a high-quality, helpful fallback experience.

UX Writing And Trust

All of these considerations highlight the critical role of UX writing in the development of trustworthy AI. UX writers are the architects of the AI’s voice and tone, ensuring that its communication is clear, honest, and empathetic. They translate complex technical processes into user-friendly explanations, craft helpful error messages, and design conversational flows that build confidence and rapport. Without thoughtful UX writing, even the most technologically advanced AI can feel opaque and untrustworthy.

The words and phrases an AI uses are its primary interface with users. UX writers are uniquely positioned to shape this interaction, ensuring that every tooltip, prompt, and response contributes to a positive and trust-building experience. Their expertise in human-centered language and design is indispensable for creating AI systems that not only perform well but also earn and maintain the trust of their users.

A few key areas for UX writers to focus on when writing for AI include:

  • Prioritize Transparency
    Clearly communicate the AI’s capabilities and limitations, especially when it’s still learning or if its responses are generated rather than factual. Use phrases that indicate the AI’s nature, such as “As an AI, I can...” or “This is a generated response.”
  • Design for Explainability
    When the AI provides a recommendation, decision, or complex output, strive to explain the reasoning behind it in an understandable way. This builds trust by showing the user how the AI arrived at its conclusion.
  • Emphasize User Control
    Empower users by providing clear ways to provide feedback, correct errors, or opt out of certain AI features. This reinforces the idea that the user is in control and the AI is a tool to assist them.

The Ethical Tightrope: The Researcher’s Responsibility

As the people responsible for understanding and advocating for users, we walk an ethical tightrope. Our work comes with profound responsibilities.

The Danger Of “Trustwashing”

We must draw a hard line between designing for calibrated trust and designing to manipulate users into trusting a flawed, biased, or harmful system. For example, if an AI system designed for loan approvals consistently discriminates against certain demographics but presents a user interface that implies fairness and transparency, this would be an instance of trustwashing.

Another example of trustwashing would be if an AI medical diagnostic tool occasionally misdiagnoses conditions, but the user interface makes it seem infallible. To avoid trustwashing, the system should clearly communicate the potential for error and the need for human oversight.

Our goal must be to create genuinely trustworthy systems, not just the perception of trust. Using these principles to lull users into a false sense of security is a betrayal of our professional ethics.

To avoid and prevent trustwashing, researchers and UX teams should:

  • Prioritize genuine transparency.
    Clearly communicate the limitations, biases, and uncertainties of AI systems. Don’t overstate capabilities or obscure potential risks.
  • Conduct rigorous, independent evaluations.
    Go beyond internal testing and seek external validation of system performance, fairness, and robustness.
  • Engage with diverse stakeholders.
    Involve users, ethics experts, and impacted communities in the design, development, and evaluation processes to identify potential harms and build genuine trust.
  • Be accountable for outcomes.
    Take responsibility for the societal impact of AI systems, even if unintended. Establish clear and accessible mechanisms for redress and continuous improvement, ensuring that individuals and communities affected by AI decisions have avenues for recourse and compensation.
  • Educate the public.
    Help users understand how AI works, its limitations, and what to look for when evaluating AI products.
  • Advocate for ethical guidelines and regulations.
    Support the development and implementation of industry standards and policies that promote responsible AI development and prevent deceptive practices.
  • Be wary of marketing hype.
    Critically assess claims made about AI systems, especially those that emphasize “trustworthiness” without clear evidence or detailed explanations.
  • Publish negative findings.
    Don’t shy away from reporting challenges, failures, or ethical dilemmas encountered during research. Transparency about limitations is crucial for building long-term trust.
  • Focus on user empowerment.
    Design systems that give users control, agency, and understanding rather than just passively accepting AI outputs.

The Duty To Advocate

When our research uncovers deep-seated distrust or potential harm — like the fear of job displacement — our job has only just begun. We have an ethical duty to advocate for that user. In my experience directing research teams, I’ve seen that the hardest part of our job is often carrying these uncomfortable truths into rooms where decisions are made. We must champion these findings and advocate for design and strategy shifts that prioritize user well-being, even when it challenges the product roadmap.

I personally try to approach presenting this information as an opportunity for growth and improvement, rather than a negative challenge.

For example, instead of stating “Users don’t trust our AI because they fear job displacement,” I might frame it as “Addressing user concerns about job displacement presents a significant opportunity to build deeper trust and long-term loyalty by demonstrating our commitment to responsible AI development and exploring features that enhance human capabilities rather than replace them.” This reframing can shift the conversation from a defensive posture to a proactive, problem-solving mindset, encouraging collaboration and innovative solutions that ultimately benefit both the user and the business.

It’s no secret that one of the more appealing areas for businesses to use AI is in workforce reduction. In reality, there will be many cases where businesses look to cut 10–20% of a particular job family due to the perceived efficiency gains of AI. However, giving users the opportunity to shape the product may steer it in a direction that makes them feel safer than if they do not provide feedback. We should not attempt to convince users they are wrong if they are distrustful of AI. We should appreciate that they are willing to provide feedback, creating an experience that is informed by the human experts who have long been doing the task being automated.

Conclusion: Building Our Digital Future On A Foundation Of Trust

The rise of AI is not the first major technological shift our field has faced. However, it presents one of the most significant psychological challenges of our current time. Building products that are not just usable but also responsible, humane, and trustworthy is our obligation as UX professionals.

Trust is not a soft metric. It is the fundamental currency of any successful human-technology relationship. By understanding its psychological roots, measuring it with rigor, and designing for it with intent and integrity, we can move from creating “intelligent” products to building a future where users can place their confidence in the tools they use every day. A trust that is earned and deserved.

Table 1: Published Academic Scales Measuring Trust In Automated Systems

Survey Tool Name Focus Key Dimensions of Trust Citation
Trust in Automation Scale 12-item questionnaire to assess trust between people and automated systems. Measures a general level of trust, including reliability, predictability, and confidence. Jian, J. Y., Bisantz, A. M., & Drury, C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 53–71.
Trust of Automated Systems Test (TOAST) 9-item questionnaire used to measure user trust in a variety of automated systems, designed for quick administration. Divided into two main subscales: Understanding (user’s comprehension of the system) and Performance (belief in the system’s effectiveness). Wojton, H. M., Porter, D., Lane, S. T., Bieber, C., & Madhavan, P. (2020). Initial validation of the trust of automated systems test (TOAST). The Journal of Social Psychology, 160(6), 735–750.
Trust in Automation Questionnaire A 19-item questionnaire capable of predicting user reliance on automated systems. A 2-item subscale is available for quick assessments; the full tool is recommended for a more thorough analysis. Measures 6 factors: Reliability, Understandability, Propensity to trust, Intentions of developers, Familiarity, Trust in automation Körber, M. (2018). Theoretical considerations and development of a questionnaire to measure trust in automation. In Proceedings 20th Triennial Congress of the IEA. Springer.
Human Computer Trust Scale 12-item questionnaire created to provide an empirically sound tool for assessing user trust in technology. Divided into two key factors:
  1. Benevolence and Competence: This dimension captures the positive attributes of the technology
  2. Perceived Risk: This factor measures the user’s subjective assessment of the potential for negative consequences when using a technical artifact.
Gulati, S., Sousa, S., & Lamas, D. (2019). Design, development and evaluation of a human-computer trust scale. Behaviour & Information Technology.

Appendix A: Trust-Building Tactics Checklist

To design for calibrated trust, consider implementing the following tactics, organized by the four pillars of trust:

1. Ability (Competence) & Predictability

  • Set Clear Expectations: Use onboarding, tooltips, and empty states to honestly communicate the AI’s strengths and weaknesses.
  • Show Confidence Levels: Display the AI’s uncertainty (e.g., “70% chance,” “85% confident”) or highlight less certain parts of its output.
  • Provide Explainability (XAI): Offer useful, human-understandable rationales for the AI’s decisions or recommendations (e.g., “Because you frequently read X, I’m recommending Y”).
  • Design for Graceful Error Handling:
    • ✅ Acknowledge errors humbly (e.g., “My apologies, I misunderstood that request.”).
    • ✅ Provide easy paths to correction (e.g., prominent feedback mechanisms like thumbs up/down).
    • ✅ Show that feedback is being used (e.g., “Thank you, I’m learning from your correction”).
  • Design for “I Don’t Know” Responses:
    • ✅ Acknowledge limitations honestly.
    • ✅ Prioritize a high-quality, helpful fallback experience when the AI cannot answer.
  • Prioritize Transparency: Clearly communicate the AI’s capabilities and limitations, especially if responses are generated.

2. Benevolence

  • Address Existential Fears: When users express concerns (e.g., job displacement), validate their concerns and reframe the feedback into actionable insights about collaborative tools.
  • Prioritize User Well-being: Advocate for design and strategy shifts that prioritize user well-being, even if it challenges the product roadmap.
  • Emphasize User Control: Provide clear ways for users to give feedback, correct errors, or opt out of AI features.

3. Integrity

  • Adhere to Ethical Principles: Ensure the AI operates on predictable, ethical principles, demonstrating fairness and honesty.
  • Prioritize Genuine Transparency: Clearly communicate the limitations, biases, and uncertainties of AI systems; avoid overstating capabilities or obscuring risks.
  • Conduct Rigorous, Independent Evaluations: Seek external validation of system performance, fairness, and robustness to mitigate bias.
  • Engage Diverse Stakeholders: Involve users, ethics experts, and impacted communities in the design and evaluation processes.
  • Be Accountable for Outcomes: Establish clear mechanisms for redress and continuous improvement for societal impacts, even if unintended.
  • Educate the Public: Help users understand how AI works, its limitations, and how to evaluate AI products.
  • Advocate for Ethical Guidelines: Support the development and implementation of industry standards and policies that promote responsible AI.
  • Be Wary of Marketing Hype: Critically assess claims about AI “trustworthiness” and demand verifiable data.
  • Publish Negative Findings: Be transparent about challenges, failures, or ethical dilemmas encountered during research.

4. Predictability & Reliability

  • Set Clear Expectations: Use onboarding, tooltips, and empty states to honestly communicate what the AI is good at and where it might struggle.
  • Show Confidence Levels: Instead of just giving an answer, have the AI signal its own uncertainty.
  • Provide Explainability (XAI) and Transparency: Offer a useful, human-understandable rationale for AI decisions.
  • Design for Graceful Error Handling: Acknowledge errors humbly and provide easy paths to correction.
  • Prioritize the “I Don’t Know” Experience: Frame “I don’t know” as a feature and design a high-quality fallback experience.
  • Prioritize Transparency (UX Writing): Clearly communicate the AI’s capabilities and limitations, especially when it’s still learning or if responses are generated.
  • Design for Explainability (UX Writing): Explain the reasoning behind AI recommendations, decisions, or complex outputs.
]]>
hello@smashingmagazine.com (Victor Yocco)
<![CDATA[How To Minimize The Environmental Impact Of Your Website]]> https://smashingmagazine.com/2025/09/how-minimize-environmental-impact-website/ https://smashingmagazine.com/2025/09/how-minimize-environmental-impact-website/ Thu, 18 Sep 2025 10:00:00 GMT Climate change is the single biggest health threat to humanity, accelerated by human activities such as the burning of fossil fuels, which generate greenhouse gases that trap the sun’s heat.

The average temperature of the Earth’s surface is now 1.2°C warmer than it was in the late 1800s, and that rise is projected to more than double by the end of the century.

The consequences of climate change include intense droughts, water shortages, severe fires, melting polar ice, catastrophic storms, and declining biodiversity.

The Internet Is A Significant Part Of The Problem

Shockingly, the internet is responsible for higher global greenhouse emissions than the aviation industry, and is projected to be responsible for 14% of all global greenhouse gas emissions by 2040.

If the internet were a country, it would be the 4th largest polluter in the world, and it would represent the largest coal-powered machine on the planet.

But how can something digital like the internet produce harmful emissions?

Internet emissions come from powering the infrastructure that drives the internet, such as the vast data centres and data transmission networks that consume huge amounts of electricity.

Internet emissions also come from the global manufacturing, distribution, and usage of the estimated 30.5 billion devices (phones, laptops, etc.) that we use to access the internet.

Unsurprisingly, internet related emissions are increasing, given that 60% of the world’s population spend, on average, 40% of their waking hours online.

We Must Urgently Reduce The Environmental Impact Of The Internet

As responsible digital professionals, we must act quickly to minimise the environmental impact of our work.

It is encouraging to see the UK government promote action by adding “Minimise environmental impact” to their best practice design principles, but there is still too much talking and not enough corrective action taking place within our industry.

The reality of many tightly constrained, fast-paced, and commercially driven web projects is that minimising environmental impact is far from the agenda.

So how can we make the environment more of a priority and talk about it in ways that stakeholders will listen to?

A eureka moment on a recent web optimisation project gave me an idea.

My Eureka Moment

I led a project to optimise the mobile performance of www.talktofrank.com, a government drug advice website that aims to keep everyone safe from harm.

Mobile performance is critically important for the success of this service to ensure that users with older mobile devices and those using slower network connections can still access the information they need.

Our work to minimise page weights focused on purely technical changes that our developer made, following recommendations from tools such as Google Lighthouse. These changes reduced the size of the webpages in a key user journey by up to 80%, which resulted in pages downloading up to 30% faster and the carbon footprint of the journey being reduced by 80%.

We hadn’t set out to reduce the carbon footprint, but seeing these results led to my eureka moment.

I realised that by minimising page weights, you improve performance (which is a win for users and service owners) and also consume less energy (due to needing to transfer and store less data), creating additional benefits for the planet — so everyone wins.

This felt like a breakthrough because business, user, and environmental requirements are often at odds with one another. By focussing on minimising websites to be as simple, lightweight, and easy to use as possible, you get benefits that extend beyond the triple bottom line of people, planet, and profit to include performance and purpose.

So why is ‘minimising’ such a great digital sustainability strategy?

  • Profit
    Website providers win because their website becomes more efficient and more likely to meet its intended outcomes, and a lighter site should also lead to lower hosting bills.
  • People
    People win because they get to use a website that downloads faster, is quick and easy to use because it's been intentionally designed to be as simple as possible, enabling them to complete their tasks with the minimum amount of effort and mental energy.
  • Performance
    Lightweight webpages download faster so perform better for users, particularly those on older devices and on slower network connections.
  • Planet
    The planet wins because the amount of energy (and associated emissions) that is required to deliver the website is reduced.
  • Purpose
    We know that we do our best work when we feel a sense of purpose. It is hugely gratifying as a digital professional to know that our work is doing good in the world and contributing to making things better for people and the environment.

In order to prioritise the environment, we need to be able to speak confidently in a language that will resonate with the business and ensure that any investment in time and resources yields the widest range of benefits possible.

So even if you feel that the environment is a very low priority on your projects, focusing on minimising page weights to improve performance (which is generally high on the agenda) presents the perfect trojan horse for an environmental agenda (should you need one).

Doing the right thing isn’t always easy, but we’ve done it before when managing to prioritise issues such as usability, accessibility, and inclusion on digital projects.

Many of the things that make websites easier to use, more accessible, and more effective also help to minimise their environmental impact, so the work involved will feel familiar and achievable rather than yet another new thing to learn!

So this all makes sense in theory, but what’s the master plan to use when putting it into practice?

The Masterplan

The masterplan for creating websites that have minimal environmental impact is to focus on offering the maximum value from the minimum input of energy.

It’s an adaptation of Buckminster Fuller’s ‘Dymaxion’ principle, which is one of his many progressive and groundbreaking sustainability strategies for living and surviving on a planet with finite resources.

Inputs of energy include both the electrical energy that is required to operate websites and also the mental energy that is required to use them.

You can achieve this by minimising websites to their core content, features, and functionality, ensuring that everything can be justified from the perspective of meeting a business or user need. This means that anything that isn’t adding a proportional amount of value to the amount of energy it requires to provide it should be removed.

So that’s the masterplan, but how do you put it into practice?

Decarbonise Your Highest Value User Journeys

I’ve developed a new approach called ‘Decarbonising User Journeys’ that will help you to minimise the environmental impact of your website and maximise its performance.

Note: The approach deliberately focuses on optimising key user journeys and not entire websites to keep things manageable and to make it easier to get started.

The secret here is to start small, demonstrate improvements, and then scale.

The approach consists of five simple steps:

  1. Identify your highest value user journey,
  2. Benchmark your user journey,
  3. Set targets,
  4. Decarbonise your user journey,
  5. Track and share your progress.

Here’s how it works.

Step 1: Identify Your Highest Value User Journey

Your highest value user journey might be the one that your users value the most, the one that brings you the highest revenue, or the one that is fundamental to the success of your organisation.

You could also focus on a user journey that you know is performing particularly badly and has the potential to deliver significant business and user benefits if improved.

You may have lots of important user journeys, and it’s fine to decarbonise multiple journeys in parallel if you have the resources, but I’d recommend starting with one first to keep things simple.

To bring this to life, let’s consider a hypothetical example of a premiership football club trying to decarbonise its online ticket-buying journey that receives high levels of traffic and is responsible for a significant proportion of its weekly income.

Step 2: Benchmark Your User Journey

Once you’ve selected your user journey, you need to benchmark it in terms of how well it meets user needs, the value it offers your organisation, and its carbon footprint.

It is vital that you understand the job it needs to do and how well it is doing it before you start to decarbonise it. There is no point in removing elements of the journey in an effort to reduce its carbon footprint, for example, if you compromise its ability to meet a key user or business need.

You can benchmark how well your user journey is meeting user needs by conducting user research alongside analysing existing customer feedback. Interviews with business stakeholders will help you to understand the value that your journey is providing the organisation and how well business needs are being met.

You can benchmark the carbon footprint and performance of your user journey using online tools such as Cardamon, Ecograder, Website Carbon Calculator, Google Lighthouse, and Bioscore. Make sure you have your analytics data to hand to help get the most accurate estimate of your footprint.

To use these tools, simply add the URL of each page of your journey, and they will give you a range of information such as page weight, energy rating, and carbon emissions. Google Lighthouse works slightly differently: it runs in the browser (built into Chrome DevTools, with an extension also available) and generates a really useful, detailed performance report rather than a carbon rating.
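
If you want to sanity-check the page weights these tools report, you can approximate the figure yourself with the browser’s Resource Timing API. The snippet below is a minimal sketch (strip the type assertions if you paste it straight into the DevTools console); it simply sums the compressed transfer size of everything the page requested.

```typescript
// Minimal sketch: approximate a page's "weight" from the Resource Timing API.
// Run on a fully loaded page.
const resources = performance.getEntriesByType("resource") as PerformanceResourceTiming[];

// transferSize is the compressed, over-the-wire size in bytes. It reports 0 for
// cached resources and for cross-origin resources without Timing-Allow-Origin.
const resourceBytes = resources.reduce((sum, entry) => sum + entry.transferSize, 0);

// Include the HTML document itself via the navigation entry.
const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
const totalKB = (resourceBytes + (nav?.transferSize ?? 0)) / 1024;

console.log(`Approximate page weight: ${totalKB.toFixed(1)} KB over the wire`);
```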

A great way to bring your benchmarking scores to life is to visualise them in a similar way to how you would present a customer journey map or service blueprint.

This example focuses on just communicating the carbon footprint of the user journey, but you can also add more swimlanes to communicate how well the journey is performing from a user and business perspective, too, adding user pain points, quotes, and business metrics where appropriate.

I’ve found that adding the energy efficiency ratings is really effective because it’s an approach that people recognise from their household appliances. This adds useful context compared to just showing weights of CO2 (in grams or kilograms), which are generally meaningless to people.

Within my benchmarking reports, I also add a set of benchmarking data for every page within the user journey. This gives your stakeholders a more detailed breakdown and a simple summary alongside a snapshot of the benchmarked page.

Your benchmarking activities will give you a really clear picture of where remedial work is required from an environmental, user, and business point of view.

In our football user journey example, it’s clear that the ‘News’ and ‘Tickets’ pages need some attention to reduce their carbon footprint, so they would be a sensible priority for decarbonising.

Step 3: Set Targets

Use your benchmarking results to help you set targets to aim for, such as a carbon budget, energy efficiency, maximum page weight, and minimum Google Lighthouse performance targets for each individual page, in addition to your existing UX metrics and business KPIs.

There is no right or wrong way to set targets. Choose what you think feels achievable and viable for your business, and you’ll only learn how reasonable and achievable they are when you begin to decarbonise your user journeys.

Setting targets is important because it gives you something to aim for and keeps you focused and accountable. The quantitative nature of this work is great because it gives you the ability to quickly demonstrate the positive impact of your work, making it easier to justify the time and resources you are dedicating to it.

Step 4: Decarbonise Your User Journey

Your objective now is to decarbonise your user journey by minimising page weights, improving your Lighthouse performance rating, and minimising pages so that they meet both user and business needs in the most efficient, simple, and effective way possible.

It’s up to you how you approach this, depending on the resources and skills that you have: you can focus on specific pages or address a specific problem area, such as heavyweight images or videos, across the entire user journey.

Here’s a list of activities that will all help to reduce the carbon footprint of your user journey:

  • Work through the recommendations in the ‘diagnostics’ section of your Google Lighthouse report to help optimise page performance.
  • Switch to a green hosting provider if you are not already using one. Use the Green Web Directory to help you choose one.
  • Work through the W3C Web Sustainability Guidelines, implementing the most relevant guidelines to your specific user journey.
  • Remove anything that is not adding any user or business value.
  • Reduce the amount of information on your webpages to make them easier to read and less overwhelming for people.
  • Replace content with a lighter-weight alternative (such as swapping a video for text) if the lighter-weight alternative provides the same value.
  • Optimise assets such as photos, videos, and code to reduce file sizes.
  • Remove any barriers to accessing your website and any distractions that are getting in the way.
  • Re-use familiar components and design patterns to make your websites quicker and easier to use.
  • Write simply and clearly in plain English to help people get the most value from your website and to help them avoid making mistakes that waste time and energy to resolve.
  • Fix any usability issues you identified during your benchmarking to ensure that your website is as easy to use and useful as possible.
  • Ensure your user journey is as accessible as possible so the widest possible audience can benefit from using it, offsetting the environmental cost of providing the website.

Step 5: Track And Share Your Progress

As you decarbonise your user journeys, use the benchmarking tools from step 2 to track your progress against the targets you set in step 3 and share your progress as part of your wider sustainability reporting initiatives.

All being well at this point, you will have the numbers to demonstrate how the performance of your user journey has improved and also how you have managed to reduce its carbon footprint.

Share these results with the business as soon as you have them to help you secure the resources to continue the work and initiate similar work on other high-value user journeys.

You should also start to communicate your progress with your users.

It’s important that they are made aware of the carbon footprint of their digital activity and empowered to make informed choices about the environmental impact of the websites that they use.

Ideally, every website should communicate the emissions generated from viewing their pages to help people make these informed choices and also to encourage website providers to minimise their emissions if they are being displayed publicly.

Often, people will have no choice but to use a specific website to complete a specific task, so it is the responsibility of the website provider to ensure the environmental impact of using their website is as small as possible.

You can also help to raise awareness of the environmental impact of websites and what you are doing to minimise your own impact by publishing a digital sustainability statement, such as Unilever’s, as shown below.

A good digital sustainability statement should acknowledge the environmental impact of your website, what you have done to reduce it, and what you plan to do next to minimise it further.

As an industry, we should normalise publishing digital sustainability statements in the same way that accessibility statements have become a standard addition to website footers.

Useful Decarbonising Principles

Keep these principles in mind to help you decarbonise your user journeys:

  • More doing and less talking.
    Start decarbonising your user journeys as soon as possible to accelerate your learning and positive change.
  • Start small.
    Starting small by decarbonising an individual journey makes it easier to get started and generates results to demonstrate value faster.
  • Aim to do more with less.
    Minimise what you offer to ensure you are providing the maximum amount of value for the energy you are consuming.
  • Make your website as useful and as easy to use as possible.
    Useful websites can justify the energy they consume to provide them, ensuring they are net positive in terms of doing more good than harm.
  • Focus on progress over perfection.
    Websites are never finished or perfect, but they can always be improved; every small improvement you make will make a difference.

Start Decarbonising Your User Journeys Today

Decarbonising user journeys shouldn’t be done as a one-off, reserved for the next time that you decide to redesign or replatform your website; it should happen on a continual basis as part of your broader digital sustainability strategy.

We know that websites are never finished and that the best websites continually improve as both user and business needs change. I’d like to encourage people to adopt the same mindset when it comes to minimising the environmental impact of their websites.

Decarbonising will happen most effectively when digital professionals challenge themselves on a daily basis to ‘minimise’ the things they are working on.

This avoids building up ‘carbon debt’ (compounding technical and design debt within our websites), which is always harder to remove retrospectively than to avoid in the first place.

By taking a pragmatic approach, such as optimising high-value user journeys and aligning with business metrics such as performance, we stand the best possible chance of making digital sustainability a priority.

You’ll have noticed that, other than using website carbon calculator tools, this approach doesn’t require any skills that don’t already exist within typical digital teams today. This is great because it means you’ve already got the skills that you need to do this important work.

I would encourage everyone to raise the issue of the environmental impact of the internet in their next team meeting and to try this decarbonising approach to create better outcomes for people, profit, performance, purpose, and the planet.

Good luck!

]]>
hello@smashingmagazine.com (James Chudley)
<![CDATA[SerpApi: A Complete API For Fetching Search Engine Data]]> https://smashingmagazine.com/2025/09/serpapi-complete-api-fetching-search-engine-data/ https://smashingmagazine.com/2025/09/serpapi-complete-api-fetching-search-engine-data/ Tue, 16 Sep 2025 17:00:00 GMT This article is sponsored by SerpApi

SerpApi leverages the power of search engine giants, like Google, DuckDuckGo, Baidu, and more, to put together the most pertinent and accurate search result data for your users from the comfort of your app or website. It’s customizable, adaptable, and offers an easy integration into any project.

What do you want to put together?

  • Search information on a brand or business for SEO purposes;
  • Input data to train AI models, such as a large language model for a customer service chatbot;
  • Top news and websites to pick from for a subscriber newsletter;
  • Flight information for your travel app, collected via the Google Flights API;
  • Price comparisons for the same product across different platforms;
  • Extra definitions and examples for words that can be offered alongside a language learning app.

The list goes on.

In other words, you get to leverage the most comprehensive source of data on the internet for any number of needs, from competitive SEO research and tracking news to parsing local geographic data and even completing personal background checks for employment.

Start With A Simple GET Request

The results from the search API are only a URL request away for those who want a super quick start. Just add your search details in the URL parameters. Say you need the search result for “Stone Henge” from the location “Westminster, England, United Kingdom” in language “en-GB”, and country of search origin “uk” from the domain “google.co.uk”. Here’s how simple it is to put the GET request together:
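
Something along the lines of the sketch below should do it. The parameter names (q, location, hl, gl, google_domain, api_key) follow SerpApi’s documented Google Search API, but double-check them against the current documentation, and note that YOUR_API_KEY is a placeholder for your own key:

```typescript
// Sketch of the GET request described above. Parameter names are taken from
// SerpApi's Google Search API docs; verify them before relying on this.
const params = new URLSearchParams({
  engine: "google",
  q: "Stone Henge",
  location: "Westminster, England, United Kingdom",
  hl: "en-GB",             // interface language
  gl: "uk",                // country of the search origin
  google_domain: "google.co.uk",
  api_key: "YOUR_API_KEY", // placeholder for your own SerpApi key
});

const response = await fetch(`https://serpapi.com/search.json?${params}`);
const results = await response.json();

// The response is structured JSON; organic_results holds the main listings.
console.log(results.organic_results?.[0]);
```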

Then there’s the impressive list of libraries that seamlessly integrate the APIs into mainstream programming languages and frameworks such as JavaScript, Ruby, .NET, and more.

Give It A Quick Try

Want to give it a spin? Sign up and start for free, or tinker with the SerpApi’s live playground without signing up. The playground allows you to choose which search engine to target, and you can fill in the values for all the basic parameters available in the chosen API to customize your search. On clicking “Search”, you get the search result page and its extracted JSON data.

If you need to get a feel for the full API first, you can explore their easy-to-grasp web documentation before making any decision. You have the chance to work with all of the APIs to your satisfaction before committing to it, and when that time comes, SerpApi’s multiple price plans tackle anywhere between an economic few hundred searches a month and bulk queries fit for large corporations.

What Data Do You Need?

Beyond the rudimentary search scraping, SerpApi provides a range of configurations, features, and additional APIs worth considering.

Geolocation

Capture the global trends, or refine down to more localized particulars by names of locations or Google’s place identifiers. SerpApi’s optimized routing of requests ensures accurate retrieval of search result data from any location worldwide. If locations themselves are the answers to your queries — say, a cycle trail to be suggested in a fitness app — those can be extracted and presented as maps using SerpApi’s Google Maps API.

Structured JSON

Although search engines reveal results in a tidy user interface, deriving data into your application could cause you to end up with a large data dump to be sifted through — but not if you’re using SerpApi.

SerpApi pulls data in a well-structured JSON format, even for the popular kinds of enriched search results, such as knowledge graphs, review snippets, sports league stats, ratings, product listings, AI overview, and more.

Speedy Results

SerpApi’s baseline performance can take care of timely search data for real-time requirements. But what if you need more? SerpApi’s Ludicrous Speed option, easily enabled from the dashboard with an upgrade, provides a super-fast response time. More than twice as fast as usual, thanks to twice the server power.

There’s also Ludicrous Speed Max, which allocates four times more server resources for your data retrieval. Data that is time-sensitive and for monitoring things in real-time, such as sports scores and tracking product prices, will lose its value if it is not handled in a timely manner. Ludicrous Speed Max guarantees no delays, even for a large-scale enterprise haul.

You can also use a relevant SerpApi API to home in on your category of interest, like the Google Flights API, Amazon API, Google News API, etc., to get fresh and apt results.

If you don’t need the full depth of the search API, there’s a Light version available for Google Search, Google Images, Google Videos, Google News, and DuckDuckGo Search APIs.

Search Controls & Privacy

Need the results asynchronously picked up? Want a refined output using advanced search API parameters and a JSON Restrictor? Looking for search outcomes for specific devices? Don’t want auto-corrected query results? There’s no shortage of ways to configure SerpApi to get exactly what you need.

Additionally, if you prefer not to have your search metadata on their servers, simply turn on the “ZeroTrace” mode that’s available for selected plans.

The X-Ray

Save yourself a headache trying to match up what you see on a search result page with its extracted JSON data. SerpApi’s X-Ray tool shows you exactly where each piece of data comes from. It’s available for free in all plans.

Inclusive Support

If you don’t have the expertise or resources for tackling the validity of scraping search results, here’s what SerpApi says:

“SerpApi, LLC assumes scraping and parsing liabilities for both domestic and foreign companies unless your usage is otherwise illegal”.

You can reach out and have a conversation with them regarding the legal protections they offer, as well as inquire about anything else you might want to know about including SerpApi in your project, such as pricing, expected performance, on-demand options, and technical support. Just drop a message via their contact page.

In other words, the SerpApi team has your back with the support and expertise to get the most from your fetched data.

Try SerpApi Free

That’s right, you can get your hands on SerpApi today and start fetching data with absolutely no commitment, thanks to a free starter plan that gives you up to 250 free search queries. Give it a try and then bump up to one of the reasonably-priced monthly subscription plans with generous search limits.

]]>
hello@smashingmagazine.com (Preethi Sam)
<![CDATA[Functional Personas With AI: A Lean, Practical Workflow]]> https://smashingmagazine.com/2025/09/functional-personas-ai-lean-practical-workflow/ https://smashingmagazine.com/2025/09/functional-personas-ai-lean-practical-workflow/ Tue, 16 Sep 2025 08:00:00 GMT Traditional personas suck for UX work. They obsess over marketing metrics like age, income, and job titles while missing what actually matters in design: what people are trying to accomplish.

Functional personas, on the other hand, focus on what people are trying to do, not who they are on paper. With a simple AI‑assisted workflow, you can build and maintain personas that actually guide design, content, and conversion decisions.

  • Keep users front of mind with task‑driven personas,
  • Skip fragile demographics; center on goals, questions, and blockers,
  • Use AI to process your messy inputs fast and fill research gaps,
  • Validate lightly, ship confidently, and keep them updated.

In this article, I want to breathe new life into a stale UX asset.

For too long, personas have been something that many of us just created, despite the considerable work that goes into them, only to find they have limited usefulness.

I know that many of you may have given up on them entirely, but I am hoping in this post to encourage you that it is possible to create truly useful personas in a lightweight way.

Why Personas Still Matter

Personas give you a shared lens. When everyone uses the same reference point, you cut debate and make better calls. For UX designers, developers, and digital teams, that shared lens keeps you from designing in silos and helps you prioritize work that genuinely improves the experience.

I use personas as a quick test: Would this change help this user complete their task faster, with fewer doubts? If the answer is no (or a shrug), it’s probably a sign the idea isn’t worth pursuing.

From Demographics To Function

Traditional personas tell you someone’s age, job title, or favorite brand. That makes a nice poster, but it rarely changes design or copy.

Functional personas flip the script. They describe:

  • Goals & tasks: What the person is here to achieve.
  • Questions & objections: What they need to know before they act.
  • Touchpoints: How the person interacts with the organization.
  • Service gaps: How the company might be letting this persona down.

When you center on tasks and friction, you get direct lines from user needs to UI decisions, content, and conversion paths.

But remember, this list isn’t set in stone — adapt it to what’s actually useful in your specific situation.

One of the biggest problems with traditional personas was following a rigid template regardless of whether it made sense for your project. We must not fall into that same mistake with functional personas.

The Benefits of Functional Personas

For small startups, functional personas reduce wasted effort. For enterprise teams, they keep sprawling projects grounded in what matters most.

However, because of the way we are going to produce our personas, they provide certain benefits in either case:

  • Lighten the load: They’re easier to update without large research cycles.
  • Stay current: Because they are easy to produce, we can update them more often.
  • Tie to outcomes: Tasks, objections, and proof points map straight to funnels, flows, and product decisions.

We can deliver these benefits because we are going to use AI to help us, rather than carrying out a lot of time-consuming new research.

How AI Helps Us Get There

Of course, doing fresh research is always preferable. But in many cases, it is not feasible due to time or budget constraints. I would argue that using AI to help us create personas based on existing assets is preferable to having no focus on users at all.

AI tools can chew through the inputs you already have (surveys, analytics, chat logs, reviews) and surface patterns you can act on. They also help you scan public conversations around your product category to fill gaps.

I therefore recommend using AI to:

  • Synthesize inputs: Turn scattered notes into clean themes.
  • Spot segments by need: Group people by jobs‑to‑be‑done, not demographics.
  • Draft quickly: Produce first‑pass personas and sample journeys in minutes.
  • Iterate with stakeholders: Update on the fly as you get feedback.

AI doesn’t remove the need for traditional research. Rather, it is a way of extracting more value from the scattered insights into users that already exist within an organization or online.

The Workflow

Here’s how to move from scattered inputs to usable personas. Each step builds on the last, so treat it as a cycle you can repeat as projects evolve.

1. Set Up A Dedicated Workspace

Create a dedicated space within your AI tool for this work. Most AI platforms offer project management features that let you organize files and conversations:

  • In ChatGPT and Claude, use “Projects” to store context and instructions.
  • In Perplexity, Gemini, and Copilot, similar functionality is referred to as “Spaces.”

This project space becomes your central repository where all uploaded documents, research data, and generated personas live together. The AI will maintain context between sessions, so you won’t have to re-upload materials each time you iterate. This structured approach makes your workflow more efficient and helps the AI deliver more consistent results.

2. Write Clear Instructions

Next, you can brief your AI project so that it understands what you want from it. For example:

“Act as a user researcher. Create realistic, functional personas using the project files and public research. Segment by needs, tasks, questions, pain points, and goals. Show your reasoning.”

Asking for a rationale gives you a paper trail you can defend to stakeholders.

3. Upload What You’ve Got (Even If It’s Messy)

This is where things get really powerful.

Upload everything (and I mean everything) you can put your hands on relating to the user. Old surveys, past personas, analytics screenshots, FAQs, support tickets, review snippets; dump them all in. The more varied the sources, the stronger the triangulation.

4. Run Focused External Research

Once you have done that, you can supplement that data by getting AI to carry out “deep research” about your brand. Have AI scan recent (I often focus on the last year) public conversations for your brand, product space, or competitors. Look for:

  • Who’s talking and what they’re trying to do;
  • Common questions and blockers;
  • Phrases people use (great for copywriting).

Save the report you get back into your project.

5. Propose Segments By Need

Once you have done that, ask AI to suggest segments based on tasks and friction points (not demographics). Push back until each segment is distinct, observable, and actionable. If two would behave the same way in your flow, merge them.

This takes a little bit of trial and error and is where your experience really comes into play.

6. Generate Draft Personas

Now that you have your segments, the next step is to draft your personas. Use a simple template so the document actually gets read and used. If your personas become too complicated, people will not read them. Each persona should:

  • State goals and tasks,
  • List objections and blockers,
  • Highlight pain points,
  • Show touchpoints,
  • Identify service gaps.

Below is a sample template you can work with:

# Persona Title: e.g. Savvy Shopper
- Person's Name: e.g. John Smith.
- Age: e.g. 24
- Job: e.g. Social Media Manager

"A quote that sums up the persona's general attitude"

## Primary Goal
What they’re here to achieve (1–2 lines).

## Key Tasks
• Task 1
• Task 2
• Task 3

## Questions & Objections
• What do they need to know before they act?
• What might make them hesitate?

## Pain Points
• Where do they get stuck?
• What feels risky, slow, or confusing?

## Touchpoints
• What channels are they most commonly interacting with?

## Service Gaps
• How is the organization currently failing this persona?

Remember, you should customize this to reflect what will prove useful within your organization.

7. Validate

It is important to validate that what the AI has produced is realistic. Obviously, no persona is a true representation, as it is a snapshot in time of a hypothetical user. However, we do want it to be as accurate as possible.

Share your drafts with colleagues who interact regularly with real users, such as people in support roles or research teams. Where possible, test with a handful of users. Then cut anything you can’t defend and correct any errors that are identified.

Troubleshooting & Guardrails

As you work through the above process, you will encounter problems. Here are common pitfalls and how to avoid them:

  • Too many personas?
    Merge until each one changes a design or copy decision. Three strong personas beat seven weak ones.
  • Stakeholder wants demographics?
    Only include details that affect behavior. Otherwise, leave them out. Suggest separate personas for other functions (such as marketing).
  • AI hallucinations?
    Always ask for a rationale or sources. Cross‑check with your own data and customer‑facing teams.
  • Not enough data?
    Mark assumptions clearly, then validate with quick interviews, surveys, or usability tests.

Making Personas Useful In Practice

The most important thing to remember is to actually use your personas once they’ve been created. They can easily become forgotten PDFs rather than active tools. Instead, personas should shape your work and be referenced regularly. Here are some ways you can put personas to work:

  • Navigation & IA: Structure menus by top tasks.
  • Content & Proof: Map objections to FAQs, case studies, and microcopy.
  • Flows & UI: Streamline steps to match how people think.
  • Conversion: Match CTAs to personas’ readiness, goals, and pain points.
  • Measurement: Track KPIs that map to personas, not vanity metrics.

With this approach, personas evolve from static deliverables into dynamic reference points your whole team can rely on.

Keep Them Alive

Treat personas as a living toolkit. Schedule a refresh every quarter or after major product changes. Rerun the research pass, regenerate summaries, and archive outdated assumptions. The goal isn’t perfection; it’s keeping them relevant enough to guide decisions.

Bottom Line

Functional personas are faster to build, easier to maintain, and better aligned with real user behavior. By combining AI’s speed with human judgment, you can create personas that don’t just sit in a slide deck; they actively shape better products, clearer interfaces, and smoother experiences.

]]>
hello@smashingmagazine.com (Paul Boag)
<![CDATA[Creating Elastic And Bounce Effects With Expressive Animator]]> https://smashingmagazine.com/2025/09/creating-elastic-bounce-effects-expressive-animator/ https://smashingmagazine.com/2025/09/creating-elastic-bounce-effects-expressive-animator/ Mon, 15 Sep 2025 10:00:00 GMT This article is sponsored by Expressive

In the world of modern web design, SVG images are used everywhere, from illustrations to icons to background effects, and are universally prized for their crispness and lightweight size. While static SVG images play an important role in web design, most of the time their true potential is unlocked only when they are combined with motion.

Few things add more life and personality to a website than a well-executed SVG animation. But not all animations have the same impact in terms of digital experience. For example, elastic and bounce effects have a unique appeal in motion design because they bring a sense of realism into movement, making animations more engaging and memorable.


However, anyone who has dived into animating SVGs knows the technical hurdles involved. Creating a convincing elastic or bounce effect traditionally requires handling complex CSS keyframes or wrestling with JavaScript animation libraries. Even when using an SVG animation editor, it will most likely require you to manually add the keyframes and adjust the easing functions between them, which can become a time-consuming process of trial and error, no matter the level of experience you have.

This is where Expressive Animator shines. It allows creators to apply elastic and bounce effects in seconds, bypassing the tedious work of manual keyframe editing. And the result is always exceptional: animations that feel alive, produced with a fraction of the effort.

Using Expressive Animator To Create An Elastic Effect

Creating an elastic effect in Expressive Animator is remarkably simple, fast, and intuitive, since the effect is built right into the software as an easing function. This means you only need two keyframes (start and end) to make the effect, and the software will automatically handle the springy motion in between. Even better, the elastic easing can be applied to any animatable property (e.g., position, scale, rotation, opacity, morph, etc.), giving you a consistent way to add it to your animations.

Before we dive into the tutorial, take a look at the video below to see what you will learn to create and the entire process from start to finish.

Once you hit the “Create project” button, you can use the Pen and Ellipse tools to create the artwork that will be animated, or you can simply copy and paste the artwork below.

Press the A key on your keyboard to switch to the Node tool, then select the String object and move its handle to the center-right point of the artboard. Don’t worry about precision, as the snapping will do all the heavy lifting for you. This will bend the shape and add keyframes for the Morph animator.

Next, press the V key on your keyboard to switch to the Selection tool. With this tool enabled, select the Ball, move it to the right, and place it in the middle of the string. Once again, snapping will do all the hard work, allowing you to position the ball exactly where you want to, while auto-recording automatically adds the appropriate keyframes.

You can now replay the animation and disable auto-recording by clicking on the Auto-Record button again.

As you can see when replaying, the direction in which the String and Ball objects are moving is wrong. Fortunately, we can fix this extremely easily just by reversing the keyframes. To do this, select the keyframes in the timeline and right-click to open the context menu and choose Reverse. This will reverse the keyframes, and if you replay the animation, you will see that the direction is now correct.

With this out of the way, we can finally add the elastic effect. Select all the keyframes in the timeline and click on the Custom easing button to open a dialog with easing options. From the dialog, choose Elastic and set the oscillations to 4 and the stiffness to 2.5.

That’s it! Click anywhere outside the easing dialog to close it and replay the animation to see the result.

The animation can be exported as well. Press Cmd/Ctrl + E on your keyboard to open the export dialog and choose from various export options, ranging from vectorized formats, such as SVG and Lottie, to rasterized formats, such as GIF and video.

For this specific animation, we’re going to choose the SVG export format. Expressive Animator allows you to choose between three different types of SVG, depending on the technology used for animation: SMIL, CSS, or JavaScript.

Each of these technologies has different strengths and weaknesses, but for this tutorial, we are going to choose SMIL. This is because SMIL-based animations are widely supported, even on Safari browsers, and can be used as background images or embedded in HTML pages using the <img> tag. In fact, Andy Clarke recently wrote all about SMIL animations here at Smashing Magazine if you want a full explanation of how it works.

You can visualize the exported SVG in the following CodePen demo:

Conclusion

Elastic and bounce effects have long been among the most desirable but time-consuming techniques in motion design. By integrating them directly into its easing functions, Expressive Animator removes the complexity of manual keyframe manipulation and transforms what used to be a technical challenge into a creative opportunity.

The best part is that getting started with Expressive Animator comes with zero risk. The software offers a full 7-day free trial without requiring an account, so you can download it instantly and begin experimenting with your own designs right away. After the trial ends, you can buy Expressive Animator with a one-time payment, no subscription required. This will give you a perpetual license covering both Windows and macOS.

To help you get started even faster, I’ve prepared some extra resources for you. You’ll find the source files for the animations created in this tutorial, along with a curated list of useful links that will guide you further in exploring Expressive Animator and SVG animation. These materials are meant to give you a solid starting point so you can learn, experiment, and build on your own with confidence.

]]>
hello@smashingmagazine.com (Marius Sarca)
<![CDATA[From Data To Decisions: UX Strategies For Real-Time Dashboards]]> https://smashingmagazine.com/2025/09/ux-strategies-real-time-dashboards/ https://smashingmagazine.com/2025/09/ux-strategies-real-time-dashboards/ Fri, 12 Sep 2025 15:00:00 GMT I once worked with a fleet operations team that monitored dozens of vehicles in multiple cities. Their dashboard showed fuel consumption, live GPS locations, and real-time driver updates. Yet the team struggled to see what needed urgent attention. The problem was not a lack of data but a lack of clear indicators to support decision-making. There were no priorities, alerts, or context to highlight what mattered most at any moment.

Real-time dashboards are now critical decision-making tools in industries like logistics, manufacturing, finance, and healthcare. However, many of them fail to help users make timely and confident decisions, even when they show live data.

Designing for real-time use is very different from designing static dashboards. The challenge is not only presenting metrics but enabling decisions under pressure. Real-time users face limited time and a high cognitive load. They need clarity on actions, not just access to raw data.

This requires interface elements that support quick scanning, pattern recognition, and guided attention. Layout hierarchy, alert colors, grouping, and motion cues all help, but they must be driven by a deeper strategy: understanding what the user must decide in that moment.

This article explores practical UX strategies for real-time dashboards that enable real decisions. Instead of focusing only on visual best practices, it looks at how user intent, personalization, and cognitive flow can turn raw data into meaningful, timely insights.

Designing for Real-Time Comprehension: Helping Users Stay Focused Under Pressure

A GPS app not only shows users their location but also helps them decide where to go next. In the same way, a real-time dashboard should go beyond displaying the latest data. Its purpose is to help users quickly understand complex information and make informed decisions, especially in fast-paced environments with short attention spans.

How Users Process Real-Time Updates

Humans have limited cognitive capacity, so they can only process a small amount of data at once. Without proper context or visual cues, rapidly updating dashboards can overwhelm users and shift attention away from key information.

To address this, I use the following approaches:

  • Delta Indicators and Trend Sparklines
    Delta indicators show value changes at a glance, while sparklines are small line charts that reveal trends over time in a compact space. For example, a sales dashboard might show a green upward arrow next to revenue to indicate growth, along with a sparkline displaying sales trends over the past week.
  • Subtle Micro-Animations
    Small animations highlight changes without distracting users. Research in cognitive psychology shows that such animations effectively draw attention, helping users notice updates while staying focused. For instance, a soft pulse around a changing metric can signal activity without overwhelming the viewer.
  • Mini-History Views
    Showing a short history of recent changes reduces reliance on memory. For example, a dashboard might let users scroll back a few minutes to review updates, supporting better understanding and verification of data trends.

Common Challenges In Real-Time Dashboards

Many live dashboards fail when treated as static reports instead of dynamic tools for quick decision-making.

In my early projects, I made this mistake, resulting in cluttered layouts, distractions, and frustrated users.

Typical errors include the following:

  • Overcrowded Interfaces: Presenting too many metrics competes for users’ attention, making it hard to focus.
  • Flat Visual Hierarchy: Without clear emphasis on critical data, users might focus on less important information.
  • No Record of Changes: When numbers update instantly with no explanation, users can feel lost or confused.
  • Excessive Refresh Rates: Not all data needs constant updates. Updating too frequently can create unnecessary motion and cognitive strain.

Managing Stress And Cognitive Overload

Under stress, users depend on intuition and focus only on immediately relevant information. If a dashboard updates too quickly or shows conflicting alerts, users may delay actions or make mistakes. It is important to:

  • Prioritize the most important data first to avoid overwhelming the user.
  • Offer snapshot or pause options so users can take time to process information.
  • Use clear indicators to show if an action is required or if everything is operating normally.

In real-time environments, the best dashboards balance speed with calmness and clarity. They are not just data displays but tools that promote live thinking and better decisions.

Enabling Personalization For Effective Data Consumption

Many analytics tools let users build custom dashboards, but these design principles guide layouts that support decision-making. Personalization options such as custom metric selection, alert preferences, and update pacing help manage cognitive load and improve data interpretation.

| Cognitive Challenge | UX Risk in Real-Time Dashboards | Design Strategy to Mitigate |
| --- | --- | --- |
| Users can’t track rapid changes | Confusion, missed updates, second-guessing | Use delta indicators, change animations, and trend sparklines |
| Limited working memory | Overload from too many metrics at once | Prioritize key KPIs, apply progressive disclosure |
| Visual clutter under stress | Tunnel vision or misprioritized focus | Apply a clear visual hierarchy, minimize non-critical elements |
| Unclear triggers or alerts | Decision delays, incorrect responses | Use thresholds, binary status indicators, and plain language |
| Lack of context/history | Misinterpretation of sudden shifts | Provide micro-history, snapshot freeze, or hover reveal |

Common Cognitive Challenges in Real-Time Dashboards and UX Strategies to Overcome Them.

Designing For Focus: Using Layout, Color, And Animation To Drive Real-Time Decisions

Layout, color, and animation do more than improve appearance. They help users interpret live data quickly and make decisions under time pressure. Since users respond to rapidly changing information, these elements must reduce cognitive load and highlight key insights immediately.

  • Creating a Visual Hierarchy to Guide Attention.
    A clear hierarchy directs users’ eyes to key metrics. Arrange elements so the most important data stands out. For example, place critical figures like sales volume or system health in the upper left corner to match common scanning patterns. Limit visible elements to about five to prevent overload and ease processing, and group related data into cards to improve scannability and help users focus without distraction.
  • Using Color Purposefully to Convey Meaning.
    Color communicates meaning in data visualization. Red or orange indicates critical alerts or negative trends, signaling urgency. Blue and green represent positive or stable states, offering reassurance. Neutral tones like gray support background data and make key colors stand out. Ensure accessibility with strong contrast and pair colors with icons or labels. For example, bright red can highlight outages while muted gray marks historical logs, keeping attention on urgent issues.
  • Supporting Comprehension with Subtle Animation.
    Animation should clarify, not distract. Smooth transitions of 200 to 400 milliseconds communicate changes effectively. For instance, upward motion in a line chart reinforces growth. Hover effects and quick animations provide feedback and improve interaction. Thoughtful motion makes changes noticeable while maintaining focus.

Layout, color, and animation create an experience that enables fast, accurate interpretation of live data. Real-time dashboards support continuous monitoring and decision-making by reducing mental effort and highlighting anomalies or trends. Personalization allows users to tailor dashboards to their roles, improving relevance and efficiency. For example, operations managers may focus on system health metrics while sales directors prioritize revenue KPIs. This adaptability makes dashboards dynamic, strategic tools.

| Element | Placement & Visual Weight | Purpose & Suggested Colors | Animation Use Case & Effect |
| --- | --- | --- | --- |
| Primary KPIs | Center or top-left; bold, large font | Highlight critical metrics; typically stable states | Value updates: smooth increase (200–400 ms) |
| Controls | Top or left panel; light, minimal visual weight | Provide navigation/filtering; neutral color schemes | User actions: subtle feedback (100–150 ms) |
| Charts | Middle or right; medium emphasis | Show trends and comparisons; use blue/green for positives, grey for neutral | Chart trends: trail or fade (300–600 ms) |
| Alerts | Edge of dashboard or floating; high contrast (bold) | Signal critical issues; red/orange for alerts, yellow/amber for warnings | Quick animations for appearance; highlight changes |

Design Elements, Placement, Color, and Motion Strategies for Effective Real-Time Dashboards.

Clarity In Motion: Designing Dashboards That Make Change Understandable

If users cannot interpret changes quickly, the dashboard fails regardless of its visual design. Over time, I have developed methods that reduce confusion and make change feel intuitive rather than overwhelming.

One of the most effective tools I use is the sparkline, a compact line chart that shows a trend over time and is typically placed next to a key performance indicator. Unlike full charts, sparklines omit axes and labels. Their simplicity makes them powerful, since they instantly show whether a metric is trending up, down, or steady. For example, placing a sparkline next to monthly revenue immediately reveals if performance is improving or declining, even before the viewer interprets the number.
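
To make that concrete, here is a minimal sketch of a sparkline generated as an inline SVG from a short series of values. It is purely illustrative (the container id is a hypothetical placeholder), and a real dashboard would more likely lean on a charting library:

```typescript
// Minimal sparkline sketch: turn a short series of values into an inline SVG.
function sparkline(values: number[], width = 120, height = 32): string {
  const min = Math.min(...values);
  const max = Math.max(...values);
  const range = max - min || 1;                // guard against flat series
  const step = Math.max(values.length - 1, 1); // guard against a single value

  const points = values
    .map((value, i) => {
      const x = (i / step) * width;
      const y = height - ((value - min) / range) * height;
      return `${x.toFixed(1)},${y.toFixed(1)}`;
    })
    .join(" ");

  // Accent the latest data point, since current performance matters most.
  const [lastX, lastY] = points.split(" ").pop()!.split(",");

  return `<svg width="${width}" height="${height}" role="img" aria-label="Trend">
    <polyline points="${points}" fill="none" stroke="currentColor" stroke-width="2" />
    <circle cx="${lastX}" cy="${lastY}" r="3" fill="currentColor" />
  </svg>`;
}

// Usage next to a KPI, e.g. the last seven days of revenue
// ("#revenue-trend" is a hypothetical container element):
document.querySelector("#revenue-trend")!.innerHTML = sparkline([42, 44, 41, 47, 49, 48, 53]);
```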

When using sparklines effectively, follow these principles:

  • Pair sparklines with metrics such as revenue, churn rate, or user activity so users can see both the value and its trajectory at a glance.
  • Simplify by removing clutter like axis lines or legends unless they add real value.
  • Highlight the latest data point with a dot or accent color since current performance often matters more than historical context.
  • Limit the time span. Too many data points compress the sparkline and hurt readability. A focused window, such as the last 7 or 30 days, keeps the trend clear.
  • Use sparklines in comparative tables. When placed in rows (for example, across product lines or regions), they reveal anomalies or emerging patterns that static numbers may hide.
Interactive P&L Performance Dashboard with Forecast and Variance Tracking.

I combine sparklines with directional indicators like arrows and percentage deltas to support quick interpretation.

For example, pairing “▲ +3.2%” with a rising sparkline shows both the direction and scale of change. I do not rely only on color to convey meaning.

Since 1 in 12 men is color-blind, using red and green alone can exclude some users. To ensure accessibility, I add shapes and icons alongside color cues.

Micro-animations provide subtle but effective signals. This counters change blindness — our tendency to miss non-salient changes.

  • When numbers update, I use fade-ins or count-up transitions to indicate change without distraction (see the sketch after this list).
  • If a list reorders, such as when top-performing teams shift positions, a smooth slide animation under 300 milliseconds helps users maintain spatial memory. These animations reduce cognitive friction and prevent disorientation.
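
As a rough illustration of the count-up idea mentioned above, the sketch below animates a number toward its new value over roughly 300 milliseconds and skips the motion entirely for users who prefer reduced motion. The element id and duration are assumptions, not fixed recommendations:

```typescript
// Sketch: animate a KPI toward its new value with a short count-up transition,
// skipping the motion entirely for users who prefer reduced motion.
function countUpTo(el: HTMLElement, newValue: number, duration = 300): void {
  const reduceMotion = window.matchMedia("(prefers-reduced-motion: reduce)").matches;
  const startValue = parseFloat(el.textContent ?? "0") || 0;

  if (reduceMotion) {
    el.textContent = newValue.toFixed(0); // update instantly, no animation
    return;
  }

  const startTime = performance.now();

  const frame = (now: number): void => {
    const progress = Math.min((now - startTime) / duration, 1);
    el.textContent = (startValue + (newValue - startValue) * progress).toFixed(0);
    if (progress < 1) requestAnimationFrame(frame);
  };

  requestAnimationFrame(frame);
}

// Usage with a hypothetical metric element:
// countUpTo(document.querySelector("#on-time-deliveries")!, 1243);
```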

Layout is critical for clarifying change:

  • I use modular cards with consistent spacing, alignment, and hierarchy to highlight key metrics.
  • Cards are arranged in a sortable grid, allowing filtering by severity, recency, or relevance.
  • Collapsible sections manage dense information while keeping important data visible for quick scanning and deeper exploration.

For instance, in a logistics dashboard, a card labeled “On-Time Deliveries” may display a weekly sparkline. If performance dips, the line flattens or turns slightly red, a downward arrow appears with a −1.8% delta, and the updated number fades in. This gives instant clarity without requiring users to open a detailed chart.

All these design choices support fast, informed decision-making. In high-velocity environments like product analytics, logistics, or financial operations, dashboards must do more than present data. They must reduce ambiguity and help teams quickly detect change, understand its impact, and take action.

Making Reliability Visible: Designing for Trust In Real-Time Data Interfaces

In real-time data environments, reliability is not just a technical feature. It is the foundation of user trust. Dashboards are used in high-stakes, fast-moving contexts where decisions depend on timely, accurate data. Yet these systems often face less-than-ideal conditions such as unreliable networks, API delays, and incomplete datasets. Designing for these realities is not just damage control. It is essential for making data experiences usable and trustworthy.

When data lags or fails to load, it can mislead users in serious ways:

  • A dip in a trendline may look like a market decline when it is only a delay in the stream.
  • Missing categories in a bar chart, if not clearly signaled, can lead to flawed decisions.

To mitigate this:

  • Every data point should be paired with its condition.
  • Interfaces must show not only what the data says but also how current or complete it is.

One effective strategy is replacing traditional spinners with skeleton UIs. These are greyed-out, animated placeholders that suggest the structure of incoming data. They set expectations, reduce anxiety, and show that the system is actively working. For example, in a financial dashboard, users might see the outline of a candlestick chart filling in as new prices arrive. This signals that data is being refreshed, not stalled.
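
As a rough sketch of that pattern, a dashboard card can render grey placeholder blocks immediately and swap in the real content once data arrives. The class names, markup, and endpoint below are illustrative assumptions:

```typescript
// Sketch: show a skeleton placeholder while a card's data loads, then swap in
// the real content. Class names, markup, and the endpoint are illustrative.
async function loadRevenueCard(card: HTMLElement): Promise<void> {
  // 1. Render grey placeholder blocks that mirror the card's final layout.
  card.innerHTML = `
    <div class="skeleton skeleton-title"></div>
    <div class="skeleton skeleton-value"></div>
    <div class="skeleton skeleton-chart"></div>
  `;

  try {
    const response = await fetch("/api/revenue/today"); // hypothetical endpoint
    const data = await response.json();

    // 2. Replace the skeleton with the real content once it arrives.
    card.innerHTML = `
      <h3>Revenue today</h3>
      <p class="kpi">${data.total}</p>
    `;
  } catch {
    // 3. Say so clearly rather than leaving the skeleton pulsing forever.
    card.innerHTML = `<p class="stale">Could not refresh this card.</p>`;
  }
}
```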

Handling Data Unavailability

When data is unavailable, I show cached snapshots from the most recent successful load, labeled with timestamps such as “Data as of 10:42 AM.” This keeps users aware of what they are viewing.

In operational dashboards such as logistics or monitoring systems, this approach lets users act confidently even when real-time updates are temporarily out of sync.

Managing Connectivity Failures

To handle connectivity failures, I use auto-retry mechanisms with exponential backoff, giving the system several chances to recover quietly before notifying the user.

If retries fail, I maintain transparency with clear banners such as “Offline… Reconnecting…” In one product, this approach prevented users from reloading entire dashboards unnecessarily, especially in areas with unreliable Wi-Fi.
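
A minimal sketch of that retry pattern might look like the following. The retry count, delays, and banner helpers are illustrative assumptions rather than fixed values:

```typescript
// Sketch: retry a failing data fetch with exponential backoff before surfacing
// an "Offline… Reconnecting…" banner.
declare function showOfflineBanner(message: string): void; // hypothetical UI helper
declare function hideOfflineBanner(): void;                // hypothetical UI helper

async function fetchWithBackoff(url: string, maxRetries = 4): Promise<unknown> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      const response = await fetch(url);
      if (!response.ok) throw new Error(`HTTP ${response.status}`);
      hideOfflineBanner();
      return await response.json();
    } catch (error) {
      if (attempt === maxRetries) {
        showOfflineBanner("Offline… Reconnecting…");
        throw error;
      }
      // Quiet recovery first: wait 1s, 2s, 4s, 8s… plus a little jitter.
      const delay = 1000 * 2 ** attempt + Math.random() * 250;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw new Error("unreachable");
}
```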

Ensuring Reliability with Accessibility

Reliability strongly connects with accessibility:

  • Real-time interfaces must announce updates without disrupting user focus, beyond just screen reader compatibility.
  • ARIA live regions quietly narrate significant changes in the background, giving screen reader users timely updates without confusion (see the sketch after this list).
  • All controls remain keyboard-accessible.
  • Animations follow motion-reduction preferences to support users with vestibular sensitivities.
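
Here is a minimal sketch of the live-region pattern mentioned in the list above. The announcement wording and the threshold for what counts as a significant change are assumptions you would tune to your own dashboard:

```typescript
// Sketch: announce significant metric changes through a polite ARIA live region
// so screen reader users hear updates without losing their place.
const liveRegion = document.createElement("div");
liveRegion.setAttribute("aria-live", "polite");
liveRegion.setAttribute("role", "status");
liveRegion.className = "visually-hidden"; // hidden visually, still read aloud
document.body.appendChild(liveRegion);

function announceChange(metricName: string, oldValue: number, newValue: number): void {
  const changePercent = oldValue === 0 ? 0 : ((newValue - oldValue) / oldValue) * 100;
  if (Math.abs(changePercent) < 5) return; // skip noise, narrate only real shifts (assumed threshold)

  liveRegion.textContent =
    `${metricName} is now ${newValue}, ` +
    `${changePercent > 0 ? "up" : "down"} ${Math.abs(changePercent).toFixed(1)} percent.`;
}

// Usage: announceChange("On-time deliveries", 812, 760);
```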

Data Freshness Indicator

A compact but powerful pattern I often implement is the Data Freshness Indicator, a small widget (sketched below) that:

  • Shows sync status,
  • Displays the last updated time,
  • Includes a manual refresh button.

This improves transparency and reinforces user control. Since different users interpret these cues differently, advanced systems allow personalization. For example:

  • Analysts may prefer detailed logs of update attempts.
  • Business users might see a simple status such as “Live”, “Stale”, or “Paused”.
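
Pulled together, the freshness indicator can be as small as the sketch below. The status labels, markup, and container element are illustrative assumptions:

```typescript
// Sketch of a Data Freshness Indicator: sync status, last-updated time, and a
// manual refresh button. Labels, markup, and the container are illustrative.
type SyncStatus = "Live" | "Stale" | "Paused";

function renderFreshnessIndicator(
  container: HTMLElement,
  lastUpdated: Date,
  status: SyncStatus,
  onRefresh: () => void
): void {
  const time = lastUpdated.toLocaleTimeString([], { hour: "2-digit", minute: "2-digit" });

  container.innerHTML = `
    <span class="status status-${status.toLowerCase()}">${status}</span>
    <span class="timestamp">Data as of ${time}</span>
    <button type="button" class="refresh">Refresh</button>
  `;

  container.querySelector("button.refresh")!.addEventListener("click", onRefresh);
}

// Usage with a hypothetical widget container:
// renderFreshnessIndicator(document.querySelector("#freshness")!, new Date(), "Live",
//   () => console.log("manual refresh requested"));
```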

Reliability in data visualization is not about promising perfection. It is about creating a resilient, informative experience that supports human judgment by revealing the true state of the system.

When users understand what the dashboard knows, what it does not, and what actions it is taking, they are more likely to trust the data and make smarter decisions.

Real-World Case Study

In my work across logistics, hospitality, and healthcare, the challenge has always been to distill complexity into clarity. A well-designed dashboard is more than functional; it serves as a trusted companion in decision-making, embedding clarity, speed, and confidence from the start.

1. Fleet Management Dashboard

A client in the car rental industry struggled with fragmented operational data. Critical details like vehicle locations, fuel usage, maintenance schedules, and downtime alerts were scattered across static reports, spreadsheets, and disconnected systems. Fleet operators had to manually cross-reference data sources, even for basic dispatch tasks, which caused missed warnings, inefficient routing, and delays in response.

We solved these issues by redesigning the dashboard strategically, focusing on both layout improvements and how users interpret and act on information.

Strategic Design Improvements and Outcomes:

  • Instant visibility of KPIs
    High-contrast cards at the top of the dashboard made key performance indicators instantly visible.
    Example: Fuel consumption anomalies that previously went unnoticed for days were flagged within hours, enabling quick corrective action.
  • Clear trend and pattern visualization
    Booking forecasts, utilization graphs, and city-by-city comparisons highlighted performance trends.
    Example: A weekday-weekend booking chart helped a regional manager spot underperformance in one city and plan targeted vehicle redistribution.
  • Unified operational snapshot
    Cost, downtime, and service schedules were grouped into one view.
    Result: The operations team could assess fleet health in under five minutes each morning instead of using multiple tools.
  • Predictive context for planning
    Visual cues showed peak usage periods and historical demand curves.
    Result: Dispatchers prepared for forecasted spikes, reducing customer wait times and improving resource availability.
  • Live map with real-time status
    A color-coded map displays vehicle status: green for active, red for urgent attention, gray for idle.
    Result: Supervisors quickly identified inactive or delayed vehicles and rerouted resources as needed.
  • Role-based personalization
    Personalization options were built in, allowing each role to customize dashboard views.
    Example: Fleet managers prioritized financial KPIs, while technicians filtered for maintenance alerts and overdue service reports.

Strategic Impact: The dashboard redesign was not only about improving visuals. It changed how teams interacted with data. Operators no longer needed to search for insights, as the system presented them in line with tasks and decision-making. The dashboard became a shared reference for teams with different goals, enabling real-time problem solving, fewer manual checks, and stronger alignment across roles. Every element was designed to build both understanding and confidence in action.

2. Hospitality Revenue Dashboard

One of our clients, a hospitality group with 11 hotels in the UAE, faced a growing strategic gap. They had data from multiple departments, including bookings, events, food and beverage, and profit and loss, but it was spread across disconnected dashboards.

Strategic Design Improvements and Outcomes:

  • All revenue streams (rooms, restaurants, bars, and profit and loss) were consolidated into a single filterable dashboard.
    Example: A revenue manager could filter by property to see if a drop in restaurant revenue was tied to lower occupancy or was an isolated issue. The structure supported daily operations, weekly reviews, and quarterly planning.
  • Disconnected charts and metrics were replaced with a unified visual narrative showing how revenue streams interacted.
    Example: The dashboard revealed how event bookings influenced bar sales or staffing. This shifted teams from passive data consumption to active interpretation.
  • AI modules for demand forecasting, spend prediction, and pricing recommendations were embedded in the dashboard.
    Result: Managers could test rate changes with interactive sliders and instantly view effects on occupancy, revenue per available room, and food and beverage income. This enabled proactive scenario planning.
  • Compact, color-coded sparklines were placed next to each key metric to show short- and long-term trends.
    Result: These visuals made it easy to spot seasonal shifts or channel-specific patterns without switching views or opening separate reports.
  • Predictive overlays such as forecast bands and seasonality markers were added to performance graphs.
    Example: If occupancy rose but lagged behind seasonal forecasts, the dashboard surfaced the gap, prompting early action such as promotions or issue checks.

Strategic Impact: By aligning the dashboard structure with real pricing and revenue strategies, the client shifted from static reporting to forward-looking decision-making. This was not a cosmetic interface update. It was a complete rethinking of how data could support business goals. The result enabled every team, from finance to operations, to interpret data based on their specific roles and responsibilities.

3. Healthcare Interoperability Dashboard

In healthcare, timely and accurate access to patient information is essential. A multi-specialist hospital client struggled with fragmented data. Doctors had to consult separate platforms such as electronic health records, lab results, and pharmacy systems to understand a patient’s condition. This fragmented process slowed decision-making and increased risks to patient safety.

Strategic Design Improvements and Outcomes:

  • Patient medical history was integrated to unify lab reports, medications, and allergy information in one view.
    Example: A cardiologist could review recent cardiac markers with active medications and allergy alerts in the same place, enabling faster diagnosis and treatment.
  • Lab report tracking was upgraded to show test type, date, status, and a clear summary with labels such as Pending, Completed, and Awaiting Review.
    Result: Trends were displayed with sparklines and color-coded indicators, helping clinicians quickly spot abnormalities or improvements.
  • A medication management module was added for prescription entry, viewing, and exporting. It included dosage, frequency, and prescribing physician details.
    Example: Specialists could customize it to highlight drugs relevant to their practice, reducing overload and focusing on critical treatments.
  • Rapid filtering options were introduced to search by patient name, medical record number, date of birth, gender, last visit, insurance company, or policy number.
    Example: Billing staff could locate patients by insurance details, while clinicians filtered records by visits or demographics.
  • Visual transparency was provided through interactive tooltips explaining alert rationales and flagged data points.
    Result: Clinicians gained immediate context, such as the reason a lab value was marked as critical, supporting informed and timely decisions.

Strategic Impact: The design encourages active decision-making instead of passive data review. Because alerts and flagged data points carry their rationale with them, clinicians immediately see why a value is marked critical and can move straight to implications and next steps without delay.

Key UX Insights from the Above 3 Examples

  • Design should drive conclusions, not just display data.
    Contextualized data enabled faster and more confident decisions. For example, a logistics dashboard flagged high-risk delays so dispatchers could act immediately.
  • Complexity should be structured, not eliminated.
    Tools used timelines, layering, and progressive disclosure to handle dense information. A financial tool groups transactions by time blocks, easing cognitive load without losing detail.
  • Trust requires clear system logic.
    Users trusted predictive alerts only after understanding their triggers. A healthcare interface added a "Why this alert?" option that explained the reasoning.
  • The aim is clarity and action, not visual polish.
    Redesigns improved speed, confidence, and decision-making. In real-time contexts, delays caused by confusion are more harmful than an unpolished design.

Final Takeaways

Real-time dashboards are not about overwhelming users with data. They are about helping them act quickly and confidently. The most effective dashboards reduce noise, highlight the most important metrics, and support decision-making in complex environments. Success lies in balancing visual clarity with cognitive ease while accounting for human limits like memory, stress, and attention alongside technical needs.

Do:

  • Prioritize key metrics in a clear order so priorities are obvious. For instance, a support manager may track open tickets before response times.
  • Use subtle micro-animations and small visual cues to indicate changes, helping users spot trends without distraction.
  • Display data freshness and sync status to build trust.
  • Plan for edge cases like incomplete or offline data to keep the experience consistent.
  • Ensure accessibility with high contrast, ARIA labels, and keyboard navigation.

Don’t:

  • Overcrowd the interface with too many metrics.
  • Rely only on color to communicate critical information.
  • Update all data at once or too often, which can cause overload.
  • Hide failures or delays; transparency helps users adapt.

Over time, I’ve come to see real-time dashboards as decision assistants rather than control panels. When users say, “This helps me stay in control,” it reflects a design built on empathy that respects cognitive limits and enhances decision-making. That is the true measure of success.

]]>
hello@smashingmagazine.com (Karan Rawal)
<![CDATA[Integrating CSS Cascade Layers To An Existing Project]]> https://smashingmagazine.com/2025/09/integrating-css-cascade-layers-existing-project/ https://smashingmagazine.com/2025/09/integrating-css-cascade-layers-existing-project/ Wed, 10 Sep 2025 10:00:00 GMT You can always get a fantastic overview of things in Stephenie Eckles’ article, “Getting Started With CSS Cascade Layers”. But let’s talk about the experience of integrating cascade layers into real-world code, the good, the bad, and the spaghetti.

I could have created a sample project for a classic walkthrough, but nah, that’s not how things work in the real world. I want to get our hands dirty, like inheriting code with styles that work and no one knows why.

Finding projects without cascade layers was easy. The tricky part was finding one that was messy enough to have specificity and organisation issues, but broad enough to illustrate different parts of cascade layers integration.

Ladies and gentlemen, I present you with this Discord bot website by Drishtant Ghosh. I’m deeply grateful to Drishtant for allowing me to use his work as an example. This project is a typical landing page with a navigation bar, a hero section, a few buttons, and a mobile menu.

You see how it looks perfect on the outside. Things get interesting, however, when we look at the CSS styles under the hood.

Understanding The Project

Before we start throwing @layers around, let’s get a firm understanding of what we’re working with. I cloned the GitHub repo, and since our focus is working with CSS Cascade Layers, I’ll focus only on the main page, which consists of three files: index.html, index.css, and index.js.

Note: I didn’t include other pages of this project as it’d make this tutorial too verbose. However, you can refactor the other pages as an experiment.

The index.css file is over 450 lines of code, and skimming through it, I can see some red flags right off the bat:

  • There’s a lot of code repetition with the same selectors pointing to the same HTML element.
  • There are quite a few #id selectors, which one might argue shouldn’t be used in CSS (and I am one of those people).
  • #botLogo is defined twice and over 70 lines apart.
  • The !important keyword is used liberally throughout the code.

And yet the site works. There is nothing “technically” wrong here, which is another reason CSS is a big, beautiful monster — errors are silent!

Planning The Layer Structure

Now, some might be thinking, “Can’t we simply move all of the styles into a single layer, like @layer legacy and call it a day?”

You could… but I don’t think you should.

Think about it: If more layers are added after the legacy layer, they should override the styles contained in the legacy layer, because layer priority follows declaration order: layers declared later carry higher priority.

/* new takes priority */
@layer legacy, new;

/* legacy takes priority */
@layer new, legacy;

That said, we must remember that the site’s existing styles make liberal use of the !important keyword. And when that happens, the order of cascade layers gets reversed. So, even though the layers are outlined like this:

@layer legacy, new;

…any styles with an !important declaration suddenly shake things up. In this case, the priority order becomes:

  1. !important styles in the legacy layer (most powerful),
  2. !important styles in the new layer,
  3. Normal styles in the new layer,
  4. Normal styles in the legacy layer (least powerful).
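
To make that hierarchy concrete, here is a minimal sketch of the reversal; the selectors and colours are purely illustrative:

@layer legacy, new;

@layer legacy {
  a { color: red !important; }  /* 1. most powerful */
}

@layer new {
  a { color: blue !important; } /* 2. */
  a { color: green; }           /* 3. */
}

@layer legacy {
  a { color: orange; }          /* 4. least powerful */
}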

I just wanted to clear that part up. Let’s continue.

We know that cascade layers handle specificity by creating an explicit order where each layer has a clear responsibility, and later layers always win.

So, I decided to split things up into five distinct layers:

  • reset: Browser default resets like box-sizing, margins, and paddings.
  • base: Default styles of HTML elements, like body, h1, p, a, etc., including default typography and colours.
  • layout: Major page structure stuff for controlling how elements are positioned.
  • components: Reusable UI segments, like buttons, cards, and menus.
  • utilities: Single helper modifiers that do just one thing and do it well.

This is merely how I like to break things out and organize styles. Zell Liew, for example, has a different set of four buckets that could be defined as layers.

There’s also the concept of dividing things up even further into sublayers:

@layer components {
  /* sub-layers */
  @layer buttons, cards, menus;
}

/* or this: */
@layer components.buttons, components.cards, components.menus;

That might come in handy, but I also don’t want to overly abstract things. That might be a better strategy for a project that’s scoped to a well-defined design system.

Another thing we could leverage is unlayered styles and the fact that any styles not contained in a cascade layer get the highest priority:

@layer legacy { a { color: red !important; } }
@layer reset { a { color: orange !important; } }
@layer base { a { color: yellow !important; } }

/* unlayered */
a { color: green !important; } /* highest priority */

But I like the idea of keeping all styles organized in explicit layers because it keeps things modular and maintainable, at least in this context.

Let’s move on to adding cascade layers to this project.

Integrating Cascade Layers

We need to define the layer order at the top of the file:

@layer reset, base, layout, components, utilities;

This makes it easy to tell which layer takes precedence over which (they get more priority from left to right), and now we can think in terms of layer responsibility instead of selector weight. Moving forward, I’ll proceed through the stylesheet from top to bottom.

First, I noticed that the Poppins font was imported in both the HTML and CSS files, so I removed the CSS import and left the one in index.html, as that’s generally recommended for quickly loading fonts.

Next is the universal selector (*) styles, which include classic reset styles that are perfect for @layer reset:

@layer reset {
  * {
    margin: 0;
    padding: 0;
    box-sizing: border-box;
  }
}

With that out of the way, the body selector is next. I’m putting this into @layer base because it contains core styles for the project, like backgrounds and fonts:

@layer base {
  body {
    background-image: url("bg.svg"); /* Renamed to bg.svg for clarity */
    font-family: "Poppins", sans-serif;
    /* ... other styles */
  }
}

The way I’m tackling this is that styles in the base layer should generally affect the whole document. So far, nothing on the page has broken.

Swapping IDs For Classes

Following the body element selector is the page loader, which is defined as an ID selector, #loader.

I’m a firm believer in using class selectors over ID selectors as much as possible. It keeps specificity low by default, which prevents specificity battles and makes the code a lot more maintainable.

So, I went into the index.html file and refactored elements with id="loader" to class="loader". In the process, I saw another element with id="page" and changed that at the same time.

While still in the index.html file, I noticed a few div elements missing closing tags. It is astounding how permissive browsers are with that. Anyways, I cleaned those up and moved the <script> tag out of the .heading element to be a direct child of body. Let’s not make it any tougher to load our scripts.

Now that we’ve levelled the specificity playing field by moving IDs to classes, we can drop them into the components layer since a loader is indeed a reusable component:

@layer components {
  .loader {
    width: 100%;
    height: 100vh;
    /* ... */
  }
  .loader .loading {
    /* ... */
  }
  .loader .loading span {
    /* ... */
  }
  .loader .loading span:before {
    /* ... */
  }
}

Animations

Next are keyframes, and this was a bit tricky, but I eventually chose to isolate animations in their own new sixth layer and updated the layer order to include it:

@layer reset, base, layout, components, utilities, animations;

But why place animations as the last layer? Because animations are generally the last to run and shouldn’t be affected by style conflicts.

I searched the project’s styles for @keyframes and dumped them into the new layer:

@layer animations {
  @keyframes loading {
    /* ... */
  }
  @keyframes loading2 {
    /* ... */
  }
  @keyframes pageShow {
    /* ... */
  }
}

This gives a clear distinction of static styles from dynamic ones while also enforcing reusability.

Layouts

The #page selector has the same ID-related issue as #loader, and since we already switched it to a class in the HTML, we can change the selector to .page and drop it in the layout layer, as its main purpose is to control the initial visibility of the content:

@layer layout {
  .page {
    display: none;
  }
}

Custom Scrollbars

Where do we put these? Scrollbars are global elements that persist across the site. This might be a gray area, but I’d say it fits perfectly in @layer base since it’s a global, default feature.

@layer base {
  /* ... */
  ::-webkit-scrollbar {
    width: 8px;
  }
  ::-webkit-scrollbar-track {
    background: #0e0e0f;
  }
  ::-webkit-scrollbar-thumb {
    background: #5865f2;
    border-radius: 100px;
  }
  ::-webkit-scrollbar-thumb:hover {
    background: #202225;
  }
}

I also removed the !important keywords as I came across them.

Navigation

The nav element is pretty straightforward, as it is the main structure container that defines the position and dimensions of the navigation bar. It should definitely go in the layout layer:

@layer layout {
  /* ... */
  nav {
    display: flex;
    height: 55px;
    width: 100%;
    padding: 0 50px; /* Consistent horizontal padding */
    /* ... */
  }
}

Logo

We have three style blocks that are tied to the logo: nav .logo, .logo img, and #botLogo. These selectors are redundant and could benefit from inheritance and component reusability.

Here’s how I’m approaching it:

  1. The nav .logo is overly specific since the logo can be reused in other places. I dropped the nav so that the selector is just .logo. There was also an !important keyword in there, so I removed it.
  2. I updated .logo to be a Flexbox container to help position .logo img, which was previously set with less flexible absolute positioning.
  3. The #botLogo ID is declared twice, so I merged the two rulesets into one and lowered its specificity by making it a .botLogo class. And, of course, I updated the HTML to replace the ID with the class.
  4. The .logo img selector becomes .botLogo, making it the base class for styling all instances of the logo.

Now, we’re left with this:

/* initially .logo img */
.botLogo {
  border-radius: 50%;
  height: 40px;
  border: 2px solid #5865f2;
}

/* initially #botLogo */
.botLogo {
  border-radius: 50%;
  width: 180px;
  /* ... */
}

The difference is that one is used in the navigation and the other in the hero section heading. We can transform the second .botLogo by slightly increasing the specificity with a .heading .botLogo selector. We may as well clean up any duplicated styles as we go.

Let’s place the entire code in the components layer as we’ve successfully turned the logo into a reusable component:

@layer components {
  /* ... */
  .logo {
    font-size: 30px;
    font-weight: bold;
    color: #fff;
    display: flex;
    align-items: center;
    gap: 10px;
  }
  .botLogo {
    aspect-ratio: 1; /* maintains square dimensions with width */
    border-radius: 50%;
    width: 40px;
    border: 2px solid #5865f2;
  }
  .heading .botLogo {
    width: 180px;
    height: 180px;
    background-color: #5865f2;
    box-shadow: 0px 0px 8px 2px rgba(88, 101, 242, 0.5);
    /* ... */
  }
}

This was a bit of work! But now the logo is properly set up as a component that fits perfectly in the new layer architecture.

Navigation List

This is a typical navigation pattern. Take an unordered list (<ul>) and turn it into a flexible container that displays all of the list items horizontally on the same row (with wrapping allowed). It’s a type of navigation that can be reused, which belongs in the components layer. But there’s a little refactoring to do before we add it.

There’s already a .mainMenu class, so let’s lean into that. We’ll swap out any nav ul selectors with that class. Again, it keeps specificity low while making it clearer what that element does.

@layer components {
  /* ... */
  .mainMenu {
    display: flex;
    flex-wrap: wrap;
    list-style: none;
  }
  .mainMenu li {
    margin: 0 4px;
  }
  .mainMenu li a {
    color: #fff;
    text-decoration: none;
    font-size: 16px;
    /* ... */
  }
  .mainMenu li a:where(.active, :hover) {
    color: #fff;
    background: #1d1e21;
  }
  .mainMenu li a.active:hover {
    background-color: #5865f2;
  }
}

There are also two buttons in the code that are used to toggle the navigation between “open” and “closed” states when the navigation is collapsed on smaller screens. It’s tied specifically to the .mainMenu component, so we’ll keep everything together in the components layer. We can combine and simplify the selectors in the process for cleaner, more readable styles:

@layer components {
  /* ... */
  nav :is(.openMenu, .closeMenu) {
    font-size: 25px;
    display: none;
    cursor: pointer;
    color: #fff;
  }
}

I also noticed that several other selectors in the CSS were not used anywhere in the HTML. So, I removed those styles to keep things trim. There are automated ways to go about this, too.

Media Queries

Should media queries have a dedicated layer (@layer responsive), or should they be in the same layer as their target elements? I really struggled with that question while refactoring the styles for this project. I did some research and testing, and my verdict is the latter, that media queries ought to be in the same layer as the elements they affect.

My reasoning is that keeping them together:

  • Maintains responsive styles with their base element styles,
  • Makes overrides predictable, and
  • Flows well with component-based architecture common in modern web development.

However, it also means responsive logic is scattered across layers. Still, that beats the alternative, where there is a gap between the layer where elements are styled and the layer where their responsive behaviors are managed. That separation is a deal-breaker for me because it’s way too easy to update styles in one layer and forget to update the corresponding responsive styles in the other.

The other big point is that media queries in the same layer have the same priority as their elements. This is consistent with my overall goal of keeping the CSS Cascade simple and predictable, free of style conflicts.

Plus, the CSS nesting syntax makes the relationship between media queries and elements super clear. Here’s an abbreviated example of how things look when we nest media queries in the components layer:

@layer components {
  .mainMenu {
    display: flex;
    flex-wrap: wrap;
    list-style: none;
  }
  @media (max-width: 900px) {
    .mainMenu {
      width: 100%;
      text-align: center;
      height: 100vh;
      display: none;
    }
  }
}

This also allows me to nest a component’s child element styles (e.g., nav .openMenu and nav .closeMenu).

@layer components {
  nav {
    & .openMenu {
      display: none;

      @media (max-width: 900px) {
        display: block;
      }
    }
  }
}

Typography & Buttons

The .title and .subtitle can be seen as typography components, so they and their responsive associates go into — you guessed it — the components layer:

@layer components {
  .title {
    font-size: 40px;
    font-weight: 700;
    /* etc. */
  }
  .subtitle {
    color: rgba(255, 255, 255, 0.75);
    font-size: 15px;
    /* etc. */
  }
  @media (max-width: 420px) {
    .title {
      font-size: 30px;
    }
    .subtitle {
      font-size: 12px;
    }
  }
}

What about buttons? Like many websites, this one has a class, .btn, for that component, so we can chuck those styles in there as well:

@layer components {
  .btn {
    color: #fff;
    background-color: #1d1e21;
    font-size: 18px;
    /* etc. */
  }
  .btn-primary {
    background-color: #5865f2;
  }
  .btn-secondary {
    transition: all 0.3s ease-in-out;
  }
  .btn-primary:hover {
    background-color: #5865f2;
    box-shadow: 0px 0px 8px 2px rgba(88, 101, 242, 0.5);
    /* etc. */
  }
  .btn-secondary:hover {
    background-color: #1d1e21;
    background-color: rgba(88, 101, 242, 0.7);
  }
  @media (max-width: 420px) {
    .btn {
      font-size: 14px;
      margin: 2px;
      padding: 8px 13px;
    }
  }
  @media (max-width: 335px) {
    .btn {
      display: flex;
      flex-direction: column;
    }
  }
}

The Final Layer

We haven’t touched the utilities layer yet! I’ve reserved this layer for helper classes that are designed for specific purposes, like hiding content — or, in this case, there’s a .noselect class that fits right in. It has a single reusable purpose: to disable selection on an element.

So, that’s going to be the only style rule in our utilities layer:

@layer utilities {
  .noselect {
    -webkit-touch-callout: none;
    -webkit-user-select: none;
    -khtml-user-select: none;
    -webkit-user-drag: none;
    -moz-user-select: none;
    -ms-user-select: none;
    user-select: none;
  }
}

And that’s it! We’ve completely refactored the CSS of a real-world project to use CSS Cascade Layers. You can compare where we started with the final code.

It Wasn’t All Easy

That’s not to say that working with Cascade Layers was challenging, but there were some sticky points in the process that forced me to pause and carefully think through what I was doing.

I kept some notes as I worked:

  • It’s tough to determine where to start with an existing project.
    However, by defining the layers first and setting their priority levels, I had a framework for deciding how and where to move specific styles, even though I was not totally familiar with the existing CSS. That helped me avoid situations where I might second-guess myself or define extra, unnecessary layers.
  • Browser support is still a thing!
    I mean, Cascade Layers enjoy 94% support coverage as I’m writing this, but you might be one of those sites that needs to accommodate legacy browsers that are unable to support layered styles.
  • It wasn’t clear where media queries fit into the process.
    Media queries put me on the spot to find where they work best: nested in the same layers as their selectors, or in a completely separate layer? I went with the former, as you know.
  • The !important keyword is a juggling act.
    It inverts the entire layering priority system, and this project was littered with instances. Once you start chipping away at them, the existing CSS architecture erodes, so you have to balance refactoring the code against fixing what’s already there, while keeping track of exactly how the styles cascade.

Overall, refactoring a codebase for CSS Cascade Layers is a bit daunting at first glance. The important thing, though, is to acknowledge that it isn’t really the layers that complicate things, but the existing codebase.

It’s tough to completely overhaul someone’s existing approach for a new one, even if the new approach is elegant.

Where Cascade Layers Helped (And Didn’t)

Establishing layers improved the code, no doubt. I’m sure there are some performance gains in there, since we were able to remove unused and conflicting styles, but the real win is a more maintainable set of styles. It’s easier to find what you need, know what specific style rules are doing, and where to insert new styles moving forward.

At the same time, I wouldn’t say that Cascade Layers are a silver bullet solution. Remember, CSS is intrinsically tied to the HTML structure it queries. If the HTML you’re working with is unstructured and suffers from div-itis, then you can safely bet that the effort to untangle that mess is higher and involves rewriting markup at the same time.

Refactoring CSS for cascade layers is most certainly worth it for the maintenance enhancements alone.

It may be “easier” to start from scratch and define layers as you work from the ground up because there’s less inherited overhead and technical debt to sort through. But if you have to start from an existing codebase, you might need to de-tangle the complexity of your styles first to determine exactly how much refactoring you’re looking at.

]]>
hello@smashingmagazine.com (Victor Ayomipo)
<![CDATA[Designing For TV: Principles, Patterns And Practical Guidance (Part 2)]]> https://smashingmagazine.com/2025/09/designing-tv-principles-patterns-practical-guidance/ https://smashingmagazine.com/2025/09/designing-tv-principles-patterns-practical-guidance/ Thu, 04 Sep 2025 10:00:00 GMT Having covered the developmental history and legacy of TV in Part 1, let’s now delve into more practical matters. As a quick reminder, the “10-foot experience” and its reliance on the six core buttons of any remote form the basis of our efforts, and as you’ll see, most principles outlined simply reinforce the unshakeable foundations.

In this article, we’ll sift through the systems, account for layout constraints, and distill the guidelines to understand the essence of TV interfaces. Once we’ve collected all the main ingredients, we’ll see what we can do to elevate these inherently simplistic experiences.

Let’s dig in, and let’s get practical!

The Systems

When it comes to hardware, TVs and set-top boxes are usually a few generations behind phones and computers. Their components are made to run lightweight systems optimised for viewing, energy efficiency, and longevity. Yet even within these constraints, different platforms offer varying performance profiles, conventions, and price points.

Some notable platforms/systems of today are:

  • Roku, the most affordable and popular, but severely bottlenecked by weak hardware.
  • WebOS, most common on LG devices, relies on web standards and runs well on modest hardware.
  • Android TV, considered very flexible and customisable, but relatively demanding hardware-wise.
  • Amazon Fire, based on Android but with a separate ecosystem. It offers smooth performance but is slightly more limited than stock Android.
  • tvOS, by Apple, offering a high-end experience at a high-end price, with very limited customizability.

Despite their differences, all of the platforms above share something in common, and by now you’ve probably guessed that it has to do with the remote. Let’s take a closer look:

If these remotes were stripped down to just the D-pad, OK, and BACK buttons, they would still be capable of successfully navigating any TV interface. It is this shared control scheme that allows for the agnostic approach of this article with broadly applicable guidelines, regardless of the manufacturer.

Having already discussed the TV remote in detail in Part 1, let’s turn to the second part of the equation: the TV screen, its layout, and the fundamental building blocks of TV-bound experiences.

TV Design Fundamentals

The Screen

With almost one hundred years of legacy, TV has accumulated quite some baggage. One recurring topic in modern articles on TV design is the concept of “overscan” — a legacy concept from the era of cathode ray tube (CRT) screens. Back then, the lack of standards in production meant that television sets would often crop the projected image at its edges. To address this inconsistency, broadcasters created guidelines to keep important content from being cut off.

While overscan gets mentioned occasionally, we should call it what it really is — a thing of the past. Modern panels display content with greater precision, making thinking in terms of title and action safe areas rather archaic. Today, we can simply consider the margins and get the same results.

Google calls for a 5% layout margin, while Apple’s layout guidelines advise a 60-point margin at the top and bottom and 80 points on the sides. The standard is not exactly clear, but the takeaway is simple: leave some breathing room between screen edge and content, like you would in any thoughtful layout.
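
As a rough sketch, that guidance might translate to something like the following, where the class names and exact values are illustrative assumptions rather than platform requirements:

/* ~5% breathing room on every side, in the spirit of Google's guidance */
.screen {
  padding: 5vh 5vw;
}

/* Closer to Apple's suggested margins on a 1920x1080 layout */
.screen-alt {
  padding: 60px 80px;
}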

Having left some baggage behind, we can start considering what to put within and outside the defined bounds.

The Layout

Considering the device is made for content consumption, streaming apps such as Netflix naturally come to mind. Broadly speaking, all these interfaces share a common layout structure where a vast collection of content is laid out in a simple grid.

These horizontally scrolling groups (sometimes referred to as “shelves”) resemble rows of a bookcase. Typically, they’ll contain dozens of items that don’t fit into the initial “fold”, so we’ll make sure the last visible item “peeks” from the edge, subtly indicating to the viewer there’s more content available if they continue scrolling.

If we were to define a standard 12-column layout grid, with a 2-column-wide item, we’d end up with something like this:

As you can see, the last item falls outside the “safe” zone.

Tip: A useful trick I discovered when designing TV interfaces was to utilise an odd number of columns. This allows the last item to fall within the defined margins and be more prominent while having little effect on the entire layout. We’ve concluded that overscan is not a prominent issue these days, yet an additional column in the layout helps completely circumvent it. Food for thought!
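
For illustration, here is one minimal way to get that “peek” effect in CSS, sizing cards so roughly six and a half fit per row; the class names, gap, and card count are assumptions, not a prescription:

/* A horizontally scrolling "shelf" whose card width is derived from the
   visible card count, so the last full card sits inside the margins and
   the next one "peeks" from the edge. */
.shelf {
  display: flex;
  gap: 24px;
  overflow-x: auto; /* on TV, scrolling is driven by focus movement */
}

.shelf .card {
  /* roughly 6.5 cards per row: 6 full cards plus a visible "peek" */
  flex: 0 0 calc((100% - 6 * 24px) / 6.5);
}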

Typography

TV design requires us to practice restraint, and this becomes very apparent when working with type. All good typography practices apply to TV design too, but I’d like to point out two specific takeaways.

First, accounting for the distance, everything (including type) needs to scale up. Where 16–18px might suffice for web baseline text, 24px should be your starting point on TV, with the rest of the scale increasing proportionally.

“Typography can become especially tricky in 10-ft experiences. When in doubt, go larger.”

Molly Lafferty (Marvel Blog)

With that in mind, the second piece of advice would be to start with a small scale of five or six sizes and adjust if necessary. The simplicity of a TV experience can, and should, be reflected in the typography itself, and while small, such a scale will do all the “heavy lifting” if set correctly.

What you see in the example above is a scale I reduced from Google and Apple guidelines, with a few size adjustments. Simple as it is, this scale served me well for years, and I have no doubt it could do the same for you.
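
As a rough sketch, such a reduced scale might be expressed as a handful of custom properties. The values below are illustrative, not the ones from the scale shown above:

:root {
  --type-display: 48px;
  --type-title: 36px;
  --type-heading: 28px;
  --type-body: 24px;   /* the 10-foot baseline discussed above */
  --type-caption: 20px;
}

body { font-size: var(--type-body); }
h1   { font-size: var(--type-display); }
h2   { font-size: var(--type-title); }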

Freebie

If you’d like to use my basic reduced type scale Figma design file for kicking off your own TV project, feel free to do so!

Color

Imagine watching TV at night with the device being the only source of light in the room. You open up the app drawer and select a new streaming app; it loads into a pretty splash screen, and — bam! — a bright interface opens up, which, amplified by the dark surroundings, blinds you for a fraction of a second. That right there is our main consideration when using color on TV.

Built for cinematic experiences and often used in dimly lit environments, TVs lend themselves perfectly to darker and more subdued interfaces. Bright colours, especially pure white (#ffffff), will translate to maximum luminance and may be straining on the eyes. As a general principle, you should rely on a more muted color palette. Slightly tinting brighter elements with your brand color, or undertones of yellow to imitate natural light, will produce less visually unsettling results.

Finally, without a pointer or touch capabilities, it’s crucial to clearly highlight interactive elements. While using bright colors as backdrops may be overwhelming, using them sparingly to highlight element states in a highly contrasting way will work perfectly.
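
A rough sketch of such a palette, with purely illustrative values: a subdued dark base, a warm off-white for text instead of pure white, and one bright accent reserved for highlighted states:

:root {
  --surface: #101114;  /* near-black base instead of a bright backdrop */
  --text: #e9e7df;     /* warm off-white, avoids #ffffff glare */
  --accent: #5865f2;   /* bright colour, used sparingly for highlights */
}

body {
  background-color: var(--surface);
  color: var(--text);
}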

A focus state is the underlying principle of TV navigation. Most commonly, it relies on creating high contrast between the focused and unfocused elements. (Large preview)

This highlighting of UI elements is what TV leans on heavily — and it is what we’ll discuss next.

Focus

In Part 1, we have covered how interacting through a remote implies a certain detachment from the interface, mandating reliance on a focus state to carry the burden of TV interaction. This is done by visually accenting elements to anchor the user’s eyes and map any subsequent movement within the interface.

If you have ever written HTML/CSS, you might recall the use of the :focus CSS pseudo-class. While it’s primarily an accessibility feature on the web, it’s the core of interaction on TV, with more flexibility added in the form of two additional directions thanks to a dedicated D-pad.

Focus Styles

There are a few standard ways to style a focus state. Firstly, there’s scaling — enlarging the focused element, which creates the illusion of depth by moving it closer to the viewer.

Example of scaling elements on focus. This is especially common in cases where only images are used for focusable elements. (Large preview)

Another common approach is to invert background and text colors.

Color inversion on focus, common for highlighting cards. (Large preview)

Finally, a border may be added around the highlighted element.

Example of border highlights on focus. (Large preview)

These styles, used independently or in various combinations, appear in all TV interfaces. While execution may be constrained by the specific system, the purpose remains the same: clear and intuitive feedback, even from across the room.

The three basic styles can be combined to produce more focus state variants. (Large preview)
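
Here is a rough sketch of those three treatments in CSS, assuming focusable .card elements; the variant class names are made up for illustration, and platforms differ in what they allow, so treat these as starting points rather than a spec:

/* 1. Scale the focused element */
.card:focus {
  transform: scale(1.1);
  transition: transform 150ms ease-out;
}

/* 2. Invert background and text colours */
.card-invert:focus {
  background-color: #e9e7df;
  color: #101114;
}

/* 3. Add a border around the highlighted element
   (outline avoids shifting the layout) */
.card-border:focus {
  outline: 4px solid #5865f2;
  outline-offset: 2px;
}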

Having set the foundations of interaction, layout, and movement, we can start building on top of them. The next chapter will cover the most common elements of a TV interface, their variations, and a few tips and tricks for button-bound navigation.

Common TV UI Components

Nowadays, the core user journey on television revolves around browsing (or searching through) a content library, selecting an item, and opening a dedicated screen to watch or listen.

This translates into a few fundamental screens:

  • Library (or Home) for content browsing,
  • Search for specific queries, and
  • A player screen focused on content playback.

These screens are built with a handful of components optimized for the 10-foot experience, and while they are often found on other platforms too, it’s worth examining how they differ on TV.

Menus

Appearing as a horizontal bar along the top edge of the screen, or as a vertical sidebar, the menu helps move between the different screens of an app. While its orientation mostly depends on the specific system, it does seem TV favors the side menu a bit more.

Both menu types share a common issue: the farther the user navigates away from the menu (vertically, toward the bottom for top bars; and horizontally, toward the right for sidebars), the more button presses are required to get back to it. Fortunately, a Back button shortcut is usually added to return focus to the menu immediately, which greatly improves usability.

16:9 posters abide by the same principles but with a horizontal orientation. They are often paired with text labels, which effectively turn them into cards, commonly seen on platforms like YouTube. In the absence of dedicated poster art, they show stills or playback from the videos, matching the aspect ratio of the media itself.

1:1 posters are often found in music apps like Spotify, their shape reminiscent of album art and vinyl sleeves. These squares often get used in other instances, like representing channel links or profile tiles, giving more visual variety to the interface.

All of the above can co-exist within a single app, allowing for richer interfaces and breaking up otherwise uniform content libraries.

And speaking of breaking up content, let’s see what we can do with spotlights!

Spotlights

Typically taking up the entire width of the screen, these eye-catching components will highlight a new feature or a promoted piece of media. In a sea of uniform shelves, they can be placed strategically to introduce aesthetic diversity and disrupt the monotony.

A spotlight can be a focusable element by itself, or it could expose several actions thanks to its generous space. In my ventures into TV design, I relied on a few different spotlight sizes, which allowed me to place multiples into a single row, all with the purpose of highlighting different aspects of the app, without breaking the form to which viewers were used.

Posters, cards, and spotlights shape the bulk of the visual experience and content presentation, but viewers still need a way to find specific titles. Let’s see how search and input are handled on TV.

Search And Entering Text

Manually browsing through content libraries can yield results, but having the ability to search will speed things up — though not without some hiccups.

TVs allow for text input in the form of on-screen keyboards, similar to the ones found in modern smartphones. However, inputting text with a remote control is quite inefficient given the restrictiveness of its control scheme. For example, typing “hey there” on a mobile keyboard requires 9 keystrokes, but about 38 on a TV (!) due to the movement between characters and their selection.

Typing with a D-pad may be an arduous task, but at the same time, having the ability to search is unquestionably useful.

Luckily for us, keyboards are accounted for in all systems and usually come in two varieties. We’ve got the grid layouts used by most platforms and a horizontal layout in support of the touch-enabled and gesture-based controls on tvOS. Swiping between characters is significantly faster, but this is yet another pattern that can only be enhanced, not replaced.

Modernization has made things significantly easier, with search autocomplete suggestions, device pairing, voice controls, and remotes with physical keyboards, but on-screen keyboards will likely remain a necessary fallback for quite a while. And no matter how cumbersome this fallback may be, we as designers need to consider it when building for TV.

Players And Progress Bars

While all the different sections of a TV app serve a purpose, the Player takes center stage. It’s where all the roads eventually lead to, and where viewers will spend the most time. It’s also one of the rare instances where focus gets lost, allowing for the interface to get out of the way of enjoying a piece of content.

Arguably, players are the most complex features of TV apps, compacting all the different functionalities into a single screen. Take YouTube, for example: its player doesn’t just handle the expected playback controls but also supports content browsing, searching, reading comments, reacting, and navigating to channels, all within a single screen.

Compared to YouTube, Netflix offers a very lightweight experience guided by the nature of the app.

Still, every player has a basic set of controls, the foundation of which is the progress bar.

The progress bar UI element serves as a visual indicator for content duration. During interaction, focus doesn’t get placed on the bar itself, but on a movable knob known as the “scrubber.” It is by moving the scrubber left and right, or stopping it in its tracks, that we can control playback.

Another indirect method of invoking the progress bar is with the good old Play and Pause buttons. Rooted in the mechanical era of tape players, the universally understood triangle and two vertical bars are as integral to the TV legacy as the D-pad. No matter how minimalist and sleek the modern player interface may be, these symbols remain a staple of the viewing experience.

The presence of a scrubber may also indicate the type of content. Video on demand allows for the full set of playback controls, while live streams (unless DVR is involved) will do away with the scrubber since viewers won’t be able to rewind or fast-forward.

Earlier iterations of progress bars often came bundled with a set of playback control buttons, but as viewers got used to the tools available, these controls often got consolidated into the progress bar and scrubber themselves.

Bringing It All Together

With the building blocks out of the box, we’ve got everything necessary for a basic but functional TV app. Just as the six core buttons make remote navigation possible, the components and principles outlined above help guide purposeful TV design. The more context you bring, the more you’ll be able to expand and combine these basic principles, creating an experience unique to your needs.

Before we wrap things up, I’d like to share a few tips and tricks I discovered along the way — tips and tricks which I wish I had known from the start. Regardless of how simple or complex your idea may be, these may serve you as useful tools to help add depth, polish, and finesse to any TV experience.

Thinking Beyond The Basics

Like any platform, TV has a set of constraints that we abide by when designing. But sometimes these norms are applied without question, making the already limited capabilities feel even more restraining. Below are a handful of less obvious ideas that can help you design more thoughtfully and flexibly for the big screen.

Long Press

Most modern remotes support press-and-hold gestures as a subtle way to enhance the functionality, especially on remotes with fewer buttons available.

For example, holding directional buttons when browsing content speeds up scrolling, while holding Left/Right during playback speeds up timeline seeking. In many apps, a single press of the OK button opens a video, but holding it for longer opens a contextual menu with additional actions.

With limited input, context becomes a powerful tool. It not only declutters the interface to allow for more focus on specific tasks, but also enables the same set of buttons to trigger different actions based on the viewer’s location within an app.

Another great example is YouTube’s scrubber interaction. Once the scrubber is moved, every other UI element fades. This cleans up the viewer’s working area, so to speak, narrowing the interface to a single task. In this state — and only in this state — pressing Up one more time moves away from scrubbing and into browsing by chapter.

This is such an elegant example of expanding restraint, and adding more only when necessary. I hope it inspires similar interactions in your TV app designs.

Efficient Movement On TV

At best, every action on TV “costs” at least one click. There’s no such thing as aimless cursor movement — if you want to move, you must press a button. We’ve seen how cumbersome it can be inside a keyboard, but there’s also something we can learn about efficient movement in these restrained circumstances.

Going back to the Homescreen, we can note that vertical and horizontal movement serve two distinct roles. Vertical movement switches between groups, while horizontal movement switches items within these groups. No matter how far you’ve gone inside a group, a single vertical click will move you into another.

Every step on TV “costs” an action, so we might as well optimize movement. (Large preview)

This subtle difference — two axes with separate roles — is the most efficient way of moving in a TV interface. Reversing the pattern: horizontal to switch groups, and vertical to drill down, will work like a charm as long as you keep the role of each axis well defined.

Properly applied in a vertical layout, the principles of optimal movement remain the same. (Large preview)

Quietly brilliant and easy to overlook, this pattern powers almost every step of the TV experience. Remember it, and use it well.

Thinking Beyond JPGs

After covering in detail many of the technicalities, let’s finish with some visual polish.

Most TV interfaces are driven by tightly packed rows of cover and poster art. While often beautifully designed, this type of content and layout leaves little room for visual flair. For years, the flat JPG, with its small file size, has been a go-to format, though contemporary alternatives like WebP are slowly taking its place.

Meanwhile, we can rely on the tried and tested PNG to give a bit more shine to our TV interfaces. The simple fact that it supports transparency can help the often-rigid UIs feel more sophisticated. Used strategically and paired with simple focus effects such as background color changes, PNGs can bring subtle moments of delight to the interface.

Having a transparent background blends well with surface color changes common in TV interfaces. (Large preview)

And don’t forget, transparency doesn’t have to mean that there shouldn’t be any background at all. (Large preview)
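
A minimal sketch of that idea, with illustrative class names: a transparent PNG sitting on a tile whose surface colour changes on focus, letting the artwork "pop" without swapping assets:

.tile {
  background-color: #1d1e21;
  transition: background-color 150ms ease-out;
}

.tile:focus {
  background-color: #5865f2;
}

.tile img {
  /* a transparent PNG (or WebP) blends with whatever surface sits behind it */
  width: 100%;
  height: auto;
}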

Moreover, if transformations like scaling and rotating are supported, you can really make those rectangular shapes come alive by layering multiple assets.

Combining multiple images along with a background color change can liven up certain sections. (Large preview)

As you probably understand by now, these little touches of finesse don’t go out of bounds of possibility. They simply find more room to breathe within it. But with such limited capabilities, it’s best to learn all the different tricks that can help make your TV experiences stand out.

Closing Thoughts

Rooted in legacy, with a limited control scheme and a rather “shallow” interface, TV design reminds us to do the best with what we have at our disposal. The restraints I outlined are not meant to induce claustrophobia and make you feel limited in your design choices, but rather to serve you as guides. It is by accepting that fact that we can find freedom and new avenues to explore.

This two-part series of articles, just like my experience designing for TV, was not about reinventing the wheel with radical ideas. It was about understanding its nuances and contributing to what’s already there with my personal touch.

If you find yourself working in this design field, I hope my guide will serve as a warm welcome and will help you do your finest work. And if you have any questions, do leave a comment, and I will do my best to reply and help.

Good luck!

Further Reading

  • "Design for TV," by Android Developers
    Great TV design is all about putting content front and center. It's about creating an interface that's easier to use and navigate, even from a distance. It's about making it easier to find the content you love, and to enjoy it in the best possible quality.
  • "TV Guidelines: A quick kick-off on designing for Television Experiences," by Andrea Pacheco
    Just like designing a mobile app, designing a TV application can be a fun and complex thing to do, due to the numerous guidelines and best practices to follow. Below, I have listed the main best practices to keep in mind when designing an app for a 10-foot screen.
  • "Rethinking User Interface Design for the TV Platform," by Pascal Potvin
    Designing for television has become part of the continuum of devices that require a rethink of how we approach user interfaces and user experiences.
  • "Typography for TV," by Android Developers
    As television screens are typically viewed from a distance, interfaces that use larger typography are more legible and comfortable for users. TV Design's default type scale includes contrasting and flexible type styles to support a wide range of use cases.
  • "Typography," by Apple Developer docs
    Your typographic choices can help you display legible text, convey an information hierarchy, communicate important content, and express your brand or style.
  • "Color on TV," by Android Developers
    Color on TV design can inspire, set the mood, and even drive users to make decisions. It's a powerful and tangible element that users notice first. As a rich way to connect with a wide audience, it's no wonder color is an important step in crafting a high-quality TV interface.
  • "Designing for Television — TV UI Design," by Molly Lafferty (Marvel Blog)
    Today, we’re no longer limited to a remote and cable box to control our TVs; we’re using Smart TVs, or streaming from set-top boxes like Roku and Apple TV, or using video game consoles like Xbox and PlayStation. And each of these devices allows a user interface that’s much more powerful than your old-fashioned on-screen guide.
]]>
hello@smashingmagazine.com (Milan Balać)