The other half of the job
Making interfaces is cheap now. Deciding what to remove is still hard.
I wrote recently about design debt. This is a follow-up of sorts.
Working with clay is a useful metaphor for how design works. First you build up: you add form, see what could exist, keep possibilities open. Then you remove: cut back, refine, take away everything that doesn't belong. Both phases work on the same object, but they require different mindsets. One is about generating ideas and seeing what could be. The other is about deciding what matters most and being willing to cut the rest.

Knowing what to remove is the real work (Photo by Courtney Cook on Unsplash)
Generative AI has made the building phase cheap and opened it to more people than ever before. This is amazing. But it also means teams face a choice that’s gotten harder to make. Testing before shipping was always an option; the difference is that when a working interface shows up in hours and looks this good, the pull to just ship it is stronger than ever. Do we ship it, or do we test it?
There’s something odd about the removing phase: it produces nothing you can show.
Building generates flows, screens, components. Removing produces decisions, not deliverables. A screen that doesn’t need to exist. A decision the user no longer has to make. A question that stops being asked because the answer is understood from context. These are hard to show in a review, and hard to defend when someone asks what the team has been doing and why things are taking so long. Productive work is measured by what gets made, not by what gets cut.
Reducing is what sharpens the work. It increases app conversion, cuts customer service calls, and reduces maintenance time for engineering. The removing phase was always invisible. Now that the building phase is cheap, fast, and open to everyone, that invisibility has turned into a liability.
When everyone has a hammer…
I’m starting to see a pattern. A team is working through a real user problem; it takes time. Meanwhile, someone else on the same project spins up an AI-generated version of the interface. It seems to solve the problem. It’s more capable than before, looks well-composed, but more importantly, it looks done. Put it in front of users and it falls apart. The building phase had been done beautifully. The removing mindset never made it into scope.
The most common reason isn’t impatience or oversight. It’s that it’s hard to schedule time for a phase that produces nothing demoable, especially when the AI-generated version looks so polished.
The removing mindset doesn’t belong to a job title. Engineers can ask reductive questions. Product managers can. Founders can. The question “does this need to exist?” is available to anyone in the room. What’s happened is that the building mindset has become accessible to more people, while the removing one has stayed just as hard. It requires someone willing to slow down after the building phase, look at what was built, and start cutting. That’s a different mode.
When organisations cut the people who were doing that work, they assume the building was the whole job. It was just the part that was easy to see.
Not all clay needs to set: the living interface
But there’s a version of this where the building and removing phases stop being sequential altogether. Not all products need to be fully built and refined before they reach the audience. Some parts can be shaped and reshaped by the users themselves.
Bluesky just launched Attie, an AI assistant that lets you design your own feed algorithm in natural language. No code, no settings panel: just a conversation about what you want to see. And that’s just the start: the plan is to let users vibe-code parts of the app itself. The interface is yours to reshape. The data underneath is shared across an open protocol, available to any app built on it.
I wrote about this pattern a while back: the idea that the interface belongs to the person using it, while the data belongs to the network. That’s no longer a thought experiment. Products are being built around this idea.
This is where the removing question gets genuinely interesting. When an interface is meant to be reshaped by its user, the removing phase doesn’t disappear. It just moves. Someone still has to decide what the user gets to control, what stays fixed, and where the edges are. That work produces a different kind of object: one that arrives unfinished on purpose, with the right constraints baked in.
And that raises a question we don’t have a great answer to yet. As more interfaces become living interfaces, how do we design guardrails that actually help rather than limit? How do you give someone enough freedom to make the interface theirs, while still guiding them toward what they’re trying to achieve? The removing mindset doesn’t disappear when the user becomes the designer. It just becomes a more interesting problem…
