
Tandem Roundtable: When Is ‘Done’?

McKenzie Landorf Design
Taher Motiwalla Design
Amy Johnson Tandem Alum

Tandem’s cross-disciplinary team shares a belief in the value of continuous improvement. In our field, you can always interview another user, make another tweak, or find a way to make a great piece of software even better.

But how do you know when you’re done? At what point does additional effort create diminishing returns? In today’s Tandem Roundtable, product designers McKenzie, Amy, and Taher discuss.


McKenzie: The idea of ‘done-ness’ is an interesting one, and it’s a fitting question for the three of us, because as designers we eventually have to decide when to stop, or determine when a piece of work is done.

Amy: It’s every designer’s worst fear to work too long on something and then have it scrapped for some reason, or to push past the ‘done’ stage. As designers, that’s why we have the Agile process. When you join an Agile product team, one thing you should do as a team is define what ‘done’ is: whether that’s a waterfall method, where you have a deadline and work up to that point and then you’re done, or a set of tasks every two weeks, where the definition of done refers to those specific tasks.

Taher: There’s the diverging/converging analogy everyone uses. With a diverging approach you have a nice timeboxed exercise: in a set amount of time, how many options can we throw out there? It’s nice to have that timebox. But when you move to the converging approach, you keep going until you’ve resolved all the major problems that people see. You feel done when there aren’t any major problems left to deal with.

Amy: I agree, especially about timeboxing, because as creative individuals we sometimes don’t put enough structure on our thinking style. We can think outside the box for days, but that might not be efficient! That might get in the way of progress. So timeboxing is something I try to do every day. Before I’m done brainstorming an idea, I set an hour or two to work on it myself or with a pair. Timeboxing is a good tool to get to a ‘done’ state of that stage of the project.

McKenzie: I think timeboxing is a helpful tool because it helps me get to the best ideas. We could think all day and come up with a lot of ideas — but if you know you only have eight minutes, or an hour, that forces your mind to get all the ideas, from worst to best, out of your head.

When it comes to user research, how do you set an appropriate goal for the amount of users to shadow or interview? At what point do you feel confident that you have the perspective you need to move forward?

Amy: First you have to determine if this is a qualitative or quantitative type of user research project. What type of feedback are you looking for? Are you looking for data that’s easy to get in an automated way by sending surveys to 50-100 users?

But if you want to get into the weeds of a user experience, that requires qualitative data, and I’ve seen successful user research sessions include anywhere from five to fifteen users. Many times during these sessions, you end up completing interviews with fewer people than initially scheduled — some of them drop out, so you need to account for that when planning your research timeline.

Taher: When you’re gathering qualitative data, sometimes you have different variables. If it’s a B2B product that you know certain teams interact with, you need to get insights from each team or information about how each team interacts with the product.

Amy: Different teams, different jobs, people with different user types: they all have their own agendas that should be explored during qualitative research.

Taher: On one client project, we were making a product and we knew we needed input from the main team that uses the software (the events team), from the meteorology team, and from the people using it in different areas. We knew we had to cover at least those bases before being ‘done.’

McKenzie, has this come up for you in your work with MEPCOM? Trying to figure out how many users is enough?

McKenzie: With the MEPCOM project, we did a huge round of testing in the fall of 2020, and it was all remote. We started with an initial batch of people, and ended up adding two more similarly sized batches because we realized that we didn’t have enough data. We couldn’t see if there was a pattern with only two or three data points. When you start seeing patterns emerge in the data, that’s one way of understanding if you’re getting to a point of being ‘done’ with that round of testing.

In the different user groups, we had representation from three different sections of the product: Medical, Aptitude, and Processing. So it was important that we get an even number of folks from each of the three sections for a balanced sample. We ended up interviewing about 45 people!

During prototyping, we typically cycle through several iterative rounds before considering a design ‘done.’ How do you know when you’ve iterated enough?

McKenzie: Timing, either the timing of a sprint or a deadline we are given from the client. You’re done, because you’re out of time!

Taher: With client work we have constraints to lean into: I’ll complete as many iterations as I can make in the allotted time. But in my personal work where I set my own deadlines, it’s such a tricky question. I don’t have any hard deadlines, but at the same time, there are diminishing returns over time: how many changes can I make until there isn’t really a difference to users in the new iteration of a screen?

Eventually it comes to a time where you have to test your prototype to see if you’re right in your assumptions about how a user will react. The feedback from that user testing then gives you new life on the diminishing returns curve.

McKenzie: When working with a client, if we find ourselves in conversation loops — where we go back to an older version of what we had designed and realize “no, we already tried that and determined it won’t work” — that’s a sign that we’re overdone; we’re setting an unachievable ‘done’ measure. If you find yourself in a conversation loop, you should evaluate whether you need to conduct more user testing or dig into the requirements more in order to move forward.

What parts of custom software development should be on a continuous basis, and truly are never done?

McKenzie: With our long-standing clients, we redesign software as requirements change and people’s thoughts change: nothing is ever done. Any custom software should be a work in progress. Accessibility, DEI — best practices around those things continue to change, so how could an application ever be truly done? How could it ever be the best or even good, if it’s not continually evolving?

Amy: You can always strive for longevity, but nothing we do on the internet is ever done. And I think that’s the nature of this medium. There’s a definite ‘done’ in print design, and I think that’s what hangs up a lot of design students as they think about specializing in web design — this constant feeling of churning and iteration is unsettling to some people, but at the same time, a good designer is able to be comfortable with that.

Taher: Software is never done, but it can be ‘done for now.’

McKenzie: I’m reading this book right now called Every Good Endeavor, and it describes how rest relates to work. It’s interesting because it talks about how you don’t have to be burnt out or tired before you take rest: rest can be just stepping back to marvel at the work you’ve done so far. It’s nourishing to step back and take a look at what you’ve accomplished, even if it isn’t ‘done.’
