97 UX Things

Know These Warning Signs of Information Architecture Problems (feat. Kathi Kaiser)

August 24, 2021 Kathi Kaiser & Dan Berlin Season 1 Episode 12

Kathi Kaiser discusses finding and addressing information architecture problems in your design.

Sponsored by Watch City Research
Watch City Research is your trusted UX research partner


Dan Berlin:

Hi everyone and welcome to another edition of the 97 UX Things podcast. Dan Berlin here, your host and book editor. I'm joined this week by Kathi Kaiser who wrote the chapter "Know These Warning Signs of Information Architecture Problems". Welcome, Kathi.

Kathi Kaiser:

Thanks Dan.

Dan Berlin:

We love to start off with introductions. Can you tell us a little bit about yourself?

Kathi Kaiser:

Sure. I am co-founder and partner at Centralis. We are a user experience research and design consultancy based in the Chicago area. And these days, I spend most of my time running our business operations and working with clients to help them set their research agenda both in the short term and the long term, figuring out what methods and projects make the most sense for the research questions they are facing.

Dan Berlin:

And are there any research methods in particular that you focus on?

Kathi Kaiser:

We run the whole gamut. We like to have a broad toolkit because the nature of the challenges does vary. We do evaluative research focused on products, like usability testing, for example. And also generative research, which is more foundational, based on user needs: ethnographic studies, user interviews, card sorting. And we use both qualitative and quantitative methods.

Dan Berlin:

And can you tell folks about your career trajectory? How did you discover UX? And how did you wind up where you are today?

Kathi Kaiser:

I had a pretty conventional path into user experience, if there is such a thing in such a varied field. I have an undergraduate degree in psychology and a master's degree in social science. Coming out of graduate school, I worked for Doblin Group, an innovation planning firm. And at Doblin, we had the opportunity to use social science methods to study people and figure out how to create innovative products and services for them. Most of my work at Doblin was on physical products in the analog world. But this was the late 90s. And the internet was really coming into its own as a transactional platform. So it was a pretty easy transition to bring that social science background into the technological space, and start working on problems that became known as user experience. While working at a couple of digital marketing agencies, it became clear that there was an opportunity for a firm that focused solely on user experience. It was a new and growing field and there was just so much interest in understanding how people do things and what they need and how design can be matched with that need to provide a usable and fulfilling experience. So my co-founder and I left our jobs and started Centralis in the spring of 2001. We actually started in the carriage house above his garage, which is exciting. And happily, we were successful enough that we were able to get a real office before winter came because there was no heat. It's been 20 years and it's been a lot of fun along the way.

Dan Berlin:

That's really awesome. And 20 years in user experience research, that means you were there on the ground floor, getting this started for the rest of us. That's awesome.

Kathi Kaiser:

Yeah, it was crazy back in the day, having to really invent things as we went. Trying to figure out how do you evaluate usability when you need to travel, for example. So we had a portable usability lab that involved monitors with detachable bases, because you had to bring a monitor; no one used a laptop, that was too unusual. And then, when mobile devices became so important, how do you record what's happening on the screen, and also the gestures and participants' faces? There was a lot of MacGyvering that happened to come up with the best setup and the best tools to do our work.

Dan Berlin:

Yeah. And these are CRT monitors you're talking about right, not flat screens?

Kathi Kaiser:

Well, actually, by then flat screens were just coming out, so we were able to find some. It was too much risk to ship a CRT. Back in the day when we were a small company, just trying to meet our clients' needs and keep the lights on, we didn't have a lot of wiggle room in terms of risk. So we were often doing a project in one city and then hopping on a plane and doing a project in another city the next day. And so in addition to figuring out what technology we needed, how do we travel with it? How do we keep it up to date and running? And how could we get the amount of work done that we needed to do? Because the exciting thing was that there was a crazy amount of demand. And there's still so much interest in understanding how people are using products and services, or how they're doing the tasks of their daily lives and their work. And so there's no shortage of difficult products to use out there. We're just doing our part to try to make the world an easier place.

Dan Berlin:

Great. Great. Well, thanks for that. Digging into your chapter "Know These Warning Signs of Information Architecture Problems." Can you tell us a little bit about that?

Kathi Kaiser:

Sure. I chose this topic because in our years of UX research, we have identified information architecture problems, in particular, as ones that are thorny to diagnose and research, and yet so important in improving the design. If you've got a problem within a page, it's usually pretty straightforward in a usability test session to identify what's happening. Something is mislabeled, it's out of view, or it's not being presented in a way that users would expect to find it. The user can help you unpack that and figure out a solution. But when the problem is how the information is organized, the impacts happen throughout the experience between the screens. And user behavior isn't always as straightforward. In fact, it shows up in these sort of odd behaviors that you think "What is happening here? Why are people doing this?" And I wanted to share some of the things that we've observed so when people see these behaviors in their research, they can go, "oh, maybe the problem is the categories we're using to organize our content, or maybe the problem is our navigation system."

Dan Berlin:

Yep. And that's great, because one of the chapters in our book is "Observing Behaviors is the Gold Standard", and observing behaviors for overt problems is easy enough, but when it comes to organizational and IA problems, those behaviors may not be as directly observable. Can you tell us a little bit about those behaviors?

Kathi Kaiser:

Sure. Some of the things that we see that suggest problems with information architecture are driving in circles, for example. This is when a user is working their way towards completing a task and yet, they are moving through pages in a random, haphazard way, and often revisiting pages that they have already considered. They go from step one to two to three, and back to one again. They may not even realize that they've already seen that page, that they are not progressing towards accomplishing their goal. When we see this, it is a clue that they do not have a good mental map of how the information on the site is organized. They don't understand the different categories, the different rooms that they can visit to get information that helps them progress towards their goal. And it can be a very frustrating and inefficient experience for them. Oftentimes, we'll see this happening over the course of, I don't know, five or 10, 15 minutes sometimes. Then they'll just throw their hands up and go back to the home screen or the start screen of the experience. That's another clue, when people feel like they need to give themselves a clean slate and start over, that the problem is that they can't internalize the structure of your information and therefore they can't use it to accomplish their goal.
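If you have page-view logs from sessions like these, the "driving in circles" pattern can be roughly quantified as the share of page views that return to an already-seen page. This is a minimal illustrative sketch, not from the chapter: the session format is hypothetical, and a high ratio is only a cue to go watch the recording, not a diagnosis by itself.

```python
def revisit_ratio(page_sequence):
    """Share of page views in one session that return to a page
    the participant has already seen."""
    seen, revisits = set(), 0
    for page in page_sequence:
        if page in seen:
            revisits += 1
        seen.add(page)
    return revisits / len(page_sequence) if page_sequence else 0.0

# Hypothetical session that loops back through the same pages mid-task.
session = ["home", "products", "pricing", "products", "home", "pricing"]
```

Sessions with an unusually high ratio are the ones worth reviewing for the circular behavior Kathi describes.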

Dan Berlin:

Yep, that's great. Those are great takeaways that folks can observe in their research. When these problems do arise, how do you get at the user's mental map so that you can fix them?

Kathi Kaiser:

Well, the key is to use moderated sessions. I know that sounds like perhaps a basic no-brainer. But as I was saying earlier, there's such a range of methods and techniques available to UX researchers these days that it's really critical to know what to choose for the problem at hand. And I know that many folks are relying on automated tools. We do quantitative studies ourselves. There are session videos now; there are products that will just record a bunch of live sessions on your site, and you can analyze those. And those types of tools have their place. But the key with diagnosing these information architecture problems is that you have to get at what's going through the user's mind. And the best way to do that is in a one-on-one moderated session. So, it's a standard usability protocol: you give them a task, you ask them to think aloud, and you ask probing questions as minimally as possible and at key moments. Our favorite probes are things like "You said 'huh'?" or "Thoughts?" They're very vague and open-ended, just things to get the user talking, because then you'll hear wonderful nuggets like, "Oh, right, this is the page that I want. I clicked on this link because it promised me X, Y, and Z." And you know, because you're familiar with the design, that's not what that link was meant to indicate at all, right? And they've actually ended up in a different place altogether. So it's having that interaction with the participant. Not that moderated small-sample testing will answer all of your questions; it won't. But it will really help you get to the why, which is so important for figuring out how to fix the problems that you're seeing.

Dan Berlin:

I love that. That's such a great point that moderated sessions get at the "why". And there are all these great tools out there that have their time and place, but you're not going to uncover some answers without asking people directly, and asking "why" multiple times so that you can get at root causes. How about some of those questions? What are some questions that we can ask participants to get at their mental model or to uncover IA problems?

Kathi Kaiser:

Questions in terms of probing questions during sessions?

Dan Berlin:

Yeah, probing questions or even questions that you ask at the beginning of a session in terms of how they think about things, maybe before they see an artifact?

Kathi Kaiser:

Sure, well, there are different situations of use, I suppose. You may be working with a current user, someone who's very familiar with the product. In which case, you would want them to demonstrate what they do on a regular basis and have them nominate the tasks and show you how they go about it. If you're working with someone who is not a user of the system, or it's a new product that doesn't exist yet, then you want to do some hypothesis testing. We work with a lot of user experience specialists who are just coming into the field, and they tend to have a lot of enthusiasm for working with users, a lot of excitement. They know that they need to establish rapport and build empathy, but one area where we think training programs could perhaps do a bit more to prepare people for the field is to help them understand how to generate and test hypotheses. We certainly want to know how people approach an interface on their own, given a general goal. It's very interesting; it's sort of the fly-on-the-wall scenario. But if your question is "Do we have the right categories here? Have we set up the structure of this application in the best way?", then you should have tasks where the target is in each of those areas. And you can combine it with a more open-ended approach: you can do half the sessions one way and half the other. But you want to make sure that you're giving people that specific target and then seeing how they get there. Do they go right there? Great. Then it's probably a decent category that's well labeled. Do they consistently visit a different section because they're interpreting that section slightly differently than you thought? Okay, that's data. You can figure out what changes you need to make to accommodate that. But if people don't have the opportunity to look for something in the categories you've created, then you won't know if they're effective in helping people get there.
So it's important in the test prep phase to identify, okay, what are our hypotheses? Where do we think people might struggle? Where do we hope they will succeed? It's equally important to test things that you believe in and things that you suspect are problematic, and then set up tasks that give people the opportunity to go to those areas. And if they do, great. And if they don't, well, then you know what work you have left to do.

Dan Berlin:

Great. What else? Are there other warning signs of IA problems that we should be thinking about?

Kathi Kaiser:

Sure. There's a baffling behavior we see sometimes, especially when you're facilitating the session and you just want to ask the person, "Why are you doing that?" And of course, you can't: they repeatedly click on a link for the page they're already on. And sometimes it's a poorly placed inline link; that's just bad content design. Why would you inline link to the current page? Sometimes those things slip through. But more often, it's a link in the navigation. So they are on a content page, they're consuming the content, it's maybe a partial answer, or not quite what they're looking for. And then they see the link in the navigation to that page, and there's something about the way that's labeled that is really compelling. And they're like, "Oh, I'm going to go there, that will have my answer." And especially if they've scrolled down, if clicking on that refreshes the page, sometimes they don't even recognize that it's exactly where they had been. And so they start reading all over again and processing the page all over again. And then, I have to say, it's sad but true, they get to that same point and they think, "Oh, wait, no, this link on the side, this is what I want," and they do it all over again. So that could be a problem with the page itself, like it's not the right cut on the information, it's not comprehensive enough, or it's too narrow. Or it could be that the label is misleading and they're expecting a different type of information than is actually provided on the screen.

Dan Berlin:

Yep. Is that a case where we would ask users to, quote, unquote, design for us and ask them how things should be labeled so they can best find them?

Kathi Kaiser:

It never hurts to ask, but we have found over time that participants are pretty bad at labeling things. They often choose things that just aren't practical. They'll choose a five-word phrase for something that you're trying to fit in a left navigation bar. But it's really not so much asking them for the answer as helping them illuminate what the answer needs to include. So you can say, "What would you call this? How would you characterize the information on this page? If you had to pick a phrase to describe it, what would you say?" And when they give you an answer, you say, "Okay, well, what about that is descriptive of what we see here? Tell me more about that," and unpack it. And it's all of that context from which the designers can draw a label, or a couple of labels to experiment with.

Dan Berlin:

Yep. And maybe you find trends across participants when you ask that question, and maybe there's a theme they're circling around in those five-word descriptions.

Kathi Kaiser:

Yeah, absolutely. I would say the one trend to avoid at all costs is the word "resources."

Dan Berlin:

<laughter> The junk drawer?

Kathi Kaiser:

Yes, yes, the internet's junk drawer. We have worked on so many sites that have this amorphous category; we call it a black hole, actually, because people think that everything is in there, because everything is a resource. It's just not informative. It's a non-label label. It's really important to break that apart and label what's really in there, and to group things by the nature of their content, not by a vague label like resources.

Dan Berlin:

Are there other words like that to look out for?

Kathi Kaiser:

That is by far the biggest culprit. <laughter>

Dan Berlin:

And Quick Links.

Kathi Kaiser:

Oh yes, quick links are a problem. "Topics" can go either way, honestly. I've seen "topics" used in the junk drawer fashion, where if you don't have a good taxonomy within your topics, then it's just a laundry list of a bunch of stuff on the site. And you might as well call it stuff. But in other instances, if you work internally to define "Well, is there a set of relevant topics here, into which we can group all of this information, and they're all at a similar level, and they're the same type of thing," then I've seen that work pretty well. It can help people; they may not know what they're going to get when they go in there. But if they see the same types of things, it's all fruit, for example, or it's all vegetables, then it can help them at least get a sense of the range of information provided in that area.

Dan Berlin:

Great. What else? Anything else about your chapter that you're hoping to convey here?

Kathi Kaiser:

Well, I think it's really important for people to be on the lookout for structural problems in an interface, and it's a hard thing to do because it often means that there's at least some going back to the drawing board required. It's less common than it once was, but even these days, people look to usability testing as a rubber stamp. Like, we want to show that we did it right. We want to do some tweaks. And this is when you get the request to do testing a week before launch. And, well, probably there are some tweaks; there always are. But what you really should be doing is... don't squander your time with users. If they help you identify a fundamental problem in the way you've organized things, you need to address that, if not in that immediate launch, then as a fast follow-up, because pretending the problem doesn't exist just doesn't do anyone any good. So I would say, look for these signs as a way to determine whether you have long-term issues in your design that you need to address. And don't be afraid of those because they take more time; they're more fundamental, but they're not unsolvable. Right? We have great methods in our toolkit for addressing IA issues. If you are trying to understand how users think about a body of content, you can do a card sort, where you give them a subset of that content and they make their own categories. And when you look across participants, statistical clusters will emerge that can guide your ultimate design of a new IA. Once you have a proposed IA, you can run a tree test to see how effective it is. You can, again, use stats to compare the efficacy of one tree versus another. We have the tools to develop strong structures in our applications and to evaluate those structures, so we shouldn't be afraid of the problem. And in fact, the longer you don't address it, the more problematic it becomes over time, and then you end up with a Frankensite that is impossible to use.
So yeah, I would encourage people if they see these types of issues to hop on that right away and look to resolve it as soon as possible.
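The card-sort analysis Kathi mentions can be approximated in a few lines: count how often each pair of cards lands in the same participant-made group, then merge pairs that co-occur often. This is a minimal pure-Python sketch, not any particular tool's algorithm; the card names and the 0.6 threshold are made up for illustration, and dedicated tools use proper hierarchical clustering rather than this simple merge.

```python
from itertools import combinations

def cooccurrence(sorts, cards):
    """Fraction of participants who put each pair of cards in the same group.

    `sorts` is a list of card sorts, one per participant; each sort is a
    list of groups, and each group is a set of card names."""
    counts = {frozenset(p): 0 for p in combinations(cards, 2)}
    for sort in sorts:
        for group in sort:
            for pair in combinations(sorted(group), 2):
                counts[frozenset(pair)] += 1
    return {pair: c / len(sorts) for pair, c in counts.items()}

def cluster_cards(sorts, cards, threshold=0.6):
    """Merge cards that co-occurred in at least `threshold` of the sorts
    (simple union-find), yielding candidate categories for a new IA."""
    parent = {c: c for c in cards}
    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]  # path halving
            c = parent[c]
        return c
    for pair, share in cooccurrence(sorts, cards).items():
        if share >= threshold:
            a, b = pair
            parent[find(a)] = find(b)
    groups = {}
    for c in cards:
        groups.setdefault(find(c), set()).add(c)
    return list(groups.values())

# Hypothetical data: three participants sorting five cards.
cards = ["Pricing", "Plans", "Support", "Contact", "Blog"]
sorts = [
    [{"Pricing", "Plans"}, {"Support", "Contact"}, {"Blog"}],
    [{"Pricing", "Plans", "Blog"}, {"Support", "Contact"}],
    [{"Pricing", "Plans"}, {"Support", "Contact", "Blog"}],
]
```

Even this crude co-occurrence view quickly shows which cards participants consistently group together and which ones (here, "Blog") float between categories.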

Dan Berlin:

Yeah. And that's a great way to turn off new users. If they can't use it from the outset, because they can't navigate around, then they probably aren't going to come back.

Kathi Kaiser:

Yeah. And yes, they have trouble finding things; that's sort of the most immediate implication. But what a bad information architecture is saying to the user is, "We don't understand you, and you need to work to understand us." And people just aren't going to do that. The alternative is, if you have a structure that is easy for users to process and it fits with their existing model of the domain, then they feel welcome, like, "Oh, this site gets me." Even if they don't recognize that explicitly, they just feel comfortable. And that's setting the foundation for a much more successful relationship with the users.

Dan Berlin:

Yeah, well put. And thank you for that. The other point you made that I want to emphasize is the usability study at the end of your design process. I think the red flag there is when it's called a User Acceptance Test and that's your one check-in with users; then you're checking in too late.

Kathi Kaiser:

Yeah, a study at that point in time in and of itself is not a bad thing. It is never bad to bring users into the process. But hopefully, it's at the end of a long series of iterative touch points with users from understanding how they do the behavior to begin with, to testing an early prototype and getting feedback, to bringing them in once you have more of the visual aspects and the functional aspects all worked out. But if you skip all that and hope to rubber stamp it at the end, you're going to rubber stamp your problems and that's just not going to help anyone.

Dan Berlin:

Yeah, and that research doesn't necessarily have to add much to project time if it's well integrated into the process. That's one thing we hear: "Oh, research adds so much time." It doesn't have to.

Kathi Kaiser:

We go to great lengths to work with our clients' processes to make sure we get users in when they need to be. People vary, and organizations vary in how much documentation they want about research, for example. You don't need to draft a huge report and presentation and document everything. I mean, it's handy to have that in some contexts; it makes sense to do that. But it's also great to have the designers observe the sessions live, and then debrief with them after the fact and come up with a quick list of the things we need to change. And then the next week, you're testing the next version. We have standing relationships with some of our clients where we're using these iterative cycles to fit in with their Agile processes. And yeah, it doesn't add to the ultimate timeline, but it does add to the benefit of the product, because it gets that user voice in there when it's needed.

Dan Berlin:

Yep. So Kathi, we like to wrap up with a tip for listeners. Do you have a UX tip for folks either breaking into the field or who are continuing with their career?

Kathi Kaiser:

Yes, I do. And in reflecting on this, I think it's rather sad that I need to say this, but I think it's important to say it. I would encourage everyone in their roles in the UX field to ensure that they actually get to work with users. I see a lot of job postings these days, and I talk with a lot of professionals in the field, who feel like they are UX specialists in name but not in practice. They don't have the opportunity to observe users in real time, to design and execute studies, to iteratively test their work. The term user experience, you know, what is the user experience? I think historically it has meant what the user experiences. But it has come to mean, in some contexts, the interface. That we hire people to design the user experience. Well, you design the interface, and then the user has their experience, right? And you need to study that to make sure those things align. So as people are entering the field, or as they're looking at making transitions within the field, I think it's important to always ask the questions, "How often am I going to be working directly with users? What methods will I have available to me to connect with users? How is user research used in the organization? Is it viewed as a welcome, necessary input? Or is it viewed as a threat to timelines and budgets?" And really enforce that need to bring the user into the process. It makes your job easier, and it makes the product, at the end of the day, that much better.

Dan Berlin:

Yeah, that's a great point. And also I hear a lot these days of companies who are reticent to talk to users, because their lawyers are scared one way or the other. And there are ways to work through that and yes, you have to talk to the users for UX.

Kathi Kaiser:

The irony is that we view research as risk management. I mean, if you're going to have a problem with your product, the domain of risk management is identifying and anticipating potential problems and mitigating risk. And you can't do that unless you know how people are going to use your product and where they're going to run into trouble. So yeah, there are sometimes logistical challenges in actually connecting with users, like privacy concerns and things like that. But those are well worth solving, and they are solvable, because those studies may identify much larger risks in the design of your product that could cause greater problems later on. So we always encourage people: it's always better to know than to not know. Because if you know, then you can respond, and you can adapt and adjust.

Dan Berlin:

Yeah. I love that. For folks who are struggling with their corporate lawyers and getting access to users... risk management. I've never thought of it that way, and I love that. So Kathi, thank you so much for taking the time to chat with me today. This has been a lot of fun, and I hope you enjoyed it.

Kathi Kaiser:

Yes, it's been my pleasure.

Dan Berlin:

So Kathi wrote the chapter "Know These Warning Signs of Information Architecture Problems" in the book 97 Things Every UX Practitioner Should Know. This is the 97 UX Things podcast; I hope you enjoyed this episode, and have yourself a great rest of the day. You've been listening to the 97 UX Things podcast, a companion to the book 97 Things Every UX Practitioner Should Know, published by O'Reilly and available at your local bookshop. All book royalties go to UX nonprofits, as well as any funds raised by this podcast. The theme music is Moisturize the Situation by Consider the Source, Joshua Berlin is the podcast transcript editor, and I'm your host and book editor, Dan Berlin. Please remember to find the needs in your community and fill them with your best work. Thanks for listening.