97 UX Things

If Designing Survey Questions Were Easy, There'd Be No Garbage Data (feat. Annie Persson)

October 26, 2021 · Annie Persson & Dan Berlin · Season 1, Episode 20

Annie Persson discusses how to make the most out of your survey questions.

Sponsored by Watch City Research
Watch City Research is your trusted UX research partner


Dan Berlin:

Hi everyone. Welcome to another edition of the 97 UX Things podcast. Dan Berlin here- your host and book editor. I'm joined this week by Annie Persson, who wrote the chapter "If Designing Survey Questions Were Easy, There'd Be No Garbage Data". Welcome, Annie.

Annie Persson:

Hi, Dan, thank you for having me.

Dan Berlin:

Thanks for coming on the podcast. Can you tell us a little bit about yourself, please?

Annie Persson:

Sure. My name is Annie Persson and I am an Experience Design researcher. I have been working in the industry for a little over 15 years now. In that time, I've worked for small and large software companies, covering products ranging from technical illustration software and data management systems to e-commerce and education technology.

Dan Berlin:

Nice. You mentioned concentrating on research, are there methodologies that you particularly like?

Annie Persson:

I love all of them. In fact, my favorite is interviewing and speaking to people- I absolutely love the people-interaction. I also really enjoy the song and dance of moderating usability testing sessions. There's so much that you're juggling in one session, and I really enjoy the challenge that comes along with that.

Dan Berlin:

Agreed wholeheartedly. That's also my favorite part- you mentioned exactly what I love about it, the juggling and keeping it all straight and doing it right. That can be so satisfying.

Annie Persson:

Yeah, it really is.

Dan Berlin:

Can you tell us about your career trajectory? How did you discover UX? How did you end up where you are today?

Annie Persson:

For me, it actually all began in my undergraduate years at Colorado State University. It was the first year that the university was introducing Technical Communication as a specialization in the Technical Journalism program. What's interesting about that- and this may be surprising to some, or maybe not- is that there was actually only one course that was specifically on writing. The bulk of my curriculum was focused on design for print and digital mediums, usability, and web development. At the time, I didn't really realize that I was essentially getting a UX education before the term User Experience or Information Design was really being used as a title for programs. When I graduated from Colorado State and entered the workforce, it was about three years into my career as a technical writer that I was asked to lead a computer-based training program. As a part of that, my employer sent me to an e-learning workshop. I remember being in that workshop and just falling in love with the storyboarding process, and the sketching, and understanding the behaviors of how we learn. It was from that experience that I decided to pursue a Master's degree in Instructional Design, and that degree led me into Education Technology. I would say the bulk of my career has been in the EdTech space. I was really fortunate to spend a good portion of that time at Pearson Education, where I was on some really progressive project teams. On those teams, I wasn't just focused on writing online help- that was probably only 20% of what I did. Instead, I was designing knowledge-based systems, mining data for user insights, and conducting usability studies for our Student Advisory Board.
Then I got to work directly with customers and be a part of our Design Partner Program, which is when I designed and distributed my very first survey. That's what led me to want to write my chapter, because- oh, did I learn a lot from that experience! I considered myself a good communicator, and when I got my results back, I really had a moment of- oh, well, that's not what I wanted!

Dan Berlin:

It can be really eye-opening, doing your first survey when you just dive in like that, absolutely. I'm guilty as well, many years ago- I'm right there with you. It's a nice segue into your chapter, "If Designing Survey Questions Were Easy, There'd Be No Garbage Data". So fill us in- how can we make better surveys?

Annie Persson:

Essentially, I wanted to write this chapter as a quick reference of best practices that anyone can follow when designing surveys, because there are so many free tools available now, so it's very accessible to us. Creating surveys can be a very quick, easy, and economical means to capture data. On the flip side, that also makes it very easy to get wrong. So, my goal was to create a list of best practices that you can follow- whether you're new to survey design or a seasoned practitioner who needs a refresher- so that we can all avoid some of those common pitfalls that generate what I refer to as garbage data.

Dan Berlin:

That's so important, because it's so easy to get that garbage data with a mis-worded or leading question. What are some of those best practices?

Annie Persson:

One thing that I think is very common for people to do is- we're all inherently aware of time, and we want to get the most out of a survey in the shortest amount of time possible. As a result, there's a tendency to put too much into one question and to combine questions using 'and'. One best practice is to make sure that you don't combine what would really be two questions into one, and to separate that out. When you start to do that, you will see the number of questions that you have grow. So, it's really important that you begin the process by first identifying for yourself- what is it that I want to capture data on? Once you identify what that is, you design your questions specifically around those topics. Another thing I've noticed that's common is overlapping ranges. For example, if you were asked- how many times a week do you walk your dog? You might see options like- '1 to 2 times a week, 2 to 4 times a week, or 4 to 5 times a week.' The problem with that construct is- what if you walk your dog 2 times a week? With options of 1 to 2 times a week and 2 to 4 times a week, whoever has the answer of 2 has two equally valid options to select, so whatever responses you get back, you can't work with. It's just garbage data at that point, right?
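A quick way to sanity-check a numeric scale for the overlap problem described here is to verify that consecutive options neither overlap nor leave gaps. This is a minimal Python sketch- the function name and the example ranges are illustrative, not from the chapter:

```python
# Hypothetical sketch: check that numeric answer options cover values
# without overlap or gaps, so every respondent has exactly one choice.

def validate_ranges(options):
    """options: ordered list of (low, high) inclusive integer ranges."""
    for (lo1, hi1), (lo2, hi2) in zip(options, options[1:]):
        if lo2 <= hi1:
            return f"overlap: ({lo1}-{hi1}) and ({lo2}-{hi2})"
        if lo2 > hi1 + 1:
            return f"gap: values between {hi1} and {lo2} have no option"
    return "ok"

# The overlapping scale from the example: a '2' fits two options.
print(validate_ranges([(1, 2), (2, 4), (4, 5)]))
# A corrected, mutually exclusive version:
print(validate_ranges([(1, 2), (3, 4), (5, 6)]))
```

The same check can run against any closed numeric scale before a survey ships.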

Dan Berlin:

Yeah. It may seem really obvious to folks- don't have overlapping ranges! But you see it often- where there ARE those overlapping ranges, and we want to make sure that doesn't happen in our research.

Annie Persson:

Yes, absolutely. Depending on what the question is, you can either use numerical figures and specify those ranges without overlapping the numbers. But sometimes, if you're trying to capture more attitudinal data, you can also just have a text field, and let people enter in that information for you so they don't have to feel restricted by any specific range that you're providing.

Dan Berlin:

Yep. Can you tell us a little bit about the analysis of that qualitative data that you're getting during surveys and how you go about pulling out insights from those?

Annie Persson:

Absolutely. There are a couple of ways to approach that, but ultimately what you want to do is code whatever feedback you get. If you're trying to get a sense of something aesthetic, or anything that can be binary, you can assign a 'Yes' or a 'No' to it. Sometimes- and I find this specifically with something like the Microsoft Product Reaction Cards- you'll have a sense of: 'These are all of the positive adjectives and these are all of the negative adjectives.' But then you may have a group that is a little bit gray, which you'd put in your 'Maybe' column. So if you're doing something that's a little more attitudinal, or you're trying to get an opinion around an aesthetic, and you're having people enter in their own thoughts- you just need to go and code it according to what you've identified as positive, negative, or somewhere in between. What's really helpful as well is, if you have another researcher to work with, you can help eliminate bias by having more eyes than your own look through it. Another effective measure- and this is definitely in the context of using the Microsoft Product Reaction Cards- is to include your team in the process of identifying what the positive and negative words are, so that you, the researcher, aren't the only one generating that list- it's an actual collaborative effort.
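The coding approach described here can be sketched in a few lines of Python. The word lists below are made up for illustration- they are not the actual Microsoft Product Reaction Cards set, and in practice the team would agree on them collaboratively, as Annie suggests:

```python
# Hypothetical sketch of coding open-ended feedback into positive /
# negative / maybe buckets. The word lists are illustrative only.
from collections import Counter

POSITIVE = {"useful", "intuitive", "clean", "fast"}
NEGATIVE = {"confusing", "cluttered", "slow", "frustrating"}

def code_response(words):
    """Code each word a respondent chose as positive, negative, or maybe."""
    codes = Counter()
    for w in words:
        if w in POSITIVE:
            codes["positive"] += 1
        elif w in NEGATIVE:
            codes["negative"] += 1
        else:
            codes["maybe"] += 1  # gray-area words for a second coder to review
    return codes

print(code_response(["useful", "slow", "quirky"]))
```

Tallying the coded buckets across all respondents gives the overall sentiment picture, with the 'maybe' pile flagged for a second coder.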

Dan Berlin:

You mentioned the compound questions at the beginning, in terms of making sure that your question really asks only one thing. Aside from the word 'and' in a long sentence, are there other things people should be looking out for to catch those compound questions?

Annie Persson:

Well, there are a couple of things you can do. When you've identified your topics, you can approach it by grouping them into themes. That's one way to approach it. When you have them grouped by themes and you've identified the individual questions, there may be a question that's only relevant based on how someone answers- whether it's yes or no, or whatever it is. When you're in a scenario like that, you want to incorporate 'branching', so that you can segue into a related question based on their previous response. That's a really good way to capture something that you maybe originally had combined into one question, and to break it out.

Dan Berlin:

That's a great point- branching is an overlooked feature of surveys. I love the idea of just breaking it out and branching based on a compound question.

Annie Persson:

I love branching. I think it's a great feature as well.

Dan Berlin:

How about some other best practices? You mentioned the compound questions and overlapping ranges- what else?

Annie Persson:

Another thing that is really important is to use very simple and familiar words. You have to keep in mind that people have limited attention spans. When you're creating these surveys, not only do you want to design them so that you're taking up only around maybe 8 to 12 minutes of somebody's time, but you also want to construct them so that people don't have to work hard at understanding what you're trying to ask. A way to accomplish that is to use simple and familiar words- to be as specific and concrete with the terminology as possible, and to make sure that there isn't any degree of ambiguity in the meaning of the language. More often than not, if your wording is clear, people are going to interpret it the way you intended. One of my favorite measures to ensure the quality of your questions- and, in turn, the quality of your responses- is to conduct a pilot test. Pilot tests are hands down the best, most eye-opening way to understand how people cognitively work through a survey. You can do this with your team; you can even recruit family members if you want to. The whole idea is to have somebody talk out loud about how they're reading the question and their thought process in understanding the options presented to them. What's interesting is that you're not just getting a sense of how they understand what you're trying to communicate- you're also getting insight into the interaction design of those answer options. That's very valuable to know as well.

Dan Berlin:

Yep. Sounds like you're essentially usability testing your survey.

Annie Persson:

Essentially, yeah!

Dan Berlin:

It's a great use of mixed methods there. You mentioned making the writing as easy to read as possible. Do you ever use Flesch-Kincaid or a certain reading level to make sure that it's as readable as possible?

Annie Persson:

That is a good question. I personally haven't- I've just relied on pilot testing. But, in my former days when I was doing UX writing, I was definitely looking at leveraging tools that would measure what the reading level is, because you always wanted to target a specific reading level to meet mass audiences and literacy. But you absolutely can do that- that's an excellent point. I'd absolutely think that'd be a great thing to do to ensure that you are reaching a reading level that's appropriate for the masses.

Dan Berlin:

Do you know of any ways that folks can do that easily?

Annie Persson:

There are tools out there that do that. Even something as simple as Microsoft Word will do it for you. I don't recall offhand whether Qualtrics has something like that.
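For reference, the Flesch-Kincaid grade-level formula that such tools implement can be sketched directly. This version uses a naive vowel-group syllable counter, so it will only approximate what Word or a dedicated readability tool reports:

```python
# Rough sketch of the Flesch-Kincaid grade-level formula:
#   0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
# The syllable counter is a naive approximation (vowel groups per word).
import re

def count_syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words))
            - 15.59)

# Short, familiar words keep the grade level low.
print(round(fk_grade("How many times a week do you walk your dog?"), 1))
```

A lower grade level means the question is readable by a broader audience, which is exactly the goal for survey wording.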

Dan Berlin:

I always forget that Word will do that for you. It's a buried feature, but you can get that.

Annie Persson:

Yeah, it's really fascinating. If you poke around and look at it, it even includes, I believe, the voice and tone of your writing, which I think is really interesting. I'd love to know how they measure that.

Dan Berlin:

So, Microsoft Word is rating your voice and tone-

Annie Persson:

-yes, I know right?

Dan Berlin:

Getting back to surveys, what are some other best practices that you want to convey here?

Annie Persson:

Another one to think about is creating your response options. If you're going to have a more quantitative study, you would definitely want to keep your responses all closed. So your responses would be something to the extent of employing Likert scales, or you might want to do multiple choice- but you don't want to mix in too many types of response options. If you have a ranking scale and a Likert scale and multiple choice, you're really just forcing the person who's taking this to have to think through different response options. The easier and simpler you can make it on them, the better the quality of the answers that you're getting. Another important thing to be conscious of, I think, is just making sure that you are very wary of the time. If you want to include open-ended questions, you have to consider- in one minute, what can people answer? Maybe they can answer 3 to 5 closed-ended questions in that one-minute timeframe, but it could take up to a full minute for somebody to answer an open-ended question. So, you need to factor in all those elements, which again goes back to pilot testing, because that's a really great way to get a sense of- 'okay, I've designed this survey, and I'm guessing from the amount of questions I have and the mixture of open-ended questions I've included, that it'll take about 10 minutes.'

Dan Berlin:

Right.

Annie Persson:

Well, you're going to pilot test that, and you might find it takes people only maybe 8 minutes to complete, or it might take 12 or 15. So that's another way to gauge where you may need to scale back.
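The rough timing arithmetic above can be captured in a tiny helper. The rates are the ballpark figures mentioned in the conversation (3 to 5 closed questions per minute, up to a minute per open-ended question), and pilot testing should always override the estimate:

```python
# Back-of-the-envelope survey length estimator using the rough rates
# discussed above. Defaults are illustrative assumptions, not research.
def estimate_minutes(closed_questions, open_questions,
                     closed_per_minute=4, minutes_per_open=1.0):
    """Guess total completion time before pilot testing confirms it."""
    return (closed_questions / closed_per_minute
            + open_questions * minutes_per_open)

# 24 closed + 4 open-ended questions: roughly a 10-minute survey.
print(estimate_minutes(24, 4))  # → 10.0
```

If the pilot comes back at 12 or 15 minutes instead, that is the signal to cut questions.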

Dan Berlin:

Yep. Good point about the difference between how many questions a person can tackle within a minute's timespan, whether it's essentially quant or qual. Shoutout for saying 'Likert' correctly!

Annie Persson:

Haha, right?

Dan Berlin:

It has to be said in this particular podcast that it is 'lick-urt' and not 'lie-kurt'. Shoutout to Tom Tullis for teaching generations of Bentley students that, because he actually contacted the Likert family in order to find out the correct pronunciation. So I think a bunch of us are particular about that today.

Annie Persson:

Yes we are, and thank you for mentioning it. It's something I keep in mind whenever I hear it said. It's a really fun factoid to carry with you, for sure.

Dan Berlin:

Yes. That said, though, tell us about labeling Likert scales. Do you have any thoughts on best practices for labeling these to convey what they mean?

Annie Persson:

Yes. There's evolving literature out there. In fact, I think there was recently something published that runs counter to a practice I personally like to employ- which is to keep it between 5 and 7 options for a Likert scale. I did recently see an article saying that having up to 10, or even as many as 20 options, really doesn't impact it. That's the nature of our industry- which is great- there's always going to be conflicting literature out there, and it's important for us to continue testing all of these practices. For the purposes of today's discussion, I would suggest staying within the 5 to 7 range for any Likert scale that you're going to use.

Dan Berlin:

Agreed. 20 sounds like a lot.

Annie Persson:

Right? Another part that I think is important for people to know- and I see this happen a lot- is that sometimes participants want to please, right? They want to do what's socially acceptable. You don't want to give people the option to say 'I don't know'. There's definitely a lot of research out there showing that people will select it out of either ambivalence or self-protection. So another way to approach that is to instead give the option of 'Not applicable', or just 'Don't use'. That's a nice way, because then people won't skip over the question, and you've resolved the possibility of them just answering to satisfy or please.

Dan Berlin:

Right- or that they don't have a better choice. I've seen surveys where there is no 'No' option, or 'I don't do this' option. People forget about that when designing surveys.

Annie Persson:

Absolutely, they do. This is an interesting example- because you mentioned Tom Tullis- I was actually just in a conversation with his daughter, Cheryl. She was sharing with me something she noticed in a survey question identifying gender. Now that we're in this awesome age of being so much more inclusive and gender-aware, this particular survey did not just list out the multiple-choice options for you to select from- it also allowed you to select more than one option. I thought that was incredibly awesome. I have never encountered that myself personally. Kudos to Cheryl for sharing that. I think that's a really cool approach to think about when you're constructing your survey.

Dan Berlin:

Right, it's not necessarily a radio button that we should be thinking about anymore. It is that multi-select checkbox.

Annie Persson:

Absolutely. Just to build on that, one thing that I personally like to do with surveys is- to take them! Any opportunity I have to take one, I will actually click that button and proceed- because I'm curious and I want to see how they designed it. It's a really great way to identify either good practice or bad practice. Unfortunately, I think the survey design world can get a bad rap because there are so many bad surveys out there. A lot of people may not take the time to conduct a pilot test, and the outcome of that is that we tend to just sort of ignore surveys. Because we've all encountered a lot of bad ones, the response rate suffers as a result. Hopefully, this chapter will provide some of those best practices and tips for people to be mindful of, so that we can make a difference and start a trend of creating better-designed surveys- so that folks don't feel so dismissive anytime a prompt comes up asking for a few moments of their time.

Dan Berlin:

I wish I'd had something like this- quick and useful- when I first started survey-building years ago, so thank you for all of this.

Annie Persson:

Yes, absolutely.

Dan Berlin:

In our final segment here, we like to get a career tip for our listeners, whether they're first breaking into UX or have lots of experience. Do you have a career tip for folks?

Annie Persson:

Yeah, I do. My advice is to be the CEO of your career development. What I mean by that is, you don't want to limit your experiences to the projects that come either from school, if you're still studying UX, or from your current job. There are going to be limitations, and you don't need to limit your experience to what you're doing in your work or school, because UX is all around us, right? You can create a use case from the opportunities that present themselves to you- from the experience you have with your home appliances, to going through a drive-through at a restaurant, or even just grocery shopping. At the end of the day, it's all about your thought process and how you've approached either research, UX writing, or design. I think it's so important to expand outside of what you currently do, and to continually develop yourself and look for those opportunities. I only say that because I know, for me personally, having been in Educational Technology for some time, I didn't want to become saturated in that industry. So I was always seeking opportunities outside of it- so I had more breadth as a researcher. I could take examples from different industries and different approaches and bring that into how I think about my problem solving.

Dan Berlin:

Absolutely. There's so much we can learn from research and design in different domains that we can apply to our own work in the future. That's what it's all about- as you said, design is all around us, good and bad- learning from it and applying it.

Annie Persson:

Absolutely. We become more critical the more we're in this industry, don't we? Because we're aware!

Dan Berlin:

I always say that- this job ruins you.

Annie Persson:

Right. But you're surrounded by the best people in the world- that's what I love about our community- everybody is so generous about giving, whether it's giving time to mentor, or sharing ideas and how they approach things. I think it's important to give back and to develop those relationships. It's absolutely the best industry you can be in.

Dan Berlin:

I agree wholeheartedly. We're very lucky to be in an industry where people are so giving. That's what we're doing- we help people and so we help each other!

Annie Persson:

Yes, absolutely. That's what we do, you're right.

Dan Berlin:

Thank you so much, Annie. This has been a wonderful conversation. Thank you for joining me today.

Annie Persson:

Thank you, Dan- this has been wonderful. I was really excited about talking to you today. So, thank you so much!

Dan Berlin:

My pleasure. This has been fun. My guest today has been Annie Persson, the author of "If Designing Survey Questions Were Easy, There'd Be No Garbage Data". I'm your host, Dan Berlin. Thanks for listening. You've been listening to the 97 UX Things podcast, a companion to the book '97 Things Every UX Practitioner Should Know', published by O'Reilly and available at your local bookshop. All book royalties go to UX nonprofits, as well as any funds raised by this podcast. The theme music is 'Moisturize the Situation' by Consider the Source. And, I'm your host and book editor, Dan Berlin. Please remember to find the needs in your community and fill them with your best work. Thanks for listening.