
Week 2: User Research

Updated: May 3, 2023

There was a lot of content this week and a lot for me to unpack regarding user research.

Frank Spillers on User Research


The first video was by Frank Spillers from Experience Dynamics. In it, Spillers references the confusion that persists when discussing User Experience with the uninitiated, where UXD is interpreted as simply User Interface (UI) design. This misrepresentation of the term has been around for ages and puts an overwhelming emphasis on the visual part of the experience and the on-screen elements the user can interact with directly. Although this is a crucial element of many UX processes, it is only a fraction of the discipline as a whole. I myself have been a victim of this in the past. As a web designer many years ago, back-end developers would sardonically call me a "pixel nudger" or "the man with the crayons". That dismissive attitude didn't frustrate me as much as it should have, because I didn't appreciate the breadth of the discipline myself at the time. I now find it important to advocate for the discipline and waste no time defending it to others who have a simplified view of its application in the field of digital design.


Spillers also discusses IXD (Interaction Design) in relation to UXD. This area particularly interests me as it ties into Norman's three levels of processing - reflective, behavioural, and visceral (Norman 2013: 50) - with interactions dealing mainly with the behavioural and visceral levels. Interaction design was one of the main things that attracted me to UX, after reading portions of Cooper's About Face (Cooper et al. 2014), a behemoth of a book on interaction design. Spillers's video also touches on other elements that remind me of this book, particularly when he mentions context research and goal-centred design, a process similar to Cooper's Goal-Directed Design (Cooper et al. 2014: 23) (fig. 1).

Figure 1: Clarke 2023. Cooper's Goal-Directed Design

Looking at Cooper's process again after all the course content I have received over the last year, I can see its shortcomings in regard to Agile processes; it reads more like a process akin to the Waterfall methodology of software development. Each stage is very regimented and leads on to the next, rather than feeding back on itself as in Lean UX or Lean Startup. I appreciate that Cooper is a powerhouse in the world of UX, so I shan't be totally dismissive, but I'm beginning to see where his process may not be practical and where it is best to use alternative methods. Cooper is itself a creative agency, where the interaction with the client may be transactional or on a project-by-project basis, with deliverables and fixed project scopes (the triple constraints of cost, time, and scope may all be pre-defined). In this context, Goal-Directed Design delivers a user-centred approach while reducing the opportunity for future learning, which may be perfectly acceptable to the client but ultimately may not deliver continual product evolution and better experiences for users when new insights present themselves post-handover.


After the video, we were asked to think of a time when we had used a product or service that did not seem to be solving the right problem. I decided to launch into a rant about my recent experience with Amazon. I admit that I went off on a tangent here, and it's important that I acknowledge this. Instead of staying on point and attempting to understand the problem the product was trying to solve, or how more user research may have helped, I vented. The issue stemmed from my being charged £95 for an accidental click; I wanted to highlight this as a dark pattern but didn't really suggest ways it could have been mitigated. LJ rightfully pointed out that the phrase 'dark pattern' is not considered inclusive and I should be using 'deceptive design' instead. This highlights that inclusive design is not just about the things we actually design, but also the way we discuss design issues, so although I missed the main point of the exercise I was still able to take away something important from it.


Video 2.1 Intro to User Research


The second video was by Clementine Brown and was an introduction to user research. The first portion discussed the differences between Generative and Evaluative, Attitudinal and Behavioural, and Quantitative and Qualitative research.

Looking at my own experience off the back of UXO740, my primary research was both Generative (finding opportunities to define the problem and innovate) and Evaluative (testing your solution and evaluating existing designs), but it was predominantly Attitudinal (what people say, measuring stated beliefs) rather than Behavioural (what they do, based on observation). Behavioural research reveals additional insights as it's based on actual events rather than filtered through the prism of interpretation. On Norman's levels of processing, insights from Behavioural research would map to the visceral and behavioural levels (no surprises there), whereas Attitudinal research would be more in line with reflective processing.


The video also spoke about featuritis or feature creep, where after a certain number of features are introduced the user's satisfaction with the experience is diminished. I can see how this relates to Hick's Law, where the time it takes to make a decision increases with the number and complexity of choices available (Yablonski 2020: 23). This has ramifications for cognitive load, analysis paralysis, and user satisfaction, but also means that users may only ever use a handful of features. Instead of adding a multitude of features, the core offering should be taken into consideration and refined and improved through iteration. User research should be used in this instance to mitigate the creep in the first place by addressing real needs.
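For my own reference, Hick's Law is often expressed as a simple logarithmic formula (this is the general Hick-Hyman formulation rather than anything quoted in Yablonski's chapter):

T = a + b \log_2(n + 1)

where T is the time taken to make a decision, n is the number of equally likely choices, and a and b are constants measured empirically. The logarithm explains why a couple of extra options doesn't double decision time, but a bloated interface still steadily erodes it.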


The video finished with an overview of the types of qualitative and quantitative research that can be undertaken:


Quantitative Research Methods

  • Surveys

  • A/B testing

  • Product or web metrics

  • Customer support metrics

Qualitative Research Methods

  • Contextual Inquiry - focusing on the user's interaction within their context/environment.

  • User Interviews

  • Usability testing

  • Card Sorting

  • Diary Studies

  • Task Analysis - Designers walk through the steps by observing actual users in a usability test

I must admit that Task Analysis looks like a very labour-intensive process, painstakingly documenting each minor interaction the user has with a product. How neurotic should we be about each click and pause the user takes? Are we sure the user is pausing because of confusion, or is the pause an important part of the process? Is frictionless really the be-all and end-all of UX, or do we want moments where the user gets to engage at a deeper level? I appreciate that some tasks warrant this level of scrutiny: tasks can be unnecessarily complex, and reducing steps can reduce a user's cognitive load. You can also identify pain points and possible alternatives, but I feel that context is key when analysing the interactions observed, and not all tasks require that level of granularity.


Video 2.2 Contextual Inquiry


Video 2.2 took a deeper dive into Contextual Inquiry as a method. This reinforced the benefit of combining user interviews and observation. There was an emphasis on observing the whole interaction and watching out for subtle inflections in interviews, or signs of confusion or hesitation. There was also a reference to building trust with the interviewee. This was something I tried to do during UXO740, where I reassured the participant that they were not being assessed and that their insights were crucial to the study.


The content also referenced why we can't just ask people what they want from a solution or product. This reminded me of Eric Ries, who says the idea is not to capitulate to what users thought they wanted or to tell them what they ought to want (Ries 2011: 50). Instead, we should respect their insights and use them to inform our design decisions. Users aren't experts at coming up with solutions but will have a go at trying, which is another thing I found when interviewing participants during UXO740.


One final observation from this video concerned the use of language when interviewing participants: open-ended questions and silence. I feel this might be a struggle for me! I am naturally a chatty individual who finds uncomfortable silences unbearable, so I may struggle to leave pauses in the interviews to encourage participants to share more insights. As for open-ended questions and the Six Sigma 5 Whys technique, I may also need to actively remember to use them, as I'll be less inclined to do so naturally when I want the conversation to move forward at pace. I need to remember that there may be insights arising from things the participant reveals that are not on my radar and could shift the inquiry in an unexpected but fruitful direction. It's going to be a delicate balancing act: ensure the participant is comfortable enough to share, but leave uncomfortable moments so they fill the vacuum with insights!

Video 2.3 Synthesising User Research


After gaining insights, it's time to put them to work. This is the bit I was looking forward to: seeing the patterns in the data. The content in this video reminded me of GDO710, particularly the creativity exercises in week 2 and opposite thinking. It's less to do with the actual process and more to do with trying to see patterns across disparate insights. This came in the form of Affinity Mapping.


Affinity Mapping

I'd heard about affinity mapping (and a similar technique called empathy mapping) before, but I have never done one. With affinity mapping, it's best to get at least five participants to discuss a topic or theme and then capture the insights onto post-it notes. It's even better as a team activity, as different people may draw different insights from the same study. From there you need to look for themes and trends and group the related insights, for example by:

  • Attitudes

  • Sentiments

  • Problems

  • Requests

  • Behaviours

  • Needs

  • Goals

While digesting the content, part of me couldn't help but be apprehensive. What if I can't see patterns in the insights I collect this week? Or what if I have too few insights to really know which are common themes and which are isolated to one individual? I feel pressure to ensure the questions I formulate are open-ended enough to give volunteers space to explore, but also to ensure that any answers I get will be relevant and potentially lead to themes. I feel like some of it will be down to luck.


"I" Statements, Problem Statements and How Might We...?

The content goes on to discuss "I" statements and problem statements to help frame the themes into useable insights. "I" statements help you turn the themes into something more compelling than basic observations. By starting a sentence with "I" (i.e. I am . . ., I don't . . ., I like to . . .) you can test whether the insights sound credible based on your interviews. These are not direct quotes but something you feel would be indicative of a persona based on the research. I expect this is the genesis of the persona.


Problem statements are a high-level description of the core unsolved problem(s) the intended users of your site, app, or system currently experience (Brown 2021). This can be expanded to sites, apps, or systems that aren't yours, or to a broader theme or topic. They usually look like this:


_____ [ user ] _____ needs _____ [ user need ] _____ because _____ [ insight / why? ] _____.


They need to include the user and their goals, and be written from the user's perspective. They need to draw on insights from the research but not propose solutions.


Finally "How might we" is a question that helps explore ideas or potential solutions fo for the problems we need to solve. From the problem statement, we need to turn the negative into a positive opportunity. To do this we should:


  1. Revisit the problem - by looking at the problem statement

  2. Define the Audience - who are we designing for?

  3. Break the problem or challenge into smaller pieces.

  4. Review - are the statements aligned with our research and audience?


So, for example, given the problem statement:

A mother of 3, waiting at the airport gate, needs to entertain her playful children because "annoying little brats" only irritate already frustrated fellow passengers.

Examples of "How might we" questions for this could include:

  • How might we amp up the good?

  • How might we remove the bad?

  • How might we explore the opposite?

  • How might we question an assumption?

  • How might we go after adjectives?

  • How might we break POV into pieces?

Reflections

There is a lot of material this week; I think I'm going to get the "I" statements, problem statements, and How Might We questions mixed up a little. At this stage, without trying them first, I can't contextualise them, but I had a similar issue with the techniques in the first module and it worked out fine once I tried them. It's interesting to me that I'm reluctant to get stuck into the practical work this week. Again, I've had setbacks with timeframes and that's put me in a spin. As a kinesthetic learner, you'd think I'd be really keen to get my hands dirty with some new methods, but I find the knowledge that these blogs and the challenge activities are partly assessed somewhat paralysing. I know I need to do them, but what if I mess up or misinterpret the point of them? I know practice makes perfect, but I'm also coming off the back of three modules where I have achieved higher grades than I had ever expected. I feel the burning pressure to ensure that this module is as good as the last ones, and I've not had the best start. It goes to show that when things are going well it only takes a nudge in the wrong direction to derail my confidence. Therefore, I'm going to set a SMART goal that I will start the challenge activities as soon into the week as possible moving forward:


I will ensure that I tackle the challenge activities as soon as I possibly can each week and have something to show in the Spark forum no later than Wednesday of that week.



References

 

BROWN, Clementine. 2021. ‘Week 2: Synthesising User Research.’ Canvas, Falmouth University [online]. Available at: https://learn.falmouth.ac.uk/courses/283/pages/week-2-synthesising-user-research?module_item_id=29877 [accessed 1 Feb 2023].


COOPER, Alan, Robert REIMANN, David CRONIN and Christopher NOESSEL. 2014. About Face: The Essentials of Interaction Design. 4th edn. Indianapolis: Wiley.


NORMAN, Donald. 2013. The Design of Everyday Things - Revised and Expanded. 2nd edn. New York: Basic Books.


RIES, Eric. 2011. The Lean Startup. London: Portfolio Penguin.


YABLONSKI, Jon. 2020. Laws of UX: Using Psychology to Design Better Products & Services. 1st edn. Sebastopol: O'Reilly.



Figures

 

Figure 1: CLARKE, Daniel. 2023. Cooper's Goal-Directed Design


