Wells Fargo IA HR Content Migration Project

PROJECT SCOPE

  • Client: Wells Fargo

  • Timeframe: 3 Months

  • My Role: UX Researcher

  • Team: Senior Designer, Internal HR, developers, Deloitte implementation partners

  • Methods: Treejack Testing

  • Tools: Figma, Miro, Airtable, Teams, Optimal Workshop

Background

An updated intranet was slated to launch for the Benefits section of the Wells Fargo HR content migration project. The migration team and implementation partners wanted to understand how team members would locate information within a revised information architecture, which had been informed by feedback from an early card sort on content organization. Given the needs and scope, Treejack testing was determined to be a useful research method for gathering feedback on the site architecture. Although tree testing yields quantitative data, the conclusions are by no means black and white: task success rates are just the first step, and must be interpreted within the context of how much users struggled to reach the right answer (directness) and where they expected the right answer to be (first clicks).

Methodology

We recruited 32 internal team members to complete an online Treejack test using Optimal Workshop. Participants were presented with seven tasks and a predefined tree (IA). Each task described a scenario and the participant's motivation for finding a piece of information; participants then clicked through the tree structure to the location where they expected to complete the task.

Our testing included tasks that targeted:

Key website goals and user tasks

  1. Success rates for primary navigation compared with secondary tasks, establishing a reference point for future testing

Potential problem areas

  1. New categories proposed by stakeholders and team members

Detailed Results

Metrics

Success rate: The percentage of users who found the right category for that task

Directness: The percentage of users who went to the right category immediately, without backtracking or trying any other categories

Time spent: The average amount of time elapsed from the beginning to the end of the task

Path measures:

  • Selection frequencies for each category

  • First click: the category most people selected first

  • Destination: the category most people designated as their final answer

In order to calculate the success rate, you must assign at least one ‘correct’ answer for each task. The success rate for that task indicates the percentage of users who found the correct location in the tree and identified it as the right place to complete that task. Any trials in which users selected a different final location are reported as failures. For example, if, when asked to find information about the New Mexico state library, 67 out of 100 participants selected the correct location, the success rate for that task is 67%.
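The success-rate and directness calculations described above can be sketched in code. The task, tree paths, and participant results below are hypothetical examples for illustration only, not data from the actual study:

```python
# Illustrative sketch of tree-test metric calculations.
# The correct node, paths, and trial data below are hypothetical.

CORRECT = "Benefits > Health > Medical Plans"  # designated correct node for the task

# Each trial: (final destination chosen, ordered path of nodes visited)
trials = [
    ("Benefits > Health > Medical Plans",
     ["Benefits", "Benefits > Health", "Benefits > Health > Medical Plans"]),
    ("Benefits > Health > Medical Plans",  # succeeded, but backtracked from "Pay" first
     ["Pay", "Benefits", "Benefits > Health", "Benefits > Health > Medical Plans"]),
    ("Pay > Insurance",                    # failed: wrong final destination
     ["Pay", "Pay > Insurance"]),
]

def success_rate(trials, correct):
    """Percentage of participants whose final answer was the correct node."""
    return sum(dest == correct for dest, _ in trials) / len(trials)

def directness(trials, correct):
    """Percentage of participants who reached the correct node without
    backtracking or visiting any branch outside the correct path."""
    direct = 0
    for dest, path in trials:
        if dest == correct and all(correct.startswith(step) for step in path):
            direct += 1
    return direct / len(trials)

print(f"Success rate: {success_rate(trials, CORRECT):.0%}")  # 2 of 3 trials succeed -> 67%
print(f"Directness:   {directness(trials, CORRECT):.0%}")    # 1 of 3 trials direct -> 33%
```

As in the example above, a participant who wanders before finding the right answer still counts toward the success rate, but not toward directness, which is why the two metrics must be read together.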

Deliverables

  • Research plan & project brief

  • Research script

  • Product development report

  • Curation of content for MVP (adapted interviews, research documentation, creative elements)

Reflections & Next Steps

  • Product development recommendations based on Treejack testing

  • Working with designers to create finalized wireframes from the results

  • A/B testing on modules

Hindsights

 With hindsight, we would have done some things differently, but insofar as speed, iteration, and agility are fundamental aspects of UX work, it is hard to imagine a process where you know everything you wanted to investigate from the outset. It was easier in the Discovery Phase to chart out a course, because we were working with a static set of information—the site as it existed at a particular time.

As a result, we felt the Discovery Phase went well, though it could have been more focused. As is always the case with assessment, you need to think about what you are going to do with the data, not just collect data because you can.

Once we started testing new ideas, especially prototypes, we did not have a whole course charted out. We saw problems, or now had new portions of the site in a testable state, and so we tested. We fixed what we could, or tried fixes, and tested again.
