Improving the IA of the Smart Pension member app
Joe Russell
|
2023
This article was originally published on Optimal Workshop and the smart.co Innovation blog.
In this article we explain how we used tree testing to measure and improve the effectiveness of our information architecture.
Information architecture involves making content and features easier for users to find. It has nothing to do with how a design looks aesthetically - it’s all about language, labels and taxonomies.
In recent years the practice of information architecture has fallen out of fashion, which is a shame as you can’t design something successfully without it. If a user can’t find a feature, it’s game over - the feature may as well not exist as far as they’re concerned.
We were recently invited to help out with the redesign of our platform’s scheme member web app. It was straightforward enough to come up with a new information architecture - but how could we be sure that the new design was better than the old one?
Tree testing as a research method
Tree testing is a type of quantitative research where you invite users to carry out tasks using your navigation structure. You record whether or not they complete each task successfully, and then analyse the data to work out if there are any problems in your navigation.
Like all research methods, tree testing has some limitations. It abstracts away your app or website into just a navigation tree. In that way it’s very artificial, but it also, usefully, forces you to focus on one thing - the information architecture.
We used a tool called Treejack, which made the tree testing pretty easy.
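For readers who haven't tried the method: here's a minimal sketch of the mechanic in Python. The tree, task and scoring rule below are all hypothetical (a tool like Treejack handles this for you), but they show what tree testing strips things down to - a bare tree, a task, and a pass/fail on the destination.

```python
# A minimal, hypothetical sketch of how a tree test works. The tree,
# task wording and scoring here are illustrative, not Treejack's
# internals: the UI is stripped away to a bare navigation tree, and a
# task succeeds if the participant's click path ends at a correct node.

TREE = {
    "Home": {
        "Contributions": {},
        "Inbox": {},
        "Account settings": {"Change password": {}},
    }
}

TASK = {
    "prompt": "You want to change your password. Find the place to do this.",
    "correct": {("Home", "Account settings", "Change password")},
}

def is_valid_path(tree, path):
    """Check that the clicked path actually exists in the tree."""
    node = tree
    for label in path:
        if label not in node:
            return False
        node = node[label]
    return True

clicked = ("Home", "Account settings", "Change password")
assert is_valid_path(TREE, clicked)
print("success" if clicked in TASK["correct"] else "failure")  # success
```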
Treejack provides a recruitment service, sourcing participants from a provider called Cint. It's much cheaper than conventional lab user research, which can cost about $100-200 per person all-in; with Treejack, the cost for us was just $10 per person. We recruited 200 participants: 100 tested the old design and 100 tested the new one. We got the results within about 4-5 hours, which was impressively fast.
One of the downsides of recruiting members of the public to do paid tasks online is that some people will fill in any old nonsense just to get to the end, which puts "noise" in the data. To mitigate this, we excluded anyone who abandoned the tasks part-way through, and anyone who took less than 90 seconds to do them all.
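For illustration, that cleaning step looks something like this. The file and column names are hypothetical - Treejack's real export differs - but the two exclusion rules are exactly the ones described above.

```python
# A sketch of the exclusion rules, assuming a hypothetical
# per-participant export. The filename and column names are made up;
# the two filters match the text: drop anyone who abandoned, and
# anyone who rushed through all the tasks suspiciously fast.
import pandas as pd

df = pd.read_csv("participants.csv")  # hypothetical export file

MIN_TOTAL_SECONDS = 90  # faster than this across all tasks looks like noise

kept = df[df["completed_all_tasks"] & (df["total_seconds"] >= MIN_TOTAL_SECONDS)]
print(f"kept {len(kept)} of {len(df)} participants")
```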
Our findings
If you’re interested in doing some tree testing yourself, you might like to take a look at our raw data. Here are the findings from study 1 (using the old navigation) and from study 2 (using the new navigation). In each study, 100 participants were given 13 different tasks (like “You want to change your password. Find the place to do this.”). When we looked at the average task success rate, we were really happy with what we found.
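If you're curious about the arithmetic behind that headline number, here's a minimal sketch, assuming a hypothetical long-format export with one row per participant per task (the real Treejack export is richer, but the calculation boils down to this).

```python
# A minimal sketch of the success-rate arithmetic. The column names
# and rows are illustrative, not the real study data.
import pandas as pd

results = pd.DataFrame({
    "participant": [1, 1, 2, 2],
    "task": ["change password", "find messages"] * 2,
    "success": [True, False, True, True],
})

per_task = results.groupby("task")["success"].mean()
print(per_task)          # success rate for each task
print(per_task.mean())   # the average task success rate
```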
After an initial round of self-congratulatory high fives, we realised that although the averages looked good, there was more under the surface that we needed to pay attention to. Some of the tasks didn’t show any improvement from old to new. Here are some of the most interesting findings.
Finding 1: Burger menus are effective… at hiding things
While now a commonly used and recognised design pattern, burger menus are usually a compromise - somewhere to put the navigation that doesn’t fit anywhere else. Designers know this, but this particular test showed they can also have a really negative impact on findability. This gets worse when you have multiple menus with ambiguous contents. We call this the “lucky dip” problem. Each menu is like a bucket with mystery contents. You don’t want to force users to search through every bucket to find what they’re looking for. It’s time-consuming, and they might find it so frustrating that they just give up.
You can see this principle at work in the video above. Users were given the task “You want to find out if you have any messages from your pension provider. Find the place to do this”. In the old navigation, the user had to open an overflow menu (called “Menu”) in order to find the item that contained their messages. To make it even harder, that item was called “Letters” - an old-fashioned term from UK pension regulations that’s rarely used in apps or websites. As a result, only 42% of participants found it. In the new navigation structure we sensibly did away with the overflow menu and simply listed all the features on the home page. This meant users only had to glance down the list of items to see “Inbox: unread messages (2)”. The placement made it easy to see, and the new label made it easy to understand - so the new structure got a 90% success rate.
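A gap like 42% vs 90% across two independent samples of 100 is far too big to be sampling noise - and if you want to check that kind of thing for your own studies, a standard two-proportion z-test is enough. A minimal sketch, using only the standard library:

```python
# A standard two-sided, two-proportion z-test: is the difference
# between two observed success rates bigger than chance would explain?
# The 42/100 vs 90/100 figures are from the finding above.
import math

def two_prop_ztest(x1, n1, x2, n2):
    """z statistic and two-sided p-value for two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

z, p = two_prop_ztest(42, 100, 90, 100)
print(f"z = {z:.2f}, p = {p:.2g}")  # a tiny p-value: the gap is real
```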
Finding 2: Live excerpts of dynamic content can really help
In the new navigation, we added little excerpts of real data from the user’s account to clarify the labels. For example, retirement age is a very short string of characters (“65 years old”), so it made sense to put it straight into the navigation label rather than forcing users to click through to find out. In some cases, this extra snippet of information seemed to really help with findability. For example, we gave participants the task “You want to find out what percentage of your salary you currently put into your pension. Find the place to do this”. You can see the two navigation designs in the video below. Participants struggled with the old navigation - only 10% of them got it right. In the new navigation, 80% got it right.
Finding 3: Sometimes “technically correct” is the worst kind of correct
The pension industry has lots of archaic terminology from laws and regulations. For example, in the old navigation, we had an item labelled “Manage membership” which allowed people to take a break from putting money into their pensions. The phrase “manage membership” is a bit strange, but it’s technically correct in terms of UK pension terminology.
In the new design we got rid of it and changed it to “Stop paying in: cease membership of your pension scheme”. We hoped that “Stop paying in” was good plain English. It turns out we were only partly right.
We gave participants the task “Imagine you have debt problems and you want to have a break from paying into your pension for a while. Find the place to do this.” You can see the two navigation designs in the video below. It turns out the task success rate for the old design was just 4%, while the new design scored 60%.
This was puzzling - the new design was a clear winner, but at 60% the score still wasn’t that good. Luckily, another piece of research gave us a useful insight into the potential reason why. The search traffic on our help centre website shows that a lot of people search for the term “opt out”, while hardly anyone searches for “cease membership”. In our test, we think people might have been looking for “opt out” and not finding it, even though technically speaking it’s not the right term. In official pension terminology, “opt out” is something you can only do in the first 3 months after joining. After that you can’t “opt out”, but you can “cease membership”, which is similar but not quite the same. Confusing, isn’t it?
So in our next test, we’re going to change the label to “Stop paying in: opt out or cease membership”. At least this way, if people are looking for “opt out”, they’ll be able to see it. Then, on the next page we can explain their options more clearly.
Conclusions
Before we did the tree test research, we were confident that the information architecture in the new design was better than the old design. When the data came in, we realised that while it was better overall, there were still some areas that needed more work. It was incredibly useful to have quantitative data from 200 people to show us where to focus our efforts.
As we analysed the data we started to notice the shortcomings of tree testing. In our new design, the user interface has various elements intended to help users with findability: icons, cards, explanatory text, help content, a chatbot and so on. Tree testing doesn’t acknowledge the existence of any of that - so it isn’t a replacement for other research methods like qualitative user research or analytics. That said, it was amazing in the way it gave us quantitative findability data so quickly and easily.
We’ve decided that tree testing deserves a place in our research toolbox. Having read this article, maybe you’ll feel the same.
© 2024 Joe Russell, all rights reserved.