But there could be some aspect of the design that would make it perform worse in realistic conditions. We can ask: who was moderating those tests? How much experience did the moderators have?
Even small confounding variables could produce an invalid result. For example, imagine if all of the participants who tested the new version of the product did so in the morning on a Monday, and all of the participants who tested the old version did so in the evening on a Friday. There could easily be something about the timing of the tests that influenced the participants to perform better or worse.
Do we have statistical significance? For the quantitative research, was the difference between the two designs statistically significant? In other words, were the faster task times in the new version reliable and not likely due to random chance? How was time on task analyzed? In many studies, the time on task includes only successful attempts.
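One way to check whether a task-time difference is likely due to chance is a permutation test: shuffle the old/new labels many times and see how often a difference as large as the observed one appears by accident. Below is a minimal sketch in Python using only the standard library; the task times are entirely hypothetical, and a real analysis would also consider sample size, skew in timing data, and success rates.

```python
import random
import statistics

def permutation_test(old_times, new_times, n_iter=10_000, seed=0):
    """Two-sided permutation test on the difference in mean task time.

    Returns a p-value: the proportion of random label shufflings that
    produce a mean difference at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = statistics.mean(old_times) - statistics.mean(new_times)
    combined = list(old_times) + list(new_times)
    n_old = len(old_times)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(combined)
        diff = statistics.mean(combined[:n_old]) - statistics.mean(combined[n_old:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_iter

# Hypothetical task times in seconds (successful attempts only)
old = [92, 105, 88, 110, 97, 101, 95, 108]
new = [78, 85, 90, 74, 82, 88, 80, 76]
p = permutation_test(old, new)
print(f"p = {p:.4f}")
```

With a small p-value (conventionally below 0.05), the speedup is unlikely to be a fluke of this particular sample; a large p-value would mean the quantitative result should not carry much weight in the triangulation.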
The new design was faster than the old one, but were the success rates comparable? What types of errors did people run into?
We should look not just at time on task, but other metrics that were collected during the quantitative study, to see if they all suggest that the new product is better. But as any experienced UX professional will tell you, that sounds easier to do than it really is.
This is part of the reason why a triangulation strategy is so necessary: we can use the quantitative results to interpret what our users say. We need to look at why these people might be responding so negatively to an objectively better product, while the task times in the quantitative study seem to be better. Imagine that thousands of employees perform this task thousands of times per year — at the company level, those efficiency gains add up quickly, and could result in cost savings.
Or maybe they do realize the new system is faster, but those small gains may not seem worth the difficulty of a new workflow. The users of this complex enterprise product have been using it almost every day for work. Some of them have been using essentially the same version of the application for many years. They know how it works.
By changing things, the design team is asking the end users to invest effort to become proficient with the new version. When a new interface is introduced, there will sometimes be an initial loss of productivity. Learning a new interface for a complex task takes time and is less efficient than simply doing the task with the old, familiar interface. Even though in the end the new interface may prove better, (1) people have no way of knowing that when they first start using it; (2) in the beginning, the experience can be worse.
My advice to this team lead was to first consider these reasons behind the user feedback, and then step back and look at the larger picture. When weighing conflicting findings, we have to consider the tradeoffs. We always want users to be effective, efficient, and happy with the products they use.
This new version of the product is very likely to be implemented, regardless of how users feel about it. That could be a potential problem, though — if users hate this new version enough, it could lead to decreased job satisfaction or employee turnover. The team could try qualitative beta testing with new hires, who had minimal exposure to the previous system, and see if their feedback differs.
New hires will not have the same attachment to the old system as more experienced employees and may be less susceptible to affective reactions to change. On the other hand, new hires are also less likely to have as much domain knowledge as people who have been using the system for a while, so they may overlook some important aspects. Or, the team could conduct a systematic learnability study, with multiple rounds of quantitative usability testing that track task time, task completion, and satisfaction over time. This study will give an accurate and complete picture of how user performance and satisfaction change as people gain experience with the new product.
If the new design is truly better than the old one, the team should expect both satisfaction and the performance measures (task time and task completion) to improve over time and eventually reach numbers comparable to or better than the current design's.
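One way to summarize such a learnability study is to find the crossover round: the first round in which the new design matches or beats the old design's baseline on every tracked metric. The sketch below uses entirely hypothetical numbers and round counts, just to illustrate the shape of the comparison.

```python
# Hypothetical baseline measured with the old design
old_baseline = {"task_time_s": 95.0, "completion_rate": 0.92, "satisfaction": 5.8}

# One entry per round of quantitative testing with the new design
# (all values are made up for illustration)
rounds = [
    {"task_time_s": 140.0, "completion_rate": 0.75, "satisfaction": 4.1},
    {"task_time_s": 110.0, "completion_rate": 0.88, "satisfaction": 5.0},
    {"task_time_s": 90.0,  "completion_rate": 0.94, "satisfaction": 6.1},
]

def beats_baseline(r):
    """True once the new design matches or beats the old on every metric.

    Task time should go down; completion and satisfaction should go up.
    """
    return (r["task_time_s"] <= old_baseline["task_time_s"]
            and r["completion_rate"] >= old_baseline["completion_rate"]
            and r["satisfaction"] >= old_baseline["satisfaction"])

crossover = next((i + 1 for i, r in enumerate(rounds) if beats_baseline(r)), None)
print(f"New design matched or beat the old design at round {crossover}")
```

If no round ever crosses over, that is evidence the negative reactions are not just transition pain.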
The study will give a good idea of how much exposure to the new design people need in order to overcome their initial negative reaction. We did one such study for a consulting client. While the details have to remain confidential, I can say that it took a full year before users performed better with the new design than with the old, which they had used daily for a decade.
In the long run, the new design was indeed much better, but the decision to change over required long-term commitment from management.