Product owners and user researchers often grapple with the challenge of gauging the success and impact of their products.
The struggle lies in understanding what methods and types of evaluative research can provide meaningful insights.
Empathy is crucial in this process, as identifying user needs and preferences requires a deep understanding of their experiences.
In this article, we present a concise guide to evaluative research, offering practical methods, highlighting various types, and providing real-world examples.
By delving into the realm of evaluative research, product owners and user researchers can navigate the complexities of product assessment with clarity and effectiveness.
What is evaluative research?
Evaluative research assesses the effectiveness and usability of products or services. It involves gathering user feedback to measure performance and identify areas for improvement.
Product owners and user researchers employ evaluative research to make informed decisions. Users' experiences and preferences are actively observed and analyzed to enhance the overall quality of a product.
This research method aids in identifying strengths and weaknesses, enabling iterative refinement. Through surveys, usability testing, and direct user interaction, evaluative research provides valuable insights.
It guides product development, ensuring that user needs are met and expectations exceeded. For product owners and user researchers, embracing evaluative research is pivotal in creating successful, user-centric solutions.
Now that we understand what evaluative research entails, let's explore why it holds a pivotal role in product development and user research.
By identifying strengths and weaknesses, it becomes a powerful tool for informed decision-making, leading to product improvements and enhanced user experiences:
1) Unlocking product potential
Evaluative research stands as a crucial pillar in product development, offering invaluable insights into a product's effectiveness. By actively assessing user experiences, product owners gain a clearer understanding of what works and what needs improvement.
This process facilitates targeted enhancements, ensuring that products align with user expectations and preferences. In essence, evaluative research empowers product owners to unlock their product's full potential, resulting in more satisfied users and increased market success.
2) Mitigating risk and reducing iteration cycles
For product owners navigating the competitive landscape, mitigating risks is paramount. Evaluative research serves as a proactive measure, identifying potential issues before they escalate. Through systematic testing and user feedback, product owners can pinpoint weaknesses, allowing for timely adjustments.
This not only reduces the likelihood of costly post-launch issues but also streamlines iteration cycles. By addressing concerns early in the development phase, product owners can refine their offerings efficiently, staying agile in response to user needs and industry dynamics.
3) Enhancing user-centric design
User researchers play a pivotal role in shaping products that resonate with their intended audience. Evaluative research is the compass guiding user-centric design, ensuring that every iteration aligns with user expectations. By actively involving users in the assessment process, researchers gain firsthand insights into user behavior and preferences.
This information is invaluable for crafting a seamless user experience, ultimately fostering loyalty and satisfaction. In the ever-evolving landscape of user preferences, ongoing evaluative research becomes a strategic tool for user researchers to consistently refine and elevate the design, fostering products that stand the test of time.
With the significance of evaluative research established, the next question is when to conduct it.
When should you conduct evaluative research?
Knowing the opportune moments to conduct evaluative research is vital. Whether in the early stages of development or after a product launch, this research helps pinpoint areas for enhancement:
1) Prototype stage
During the prototype stage, conducting evaluative research is crucial to gather insights and refine the product.
Engage users with prototypes to identify usability issues, gauge user satisfaction, and validate design decisions.
This early evaluation ensures that potential problems are addressed before moving forward, saving time and resources in the later stages of development.
By actively involving users at this stage, product owners can enhance the user experience and align the product with user expectations.
2) Pre-launch stage
In the pre-launch stage, evaluative research becomes instrumental in assessing the final product's readiness.
Evaluate user interactions, uncover any remaining usability concerns, and verify that the product meets user needs.
This phase helps refine features, optimize user flows, and address any last-minute issues.
By actively seeking user feedback before launch, product owners can make informed decisions to improve the overall quality and performance of the product, ultimately enhancing its market success.
3) Post-launch stage
After the product is launched, evaluative research remains essential for ongoing improvement. Monitor user behavior, gather feedback, and identify areas for enhancement.
This active approach allows product owners to respond swiftly to emerging issues, optimize features based on real-world usage, and adapt to changing user preferences.
Continuous evaluative research in the post-launch stage helps maintain a competitive edge, ensuring the product evolves in tandem with user expectations, thus fostering long-term success.
Now that we understand the timing of evaluative research, let's distinguish it from generative research and understand their respective roles.
Evaluative vs. generative research
While evaluative research assesses existing products, generative research focuses on generating new ideas. Generative methods, such as exploratory interviews and field studies, uncover unmet needs and inspire new concepts, while evaluative methods test how well a proposed solution addresses them. Understanding this dichotomy is crucial for product owners and user researchers to choose the right approach for the specific goals of their projects.
With the differentiation between evaluative and generative research clear, let's delve into the three primary types of evaluative research.
What are the 3 types of evaluative research?
Evaluative research can take various forms. The three main types include formative evaluation, summative evaluation, and outcome evaluation.
Each type serves a distinct purpose, offering valuable insights throughout different stages of a product's life cycle:
1) Formative evaluation research
Formative evaluation research is a crucial phase in the development process, focusing on improving and refining a product or program.
It involves gathering feedback early in the development cycle, allowing product owners to make informed adjustments.
This type of research seeks to identify strengths and weaknesses, providing insights to enhance the user experience.
Through surveys, usability testing, and focus groups, formative evaluation guides iterative development, ensuring that the end product aligns with user expectations and needs.
2) Summative evaluation research
Summative evaluation research occurs after the completion of a product or program, aiming to assess its overall effectiveness.
This type of research evaluates the final outcome against predefined criteria and objectives.
Summative research is particularly relevant for product owners seeking to understand the overall impact and success of their offering.
Through methods like surveys, analytics, and performance metrics, it provides a comprehensive overview of the product's performance, helping stakeholders make informed decisions about future developments or investments.
3) Outcome evaluation research
Outcome evaluation research delves into the long-term effects and impact of a product or program on its users.
It goes beyond immediate outcomes, assessing whether the intended goals and objectives have been met over time.
Product owners can utilize this research to understand the sustained benefits and challenges associated with their offerings.
By employing methods such as longitudinal studies and trend analysis, outcome evaluation research helps in crafting strategies for continuous improvement and adaptation based on evolving user needs and market dynamics.
Now that we've identified the types, let's explore five key evaluative research methods commonly employed by product owners and user researchers.
5 Key evaluative research methods
Product owners and user researchers utilize a variety of methods to conduct evaluative research. Choosing the right method depends on the specific goals and context of the research:
1) Surveys
Surveys represent a versatile evaluative research method for product owners and user researchers seeking valuable insights into user experiences. These structured questionnaires gather quantitative data, offering a snapshot of user opinions and preferences.
Types of surveys:
Customer satisfaction (CSAT) survey: measures users' satisfaction with a product or service through a straightforward rating scale, typically ranging from 1 to 5.
Net promoter score (NPS) survey: evaluates the likelihood of users recommending a product or service on a scale from 0 to 10, categorizing respondents as promoters, passives, or detractors.
Customer effort score (CES) survey: focuses on the ease with which users can accomplish tasks or resolve issues, providing insights into the overall user experience.
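To make the scoring concrete, the NPS categories above can be turned into a single number: the percentage of promoters minus the percentage of detractors. A minimal Python sketch (illustrative only, not tied to any particular survey tool):

```python
def nps(scores):
    """Compute Net Promoter Score from 0-10 ratings.

    Promoters rate 9-10, detractors 0-6, passives 7-8.
    NPS = %promoters - %detractors, so it ranges from -100 to +100.
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
print(nps([10, 9, 9, 10, 9, 7, 8, 7, 3, 5]))  # → 30
```

CSAT and CES follow the same pattern: each reduces a set of individual ratings to one trackable number, which is what makes surveys useful for spotting trends over time.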
When to use surveys:
Product launches: Gauge initial user reactions and identify areas for improvement.
Post-interaction: Capture real-time feedback immediately after a user engages with a feature or completes a task.
2) Closed card sorting
Closed card sorting is a powerful method for organizing and evaluating information architecture. Participants categorize predefined content into predetermined groups, shedding light on users' mental models and expectations.
What closed card sorting entails:
Predefined categories: users sort content into categories predetermined by the researcher, allowing for targeted analysis.
Quantitative insights: provides quantitative data on how often participants correctly place items in designated categories.
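The quantitative insight described above is usually reported as an agreement rate per card: the fraction of participants who placed each item in its intended category. A small sketch, using hypothetical participants, cards, and categories:

```python
from collections import Counter

def placement_agreement(results, expected):
    """Per-card agreement rate for a closed card sort.

    results: {participant: {card: chosen_category}}
    expected: {card: intended_category}, the researcher's mapping.
    Returns the fraction of participants who placed each card in
    its intended category.
    """
    counts = Counter()
    for placements in results.values():
        for card, category in placements.items():
            if expected.get(card) == category:
                counts[card] += 1
    n = len(results)
    return {card: counts[card] / n for card in expected}

# Hypothetical three-participant sort of two cards
results = {
    "p1": {"Invoices": "Billing", "Password": "Account"},
    "p2": {"Invoices": "Billing", "Password": "Billing"},
    "p3": {"Invoices": "Account", "Password": "Account"},
}
expected = {"Invoices": "Billing", "Password": "Account"}
print(placement_agreement(results, expected))
```

Cards with low agreement signal labels or groupings that clash with users' mental models.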
When to employ closed card sorting:
Information architecture overhaul: ideal for refining and optimizing the structure of a product's content.
Prototyping phase: use early in the design process to inform the creation of prototypes based on user expectations.
3) Tree testing
Tree testing is a method specifically focused on evaluating the navigational structure of a product. Participants are presented with a text-based representation of the product's hierarchy and are tasked with finding specific items, highlighting areas where the navigation may fall short.
What tree testing involves:
Text-based navigation: users explore the product hierarchy without the influence of visual design, focusing solely on the structure.
Task-based evaluation: research participants complete tasks that reveal the effectiveness of the navigational structure.
When to opt for tree testing:
Pre-launch assessment: evaluate the effectiveness of the proposed navigation structure before a product release.
Redesign initiatives: use when considering changes to the existing navigational hierarchy.
4) Usability testing
Usability testing is a cornerstone of evaluative research, providing direct insights into how users interact with a product. By observing users completing tasks, product owners and user researchers can identify pain points and areas for improvement.
What usability testing entails:
Task performance observation: Researchers observe users as they navigate through tasks, noting areas of ease and difficulty.
Think-aloud protocol: Participants vocalize their thoughts and feelings during the testing process, providing additional insights.
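Observations like these are commonly summarized with a task success rate and time-on-task. A minimal sketch (the task and timings are hypothetical):

```python
from statistics import mean, median

def summarize_task(observations):
    """Summarize usability-test observations for one task.

    observations: list of (completed, seconds) tuples, one per
    participant. Returns the success rate plus time statistics
    for the successful attempts only.
    """
    successes = [t for done, t in observations if done]
    return {
        "success_rate": len(successes) / len(observations),
        "median_time_s": median(successes) if successes else None,
        "mean_time_s": mean(successes) if successes else None,
    }

# Hypothetical: 5 participants attempt "find the export button"
obs = [(True, 32.0), (True, 48.5), (False, 120.0), (True, 40.0), (True, 55.5)]
print(summarize_task(obs))
```

Median time is usually the more robust headline number, since one struggling participant can skew the mean.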
When to conduct usability testing:
Early design phases: Gather feedback on wireframes and prototypes to address fundamental usability concerns.
Post-launch iterations: Continuously improve the user experience based on real-world usage and feedback.
5) A/B testing
A/B testing, also known as split testing, is a method for comparing two versions of a webpage or product to determine which performs better. This method allows for data-driven decision-making by comparing user responses to different variations.
What A/B testing involves:
Variant comparison: Users are randomly assigned to either version A or version B, and their interactions are analyzed to identify the more effective option.
Quantitative metrics: Metrics such as click-through rates, conversion rates, and engagement help assess the success of each variant.
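To make the variant comparison concrete, conversion rates for A and B are often compared with a two-proportion z-test, which asks whether the observed difference is larger than chance alone would explain. A minimal sketch (illustrative; real experiments should also fix the sample size in advance):

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for conversion rates.

    conv_a/conv_b: conversion counts; n_a/n_b: visitors per variant.
    Returns (z, p_value); a small p-value suggests the variants
    genuinely differ rather than varying by chance.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: variant B converts 120/1000 vs. A's 100/1000
z, p = two_proportion_z(100, 1000, 120, 1000)
print(round(z, 2), round(p, 3))
```

In this hypothetical, the p-value lands around 0.15, a reminder that a visible lift in raw numbers is not automatically a statistically reliable one.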
When to implement A/B testing:
Feature optimization: Compare different versions of a specific feature to determine which resonates better with users.
Continuous improvement: Use A/B testing regularly to refine and enhance the product based on user preferences and behavior.
Now that we're familiar with the methods, let's see some practical evaluative research question examples to guide your research efforts.
Evaluative research question examples
The formulation of well-crafted research questions is fundamental to the success of evaluative research. Clear and targeted questions guide the research process, ensuring that valuable insights are gained to inform decision-making and improvements:
Usability evaluation questions:
Usability evaluation is a critical aspect of understanding how users interact with a product or system. It involves assessing the ease with which users can complete tasks and the overall user experience. Here are essential evaluative research questions for usability:
How was your experience completing this task? (Gain insights into the overall user experience and identify any pain points or positive aspects encountered during the task.)
What technical difficulties did you experience while completing the task? (Pinpoint specific technical challenges users faced, helping developers address potential issues affecting the usability of the product.)
How intuitive was the navigation? (Assess the user-friendliness of the navigation system, ensuring that users can easily understand and move through the product.)
How would you prefer to do this action instead? (Encourage users to provide alternative methods or suggestions, offering valuable input for enhancing user interactions and task completion.)
Were there any unnecessary features? (Identify features that users find superfluous or confusing, streamlining the product and improving overall usability.)
How easy was the task to complete? (Gauge the perceived difficulty of the task, helping to refine processes and ensure they align with user expectations.)
Were there any features missing? (Identify any gaps in the product’s features, helping the development team prioritize enhancements based on user needs and expectations.)
Product survey research questions:
Product surveys allow for a broader understanding of user satisfaction, preferences, and the likelihood of recommending a product. Here are evaluative research questions for product surveys:
Would you recommend the product to your colleagues/friends? (Measure user satisfaction and gauge the likelihood of users advocating for the product within their network.)
How disappointed would you be if you could no longer use the feature/product? (Assess the emotional impact of potential disruptions or discontinuation, providing insights into the product's perceived value.)
How satisfied are you with the product/feature? (Quantify user satisfaction levels to understand overall sentiment and identify areas for improvement.)
What is the one thing you wish the product/feature could do that it doesn’t already? (Solicit specific user suggestions for improvements, guiding the product development roadmap to align with user expectations.)
What would make you cancel your subscription? (Identify potential pain points or deal-breakers that might lead users to discontinue their subscription, allowing for proactive mitigation strategies.)
With example questions in hand, let's look at a real-world case study of evaluative research in practice.
Case study on evaluative research: Spotify
The case study discusses the redesign of Spotify's Your Library feature, a significant change that included the introduction of podcasts in 2020 and audiobooks in 2022. The goal was to accommodate content growth while minimizing negative effects on user experience. The study, presented at the CHI conference in 2023, emphasizes three key factors for the successful launch:
Early involvement: Data science and user research were involved early in the product development process to understand user behaviors and mental models. An ethnographic study explored users' experiences and attitudes towards library organization, revealing the Library as a personal space. Personal prototypes were used to involve users in the evaluation of new solutions, ensuring alignment with their mental models.
Evaluating safely at scale: To address the challenge of disruptive changes, the team employed a two-step evaluation process. First, a beta test allowed a small group of users to try the new experience and provide feedback. This observational data helped identify pain points and guided iterative improvements. Subsequently, A/B testing at scale assessed the impact on key metrics, using non-inferiority testing to ensure the new design was not unacceptably worse than the old one.
Mixed method studies: The study employed a combination of qualitative and quantitative methods throughout the process. This mixed methods approach provided a comprehensive understanding of user behaviors, motivations, and needs. Qualitative research, including interviews, diaries, and observational studies, was conducted alongside quantitative data collection to gain deeper insights at all stages.
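The non-inferiority testing mentioned above can be illustrated in miniature: instead of asking whether the new design beats the old one, the team asks whether it is worse by more than a tolerated margin. A hedged sketch (the metric, numbers, and margin are hypothetical, not Spotify's actual values):

```python
def non_inferior(rate_old, rate_new, margin):
    """Point-estimate non-inferiority check.

    The new variant passes if its metric does not fall below the
    old variant's by more than the tolerated margin. Real analyses
    compare a confidence-interval bound against the margin, not
    the raw point estimate shown here.
    """
    return rate_new >= rate_old - margin

# Hypothetical: engagement dips from 41% to 40% under a
# 2-percentage-point non-inferiority margin → acceptable
print(non_inferior(0.41, 0.40, 0.02))  # → True
```

This framing suits redesigns, where the goal of the experiment is to ship a structural change safely rather than to maximize a single metric.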
The study was authored by Ingrid Pettersson, Carl Fredriksson, Raha Dadgar, John Richardson, Lisa Shields, and Duncan McKenzie.
Best tools for evaluative research
Utilizing the right tools is instrumental in the success of evaluative research endeavors. From usability testing platforms to survey tools, having a well-equipped toolkit enhances the efficiency and accuracy of data collection.
Product owners and user researchers can leverage these tools to streamline processes and derive actionable insights, ultimately driving continuous improvement:
1) Blitzllama
Blitzllama stands out as a powerhouse tool for evaluative research, aiding product owners and user researchers in comprehensive testing. Its user-friendly interface facilitates the quick creation of surveys and usability tests, streamlining data collection. With real-time analytics, it offers immediate insights into user behavior. The tool's flexibility accommodates both moderated and unmoderated studies, making it an invaluable asset for product teams seeking actionable feedback to enhance user experiences.
2) Maze
Maze emerges as a top-tier choice for evaluative research, delivering a seamless user testing experience. Product owners and user researchers benefit from its intuitive platform, allowing the creation of interactive prototypes for realistic assessments. Maze excels in remote usability testing, enabling diverse user groups to provide valuable feedback. Its robust analytics provide a deep dive into user journeys, highlighting pain points and areas of improvement. With features like A/B testing and metrics tracking, Maze empowers teams to make informed decisions and iterate rapidly based on user insights.
3) Survicate
Survicate proves to be an essential tool in the arsenal of product owners and user researchers for evaluative research. This versatile survey and feedback platform simplifies the process of gathering user opinions and preferences. Survicate's customization options cater to specific research goals, ensuring targeted and relevant data collection. Real-time reporting and analytics enable quick interpretation of results, facilitating swift decision-making. Whether measuring user satisfaction or testing new features, Survicate's agility makes it a valuable asset for teams aiming to refine products based on user feedback.
In conclusion, evaluative research equips product owners and user researchers with indispensable tools to enhance product effectiveness. By employing various methods such as usability testing and surveys, they gain valuable insights into user experiences.
This knowledge empowers swift and informed decision-making, fostering continuous product improvement. The types of evaluative research, including formative, summative, and outcome evaluations, cater to diverse needs, ensuring a comprehensive understanding of user interactions. Real-world examples underscore the practical applications of these methodologies.
In essence, embracing evaluative research is a proactive strategy for refining products, elevating user satisfaction, and ultimately achieving success in the dynamic landscape of user-centric design.
FAQs related to evaluative research
1) What is evaluative research and examples?
Evaluative research assesses the effectiveness, efficiency, and impact of programs, policies, products, or interventions. For instance, a company may conduct evaluative research to determine how well a new website design functions for users or to gauge customer satisfaction with a revamped product. Other examples include measuring the success of educational programs or evaluating the effectiveness of healthcare interventions.
2) What are the goals of evaluative research?
The primary goals of evaluative research are to determine the strengths and weaknesses of a program, product, or policy and to provide actionable insights for improvement. Through evaluative research, product owners and UX researchers aim to understand how well their offerings meet user needs, identify areas for enhancement, and make informed decisions based on data-driven findings. Ultimately, the goal is to optimize outcomes and enhance user experiences.
3) What are the three types of evaluation research methods?
Evaluation research employs three main methods: formative evaluation, summative evaluation, and outcome evaluation. Formative evaluation focuses on assessing and improving a program or product during its development stages. Summative evaluation, on the other hand, evaluates the overall effectiveness and impact of a completed program or product. Outcome evaluation examines the long-term effects of a program or product, assessing whether its intended goals have been met over time.
4) What is the difference between evaluative and formative research?
Evaluative research and formative research serve distinct purposes in the product development and assessment process. Evaluative research examines the outcomes and impacts of a completed program, product, or policy to determine its effectiveness and inform decision-making for future iterations or improvements. In contrast, formative research focuses on gathering insights during the developmental stages to refine and enhance the program or product before its implementation. While evaluative research assesses the end results, formative research shapes the design and development process along the way.