Applying iterative design principles to the live product

The introduction went well, and now we would like to improve our product continuously. Below you will find mostly excerpts of Tableau visualizations; you can find the corresponding workbook on my Tableau Public account.

First we look at how our product is performing. We designed KPIs in chapter 1 which we use to measure our success. There are different layers we can check:

  • KPIs on business level (depending on business model, e.g. # of purchases, renewal of subscriptions, paying customer satisfaction)
  • KPIs on customer journey level (Acquisition, Activation, Retention, Revenue, Referral)
  • KPIs on feature level
    • Acquisition (Who saw the feature?)
    • Activation (Who used the feature?)
    • Retention (How often did a user return?)
    • Revenue (How much revenue came through the feature?)
    • Collected customer data provides further insights, such as:
      • who used or did not use the new feature
      • when and how a user used the new feature

In this chapter we will focus on KPIs on feature level.

Funnel Analysis

A funnel analysis is a good way to look at performance on feature level. You can see the dropoff rates for each feature and investigate where users stop engaging with our product. In our scenario we have already run a multivariate test, in which we tested new features to improve our product. Let's see how that worked out.

Impact on the conversion rate of users booking a flight (based on previous feature)

Impact on the conversion rate of users booking a flight per each group (out of all users opening the app)
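The dropoff rates behind such a funnel chart can be computed directly from the stage counts. Here is a minimal sketch; the stage names and counts are hypothetical and not taken from the workbook.

```python
def dropoff_rates(stages):
    """Per-step dropoff rate between consecutive funnel stages."""
    return [
        (name, 1 - nxt / cur)
        for (name, cur), (_, nxt) in zip(stages, stages[1:])
    ]

# Hypothetical stage counts, for illustration only
funnel = [
    ("Opened app", 10000),
    ("Searched for a ride", 4200),
    ("Selected an option", 2016),
    ("Completed booking", 40),
]

for stage, rate in dropoff_rates(funnel):
    print(f"{stage} -> next stage: {rate:.0%} drop off")
```

With real per-group counts you would run this once per experimental group and compare the resulting dropoff rates against the control group.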

Before we talk about the meaningfulness of the experiment, we should check whether the results are statistically significant. By doing this, we make sure the likelihood that the results are due to random chance is below a certain threshold (the p value). In order to achieve statistically significant results, it is important to have a large enough group of participants in our experiment. The number of participants needed for the experiment to produce statistically significant results is called the sample size. We will follow this checklist:
  1. Set the null hypothesis: "The results of the control state are not different from the experimental states"
  2. Set the alternative hypothesis: "The experimental states are significantly different from the control state"
  3. Set the confidence threshold => 95%
    • The significance level would be 0.05, but since we run a two-tailed t-test, each tail gets 0.025

There are different ways to calculate p values. One way is to use free online tools. Going through every group, you will see that we have no statistically significant results. Below you see an example for the control group vs. the exp. 1 group:
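Instead of an online calculator, the same check can also be done in a few lines. The sketch below uses a pooled two-proportion z-test, a common choice for comparing conversion rates (the calculators mentioned above may use a t-test instead); the conversion counts are hypothetical.

```python
import math

def two_tailed_p_value(conversions_a, n_a, conversions_b, n_b):
    """Two-tailed p value for the difference between two conversion
    rates, using a pooled two-proportion z-test."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # tail probability of |z| under the standard normal, doubled for two tails
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical counts: control vs. exp. 1 group
p = two_tailed_p_value(52, 2000, 61, 2000)
print(f"p value: {p:.3f}")
```

If the computed p value stays above the chosen threshold, the difference between control and experimental group is not statistically significant.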

We can summarize that the results are not significant. The test condition did not result in a statistically significant increase in conversions, which means any difference between the versions is more likely due to random chance than to the changes themselves. It cannot be said with confidence that the new versions lead to more users booking rides. We should continue to test different versions to see whether one of them can generate a significant result showing higher conversions in the test condition.
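The sample size mentioned above can be estimated before the next experiment starts. This sketch uses the standard two-proportion power formula; the baseline rate, the target lift, and the 80% power default are assumptions for illustration.

```python
import math
from statistics import NormalDist

def sample_size_per_group(p1, p2, alpha=0.05, power=0.8):
    """Approximate participants needed per group to detect a shift
    from baseline conversion p1 to p2 with a two-tailed test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical: detecting a lift from 10% to 12% conversion
n = sample_size_per_group(0.10, 0.12)
print(f"Participants needed per group: {n}")
```

Note how the required sample size shrinks as the expected lift grows: small effects need far more participants to detect.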

Segment Analysis of Funnel

To gain deeper insights we can include user data such as demographic or behavioral segment data. Here is our demographic data:

Combining it with our previous funnel analysis we get this:

Age: There are almost no significant movements between the control & experimental groups. The first dropoff is about 58%, the second dropoff around 36% and the last dropoff at 98%. One thing is a bit off in the 50+ age group: its second dropoff is around 67% (vs. 36% on average).

Neighborhoods: There are no significant movements between the control & experimental groups. The first dropoff is about 58%, the second dropoff at 52% and the third dropoff at 98%. This pattern is visible across all neighborhoods.
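A segmented funnel like the one above boils down to computing per-segment dropoff rates. Here is a minimal sketch in which each user record carries a segment label and the furthest funnel stage reached; the records and stage encoding are hypothetical, not the workbook's data.

```python
from collections import defaultdict

# Each record: (age segment, furthest stage reached: 0 = opened app ... 3 = booked)
# Hypothetical records for illustration only
users = [
    ("18-29", 3), ("18-29", 1), ("30-49", 2),
    ("30-49", 1), ("50+", 1), ("50+", 1),
]

def dropoff_by_segment(users, from_stage, to_stage):
    """Per segment: share of users who reached from_stage but not to_stage."""
    reached_from = defaultdict(int)
    reached_to = defaultdict(int)
    for segment, stage in users:
        if stage >= from_stage:
            reached_from[segment] += 1
        if stage >= to_stage:
            reached_to[segment] += 1
    return {s: 1 - reached_to[s] / reached_from[s] for s in reached_from}

second_dropoff = dropoff_by_segment(users, from_stage=1, to_stage=2)
```

A segment whose dropoff stands far above the average (like the 50+ group here) is the one worth investigating further.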

Take action based on our findings

We noticed a gap for users aged 50+. Next we conducted interviews to gather qualitative data and understand why this segment experiences higher dropoffs. The feedback is very insightful: most of the users have trouble using the app. A more convenient way of booking rides could improve our dropoff rate for age 50+ passengers by 30%. Discussing ideas to solve this issue, we came up with these options as new features:
  • Design an easy and playful instruction wizard
  • Develop features so that home addresses and favorite places can be saved and quickly suggested during the booking process
  • Instead of using the interface via touch screen, provide an option to book a ride through voice commands

Other stakeholders also have features they would like to see as part of our running product. They might have different reasons to argue that their feature is more important than the ones we listed above. The RICE framework helps you prioritize.
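As a sketch of how a RICE prioritization of the three ideas above might look; the reach, impact, confidence, and effort values here are invented for illustration, not real estimates.

```python
def rice_score(reach, impact, confidence, effort):
    """RICE priority score: (Reach x Impact x Confidence) / Effort."""
    return reach * impact * confidence / effort

# Invented scores; real values come from usage data and team estimates
features = {
    "Instruction wizard": rice_score(reach=800, impact=1, confidence=0.8, effort=2),
    "Saved addresses": rice_score(reach=1500, impact=2, confidence=0.8, effort=3),
    "Voice booking": rice_score(reach=400, impact=3, confidence=0.5, effort=8),
}

ranked = sorted(features, key=features.get, reverse=True)
```

Dividing by effort is what keeps the framework honest: a high-impact idea like voice booking can still rank last if it is expensive to build and its confidence is low.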

Next steps

After prioritizing the features we are almost ready to run experiments again. But first we need to think about:
  • A/B or multivariate testing?
  • Tracking user actions during the test:
    1. Track user data
    2. Create an instrumentation plan
    3. Track experiment conversion rates

And all of this has to be done without bias.
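The tracking steps above can be sketched as a tiny instrumentation layer: each user action becomes a structured event, and the experiment conversion rate falls out of the event stream. The event names and fields here are assumptions, not an existing tracking API.

```python
import json
import time

def track(user_id, event, group, properties=None):
    """Serialize one user action as a structured analytics event."""
    return json.dumps({
        "user_id": user_id,
        "event": event,            # e.g. "app_opened", "ride_booked"
        "group": group,            # "control", "exp_1", ...
        "ts": time.time(),
        "properties": properties or {},
    })

def experiment_conversion(events, start_event, goal_event):
    """Share of users who fired start_event and also reached goal_event."""
    started = {e["user_id"] for e in events if e["event"] == start_event}
    converted = {e["user_id"] for e in events if e["event"] == goal_event}
    return len(started & converted) / len(started) if started else 0.0
```

Writing the instrumentation plan first, i.e. deciding which events, names, and properties to emit before the experiment starts, is what makes the conversion numbers comparable across groups.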

With that, we are back at the top of this chapter, looking at the funnel analysis again: the iterative loop continues.