We took the existing transactional flows within the app, looking at both successful (post-purchase) and unsuccessful (bounce-back) flows.
We analysed the different steps of the user journey, assessing each step and how the user would feel emotionally at each moment.
- After making the payment (Intermediated)
- After receiving the package (Intermediated)
- After reviewing the seller (F2F & Intermediated)
- After the purchase is rejected or has expired
- After delivery is unsuccessful
- User asks for a refund / user receives a refund
We took the use cases and identified opportunity spaces within the flow, either as separate screens or by incorporating them into existing ones. Based on these spaces, we identified and matched the key behavioural principles that would be triggered at those specific moments.
After buying, people tend to feel more confident about their decision. It may be beneficial to validate their choice and encourage another purchase.
Once a user owns something, they value it more. Suggesting related accessories or complementary products makes them more likely to invest further.
Users remember the final moments of an experience; if they are leaving a good review, they are in a positive emotional state in which suggested items can drive further engagement.
Rather than analysing the whole flow, in this case we identified the key moments where there was potential to impact the user.
We took these key points, gathered data on user traffic and transactions, and mapped the user experience onto an emotional scale (excitement, happiness, distrust, frustration, disappointment...).
If a user misses out on an item, they feel a sense of loss. Showing similar or alternative items immediately can reduce disappointment and recover the sale.
People remember unfinished tasks better than completed ones and seek closure. If a user had to abandon their purchase, or it failed, they are reminded of the incomplete transaction and left in an unresolved state.
They can be nudged to continue browsing with recommendations like “Still looking? These items match your search” or through notifications of something similar becoming available.
Once potential directions were identified, we refined our hypotheses, considered our resources, and proposed a solution, defining the key metrics we would track to understand whether our approach was successful.
Since these user flows weren't owned by the Content Discovery team, we had to align with teams from other tribes and prioritise according to joint needs and engineering team availability.
In parallel, we user-tested the mock-ups to validate the design direction and ensure the designs would match user needs and interaction expectations.
We used the UserTesting platform to run unmoderated tests.
Our goals were to validate:
- the recommendations slider design
- the integration of recommendations in the TTS
- the user flow
We validated:
- the horizontal slider design
- that recommendations in the TTS are acceptable, although they may go unnoticed
- that users expect to always navigate back to the previous screen rather than exit the flow
After user testing, we moved on to experimentation.
For the Successful moments, we got positive results on two iterations and had to roll back and iterate on the seller review touchpoint.
For the Unsuccessful moments, we rolled out, but based on a different approach.
Experiment: Two variants were tested with different positioning of the recommendations on the TTS page.
Impact: Both variants gave us an uplift in the metrics:
- Increased item-click interactions (+0.34%)
- Increased purchase intention (+0.88%)
Learnings: Variant B caused less harm, and even some positive impact, compared to the baseline:
- Reduced cancellation rate (-3%, at 87% significance)
- Practically no harm to clicks on Wallapop Protect
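To illustrate how a result like "-3% at 87% significance" is read: the significance level is one minus the p-value of a two-proportion test comparing the baseline and variant rates. A minimal sketch in Python, with hypothetical conversion counts shaped to mirror the reported figure (our actual experimentation tooling isn't covered in this write-up):

```python
from statistics import NormalDist

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))    # two-sided p-value
    return p_b - p_a, z, p_value

# Hypothetical counts: baseline vs. Variant B cancellation events,
# roughly a 3% relative drop in the cancellation rate
diff, z, p = two_proportion_ztest(conv_a=4750, n_a=95_000,
                                  conv_b=4608, n_b=95_000)
print(f"diff={diff:+.4f}, z={z:.2f}, p={p:.3f}")    # p ≈ 0.13, i.e. ~87% significance
```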
Experiment: Two variants were tested with different positioning of the recommendations on the TTS page.
Impact: Positive impact from both variants:
- Increased item-click interactions (+0.44%)
- Increased purchase intention (+1.28%)
Learnings:
- The TRACER response rate dropped slightly (-1.7%)
Experiment: One variant was tested, as this was a new screen added to the flow.
Impact: Negative impact on both item clicks and purchase intention (PI).
Learnings: Between the purchase decision and seller review, there can be a long time window, which makes complementary items less relevant.
Iteration: We identified that users resume their recent activity (based on their latest views and latest searches).
Experiment: Two variants, one with Recently viewed items and one with Last searches.
Impact: Both variants gave us an uplift in the metrics:
- Increased item-click interactions (+2%)
- Increased purchase intention (+2.56%)
We rolled out Recently viewed items, since it had the higher impact.
Learnings & Iteration: The positive impact comes from letting users resume their previous actions and convert again to purchase intention.
Experiment: We tested "Similar items" in all three flows.
Traffic: Traffic in the bounce-back flows is very low, which meant we didn't get significant results at a user level.
Impact: Aggregated results from the bounce-back flows:
- Open Wallapop to Personalized PI: -0.18% (56% significance, not significant)
- Open Wallapop to Personalized Item Click: -0.03%
- Personalized Item Click to PI: -0.15%
- Health Metric: Open Wallapop to PI: -0.01%
Learnings:
- We kept the experiment open for additional weeks to achieve 95% power and greater confidence
- This still didn't allow us to reach the desired sample size
- Since the impact was neutral and the purpose of the feature benefits the user, the decision was to roll out the feature and monitor it
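To illustrate why low traffic makes the desired sample size unreachable, here is the standard per-variant sample-size estimate for a two-proportion test. The baseline purchase-intention rate and target uplift below are assumed purely for illustration:

```python
from statistics import NormalDist

def required_sample_size(p_base: float, rel_uplift: float,
                         alpha: float = 0.05, power: float = 0.95) -> int:
    """Per-variant sample size for a two-proportion test (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance level
    z_power = NormalDist().inv_cdf(power)
    p_var = p_base * (1 + rel_uplift)               # expected rate in the variant
    p_bar = (p_base + p_var) / 2
    variance = 2 * p_bar * (1 - p_bar)
    effect = p_var - p_base
    return int((z_alpha + z_power) ** 2 * variance / effect ** 2) + 1

# Assumed: 5% baseline PI, aiming to detect a 2% relative uplift
print(required_sample_size(p_base=0.05, rel_uplift=0.02))  # ~1.25 million users per variant
```

With the low traffic these flows receive, numbers of that order are out of reach, which is why the neutral-but-beneficial rollout decision was a product judgment rather than a statistical one.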