leveraging new touchpoints
for recommendations

overview

As part of the Content Discovery team, we saw an opportunity to leverage touchpoints beyond the wall and item detail page to bring value to users by showing items relevant to them. By reaching users at the right moment with the right type of content, we aimed to improve their experience and, ultimately, increase the tribe's key metrics.

duration

6 months, 2024

tools

Figma, Miro, UserTesting

project

buyers tribe @ Wallapop

role

UX research, product designer
💔 challenge
Buyer transaction frequency has been decreasing since Nov 2022, affecting the growth of total transactions.
❤️‍🩹 problems
70% of users do not engage with other items after completing a purchase.

20% of transactions fail for various reasons, causing potential drop-offs and frustration.
❤️‍🔥 opportunity
Expand recommendations impact by adding them to different moments of the transactional journey to increase retention & sales.

user journey &
emotional scale

We took the existing transactional flows within the app, looking at both successful (post-purchase) and unsuccessful (bounce back) flows.

We analysed the different steps of each user journey and assessed how the user would feel emotionally at each moment.

🥳

the user purchases a product with success

- After making the payment (Intermediated)

- After receiving the package (Intermediated)

- After reviewing the seller (F2F & Intermediated)

🥲

the user bounces back after something went wrong

- After the purchase is rejected or has expired

- After delivery is unsuccessful

- User asks for a refund / user receives a refund

approach 1 -
flow analysis matched
to behaviours

We took the use cases and identified opportunity spaces within each flow, either as separate screens or by incorporating recommendations into existing ones. We then matched each space with the key behavioural principles that would be triggered at those specific moments.

🏅

choice-supportive bias

After buying, people tend to feel more confident about their decision. It may be beneficial to validate their choice and encourage another purchase.

"Other users also bought"
"Complementary items ”

🔗

the endowment effect

Once a user owns something, they value it more. Suggesting related accessories or complementary products makes them more likely to invest further.

"Complementary items ”
"Similar items you might like"

🏁

peak-end rule

Users remember the final moments of an experience and, if they are leaving a good review, that creates a positive emotional state in which items can be suggested for further engagement.

"More good finds"
"Other users also bought"
"From similar sellers"

approach 2 - emotional scale matched to behaviours

Rather than analysing the whole flow, in this case we identified the key moments where there was potential to impact the user.

We took these key points, gathered data on user traffic and transactions, and mapped the user experience onto an emotional scale (excitement, happiness, distrust, frustration, disappointment...).

🥊

loss aversion/frustration reduction

If a user misses out on an item, they feel a sense of loss. Showing similar or alternative items immediately can reduce disappointment and recover the sale.

"Similar items you might like"
"Same item from another seller"

⚡️

Zeigarnik effect/unfinished task

People remember unfinished tasks better than completed ones and seek closure. If a user had to abandon their purchase or it failed, they are left with an incomplete transaction and an unresolved process.

They can be nudged to continue browsing with recommendations like “Still looking? These items match your search” or through notifications of something similar becoming available.

"Items which match your search"
"Notify me of similar items!"

UX strategy

Once potential directions were identified, we refined our hypothesis, considered our resources and proposed a solution whilst defining the key metrics we would look at to understand if our approach was successful or not.

hypothesis
We believe that by strategically integrating context-aware recommendations we can improve user engagement, recover failed transactions, and enhance the overall shopping experience.
WSLL
Primary Metric:
Uplift in Open Wallapop to Condis PI

Secondary Metrics:
- Open Wallapop to Condis Item Click
- Condis Item Click to PI
- Sessions per User

Health metrics:
- ensure no negative impact on interactions with sections of the screens owned by other tribes
- Open Wallapop to PI
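For clarity, the funnel metrics above are simple ratios between consecutive event counts. A minimal sketch with hypothetical numbers (the event counts below are illustrative, not real data):

```python
# Hypothetical event counts for one experiment variant (illustrative only).
events = {
    "open_wallapop": 200_000,    # sessions opening the app
    "condis_item_click": 9_400,  # clicks on a recommended item
    "condis_pi": 1_150,          # purchase intentions (PI) from those clicks
}

# Primary metric: Open Wallapop to Condis PI
open_to_pi = events["condis_pi"] / events["open_wallapop"]

# Secondary metrics: the two intermediate funnel steps
open_to_click = events["condis_item_click"] / events["open_wallapop"]
click_to_pi = events["condis_pi"] / events["condis_item_click"]
```

Note that the primary metric is the product of the two secondary ones, which is why the health metric (Open Wallapop to PI overall) is tracked separately.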
proposed solution
If we implement personalised recommendations tailored to the user’s journey, we will keep users active and maximise engagement and sales opportunities. We will target:

Successful moments:
Show Complementary items, leveraging the user’s recent buying behaviour.

After seller review:
Also show complementary items, to see whether the same pattern works in this touchpoint or whether it differs too much.

Unsuccessful moments:
Show Alternative products or Similar products to reduce frustration and recover lost sales.

alignment & timeline

Since these user flows weren't owned by the Content Discovery team, we had to align with teams from other tribes and prioritise according to joint needs and engineering team availability.

In parallel, we user-tested the mock-ups to validate the design direction and ensure the designs would match user needs and interaction expectations.

user testing

We used the UserTesting platform to run unmoderated tests.

Our goal was to:
- validate the recommendations slider design
- validate the integration of recommendations in the TTS
- validate the user flow

We validated that:
- horizontal sliders work well
- recommendations in the TTS are acceptable, although they may go unnoticed
- users expect to always navigate back to the previous screen rather than exit the flow

designs & results

After user testing, we went onto experimentation.

In the Successful moments we got positive results on two iterations, and had to roll back & iterate on the seller review touchpoint.

For the Unsuccessful moments we rolled out, but based on a different approach.

complementary items after successful payment

ROLLOUT

Experiment: Two variants were tested, each with a different positioning of the recommendations on the TTS page.

Impact: Both variants gave us an uplift in the metrics:
- Increase in item click interactions (+0.34%)
- Increase in purchase intention (+0.88%)

Learnings: Variant B caused less harm and even had a positive impact compared to the baseline:
- Reduced cancellation rate (-3%, at 87% significance)
- Practically no harm to clicks on Wallapop Protect
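The significance figures quoted here (e.g. 87%) come from comparing variant and baseline conversion rates. A sketch of how such an uplift is typically assessed with a two-proportion z-test (all counts below are hypothetical, not the experiment's real data):

```python
from math import sqrt
from statistics import NormalDist

def uplift_significance(conv_a, n_a, conv_b, n_b):
    """Relative uplift of variant B over baseline A, and the two-sided
    confidence (1 - p-value) that the difference is not due to chance."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    confidence = 2 * NormalDist().cdf(abs(z)) - 1
    return (p_b - p_a) / p_a, confidence

# Hypothetical counts: 100k users per variant, ~3% relative uplift
uplift, confidence = uplift_significance(5_000, 100_000, 5_148, 100_000)
```

With these made-up numbers the uplift lands at roughly 87% confidence, below the usual 95% bar, which is why small deltas need large samples to be called significant.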

complementary items after item is delivered

ROLLOUT

Experiment: Two variants were tested, each with a different positioning of the recommendations on the TTS page.

Impact:
Positive impact from both variants:
- Increase in item click interactions (+0.44%)
- Increase in purchase intention (+1.28%)

Learnings:
- the TRACER response rate dropped slightly (-1.7%)

complementary items after seller review

ROLLBACK

Experiment: One variant was tested, as this was a new screen added to the flow.

Impact:
Negative impact on the click & PI metrics.

Learnings: Between the purchase decision and seller review, there can be a long time window, which makes complementary items less relevant.

Iteration: We identified that users resume their recent activity (based on their latest views and latest searches).

ITERATION & ROLLOUT

Experiment: Two variants with Recently viewed items & Last searches

Impact:
Both variants gave us an uplift in the metrics:
- Increase in item click interactions (+2%)
- Increase in purchase intention (+2.56%)
We rolled out recently viewed items, since they had the higher impact.

Learnings & Iteration: The positive impact comes from allowing users to resume their previous actions and convert to PI again.

bounceback flow

ROLLOUT & MONITOR

Experiment: We tested "Similar items" in all three flows.

Traffic: Traffic in the bounceback flows is very low, which meant we didn't get statistically significant results at the user level.

Impact:
Aggregated results from bounceback flows:
- Open Wallapop to Personalized PI: -0.18% (56% significance, not significant)
- Open Wallapop to Personalized Item Click: -0.03%
- Personalized Item Click to PI: -0.15%
- Health Metric: Open Wallapop to PI: -0.01%

Learnings:
- We kept the experiment open for additional weeks to achieve 95% power and greater confidence
- This still didn't allow us to reach the desired sample size
- Since the impact was "neutral" and the purpose of the feature is beneficial for the user, the decision was to proceed with rolling out the feature and monitoring it
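Why low bounceback traffic prevented significance can be illustrated with a standard power calculation: the smaller the uplift you want to detect, the more users each variant needs. A sketch using the normal approximation for a two-proportion test (the baseline rate and uplift below are hypothetical, not the experiment's real numbers):

```python
from math import ceil, sqrt
from statistics import NormalDist

def users_per_variant(p_base, abs_uplift, alpha=0.05, power=0.95):
    """Approximate users needed per variant for a two-proportion z-test
    to detect an absolute uplift at the given alpha and power."""
    p1, p2 = p_base, p_base + abs_uplift
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_power = NormalDist().inv_cdf(power)
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / abs_uplift ** 2)

# Hypothetical: detecting a 0.2pp absolute uplift on a 5% baseline at
# 95% power takes over 300k users per variant - far more than a
# low-traffic bounceback flow accumulates in a few extra weeks.
needed = users_per_variant(0.05, 0.002)
```

Larger uplifts shrink the requirement quadratically, which is why the same setup reached significance quickly in the high-traffic successful-moment flows.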

algorithm iterations

As the experiments with the initial recommendations finished, the ML team iterated in parallel on the quality of the recommendations, ensuring that what we showed was relevant to users and measuring the impact on the metrics as changes were applied.

next steps

Optimising: trial other recommendations that can be added to these touchpoints and keep monitoring the impact.

New flows & touchpoints: analyse other touchpoints that are not being used to their full potential.

UI & copy: work on communication, UI & TOV to ensure language and transitions adapt to the screens in an impactful yet non-disruptive way.

Feedback: gather in-app feedback to understand how users feel about the recommendations and when they see them.

Emails: Almost all the touchpoints have a corresponding email communication linked to them. This is also a potential space for recommendations.