
Yulita Putri


From Feedback to Features: Crafting a Customer Rating System in a Ride-Hailing App

5 min read · Oct 14, 2024
This is a reflection on the process I went through while working at a ride-hailing app company a couple of years ago. In this article, I’ll share how we developed a customer rating feature that transformed driver feedback into actionable insights, ultimately improving user experience in our ride-hailing app. I enjoyed working on this project because I had the chance to combine several research methods with real users before settling on a conviction about how we should shape the product. I hope this piece can help other UX/product teams out there who are looking for inspiration in discovering and prioritizing problems for their products.
Some Context of How It All Began…
At that time, I was part of the products team focused on solving problems for the drivers. As the next quarter approached, we needed to look ahead and formulate OKRs for the upcoming period. The main question revolved around what problems were out there and which ones we needed to focus on first.
As a starting point, we initiated a research project to identify the pain points that drivers experienced. In the first phase of our study, we conducted focus group discussions (FGDs) with drivers. The main objective of this phase was to identify the most salient sentiments among our drivers towards our app. Which aspects were most appreciated? Which were the least? What was the most important feature that was missing? From the first phase of the research, we gathered a wealth of feedback and insights. Most importantly, we sensed that drivers’ biggest frustration stemmed from a feeling of unfairness.
Building on our initial findings, we supplemented the feedback with a desk study to further understand drivers’ needs. We reviewed app feedback, conducted competitive analyses, and examined several past studies to create an exhaustive list of driver needs.
The third phase of the study was the “buy features” session. This was the part I enjoyed the most! The main objective of this phase was to identify which needs were prioritized by our drivers and why. “Buy features” is a prioritization method involving end users in a fun game! From this session, we not only prioritized features but also gained insight into drivers’ decision-making processes. Here’s a concise overview of the steps:
Compile all of the feedback/feature ideas from multiple sources.
Create feature cards. Ready-made templates are easy to find online; I downloaded one rather than designing my own.
Assign prices based on complexity. I needed help from my product team to price each feature; generally, the more technically difficult a feature is, the more expensive it should be.
Prepare the money. Good old Monopoly money should do, but I preferred to customize it, so I used a printable money template.
Provide each participant with a set amount of money. The total amount given to all participants should not exceed one-third of the total price for all the features.
Facilitate group discussions to reach consensus and synthesize the results.
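The budgeting rule and the tallying step above can be sketched in a few lines of Python. The feature names, prices, and participant counts below are made up for illustration; only the one-third budget rule comes from the method itself.

```python
# Sketch of the "buy features" budgeting and tallying logic.
# Feature names and prices here are invented for illustration.
from collections import Counter

features = {
    "customer_rating": 30,   # price roughly tracks build complexity
    "demand_heatmap": 50,
    "instant_payout": 40,
    "in_app_chat": 20,
}

def budget_per_participant(features, n_participants):
    """Total money handed out should not exceed 1/3 of the sum of all prices."""
    total_price = sum(features.values())
    return (total_price / 3) / n_participants

def tally_purchases(purchases):
    """Count how many participants 'bought' each feature."""
    votes = Counter()
    for basket in purchases:
        votes.update(basket)
    return votes.most_common()

budget = budget_per_participant(features, n_participants=7)
print(f"Each participant gets {budget:.2f} units of play money")
```

Keeping the total budget scarce relative to the total price is what forces participants to make trade-offs instead of buying everything.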
From the session, we found that our drivers prioritized their choices based on four main factors that interplayed in their decision-making: convenience, perceived control, frequency, and cost vs. gain. Interestingly, these four factors boiled down to a single consideration: perceived fairness. This aligned with our earlier finding that a feeling of unfairness was the most salient problem for drivers.
Because customers could already rate drivers while drivers had no comparable channel to voice their side, a customer rating feature was a natural answer to that sense of unfairness. We decided to explore it as a priority solution, but the next question was: how were we going to do that?
Before we built the feature, we needed to consider several things:
Which rating system should we adopt?
What are the caveats of each option?
How are we going to mitigate those issues?
To answer those questions, we conducted a survey and a card sorting exercise in our fourth and final phase. The objective was to simulate drivers’ responses to the same stimulus using different rating systems. We had three cohorts of drivers, each receiving one set of rating systems: either binary, three options, or five-star rating. All of them responded to the same scenarios typical for drivers encountering customers. From this study, we decided to go with the five-star rating system.
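To make the cross-cohort comparison concrete: one simple way to compare responses given on different scales is to normalize each rating onto a common [0, 1] range. This is an illustrative sketch, not the actual analysis we ran.

```python
# Sketch: map responses from binary (1..2), three-option (1..3), and
# five-star (1..5) scales onto [0, 1] so answers to the same scenario
# can be compared across cohorts.
def normalize(rating: int, scale_max: int) -> float:
    """Map a rating on a 1..scale_max scale to the [0, 1] interval."""
    if scale_max < 2:
        raise ValueError("scale must have at least 2 points")
    if not 1 <= rating <= scale_max:
        raise ValueError("rating out of range")
    return (rating - 1) / (scale_max - 1)

# The middle option of a 3-point scale and 3 stars out of 5 both land at 0.5,
# which lets you line up the cohorts' reactions to the same scenario.
print(normalize(2, 3), normalize(3, 5))
```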
Unfortunately, each rating system has its own caveats, and the five-star rating system is no exception. We learned that five-star ratings can be more subjective and tend towards desirability bias, which could make the results harder to interpret later. To mitigate that, we ran a card sorting exercise. The main objective was to identify how our drivers fundamentally classify customer behavior into ratings, and why. We then used this knowledge to design a five-star rating system that does not encourage subjectivity: from the study, we developed a classification of customer behaviors for each rating option, fostering standardized ratings by presenting drivers with a set of hand-picked behaviors whenever they choose a rating.
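In app terms, the resulting classification can be thought of as a lookup from star count to the behavior prompts shown to the driver. The behavior labels below are placeholders for illustration, not our real classification.

```python
# Illustrative mapping from star rating to the behavior prompts the app
# would surface when a driver selects that rating. Labels are placeholders.
BEHAVIOR_PROMPTS = {
    1: ["abusive language", "no-show at pickup"],
    2: ["very late to pickup", "wrong pickup point"],
    3: ["neutral, uneventful trip"],
    4: ["polite and ready on time"],
    5: ["friendly, clear directions, on time"],
}

def prompts_for_rating(stars: int) -> list[str]:
    """Return the hand-picked behaviors to show for a chosen star rating."""
    if stars not in BEHAVIOR_PROMPTS:
        raise ValueError("rating must be between 1 and 5")
    return BEHAVIOR_PROMPTS[stars]
```

Anchoring each star count to concrete behaviors is what nudges drivers towards a shared, less subjective interpretation of the scale.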
Alrighty, we developed a five-star rating system to give drivers a channel to voice their opinions. Researchers and designers worked closely throughout the process. Now we asked: how should the design look?
Tactical questions we had before designing the rating system included:
How might we represent the nuances of a five-star customer on-screen?
What design elements can we utilize to elicit the feeling of being understood?
How could we encourage drivers to rate customers through an intuitive flow and interaction?
In the online survey we ran, we also asked a subset of drivers to choose a word and emoji that best represent each rating scale. From that, we incorporated the most chosen words and emojis into the customer rating design. We also iterated on some parts of the design, including utilizing different colors on the star ratings to indicate the changeability of the ratings.
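A minimal sketch of how such labels and a changeable star row might be wired up, assuming a simple text rendering; the words, emojis, and rendering below are illustrative, not the shipped design.

```python
# Illustrative rating labels: word + emoji pairs stand in for the
# survey-derived ones; they are placeholders, not the real results.
RATING_LABELS = {
    1: ("Terrible", "😠"),
    2: ("Bad", "🙁"),
    3: ("Okay", "😐"),
    4: ("Good", "🙂"),
    5: ("Great", "😄"),
}

def star_row(selected: int, total: int = 5) -> str:
    """Render filled stars up to the selection and hollow stars after it,
    signalling that the rating can still be changed before submission."""
    return "★" * selected + "☆" * (total - selected)

word, emoji = RATING_LABELS[4]
print(f"{star_row(4)}  {word} {emoji}")
```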
Conclusion
This project highlighted the importance of understanding driver sentiment and the need for a customer rating system. Key takeaways include the value of involving users in the design process and the necessity of iterative testing to refine our approach. I encourage fellow UX and product teams to reflect on their own user engagement strategies and share their experiences in the comments below. Let’s learn from each other!

Posted Apr 15, 2025