This was a spec project from Bitesize UX that I completed over the span of a week: RunBuddy, a fictional app that helps runners find great routes in NYC.
UX/UI Design: research, research synthesis, sketching, design iteration, visual design, mockups
The task at hand was to design a feature that would allow users to quickly rate and review a route they just ran, based on criteria that may help other runners decide whether it is a good route for them. Bitesize UX provided research gathered from RunBuddy users, along with personas created from that research.
Deliverables:
1. Final mockups
2. Show how the research connected to the final designs
3. Outline next steps that would be taken after completing mockups
My first reaction after reading over the brief was "What are the different types of rating systems, and what are their benefits?" I began researching the main function of this feature and discovered ...

Star Ratings - "The star rating has its place. It is very useful in systems where the rating you leave is relevant to you and you alone. Your music library, for example, you know why you rate one song two stars and another at four"

Emotion Ratings - Emotion ratings are used for feedback on something that can change in a matter of minutes, e.g. a restroom at the airport. Sad face = needs cleaning; happy face = this restroom is sparkling clean.

Binary Ratings - "Thumbs-up ratings help products personalize content"

Because a star rating is relevant to you and you alone, it doesn't scale well when more than one person rates the same thing. The emotion rating system suits things in constant flux; with my earlier example in mind, a route's characteristics can change with time and weather, but there is no specific person to report them to so they can be fixed. Lastly, the binary system has clear benefits for a product like RunBuddy: thumbs-up ratings help products personalize content, and since RunBuddy wants to help runners find great routes, this system can curate routes to each runner's liking.
Bitesize UX crafted these personas from prior research.
Before putting any ideas down on paper, I looked at plenty of feedback and review UI patterns from other products to start fleshing out what might and might not work for RunBuddy.
Angela states, "I would be happy to leave feedback on the route I just ran - but I don't always want to type out a whole review." With this in mind, I thought the quickest and most straightforward way for Angela to leave feedback was a series of tags based on her initial thumbs-up or thumbs-down rating.
Andy states, "I would love to find some scenic routes I may not have known about." In addition to the tags based on the prior rating, I thought it would be a great fit for Andy to include tags describing a route's characteristics, such as whether it is scenic.
After putting the first flow of wireframes together, I started to think about interaction design and added a submit button so users could confirm their response before moving forward. The "skip question" placement felt contradictory on the left side, since the right side typically indicates moving forward. I also relocated the progress bar from the top right to just above the submit button and put "skip question" in its place, since users are accustomed to the top left and right corners holding actions.
David states, "When I pick a route, I want to know if it's crowded or not." The previous flow had a question about whether the route was crowded, but when thinking about hierarchy, that question doesn't carry as much value as one like "Did you feel safe on this route?" I consolidated it by adding "crowded" and "deserted" tags to the third screen.
The visual design was largely established in the brief, so I carried it through the final designs to stay on brand, using the same colors and backgrounds.
The next steps I would like to take are to create a clickable prototype of the mockups and test the flow in a contextual inquiry, since Angela brings up a great point about being out of breath after a run; I want to see whether the flow is as quick and straightforward as intended. I would also like to A/B test the binary rating system against the emotion rating system, as I'm curious whether an intermediate rating would change anything. After testing the flow and rating systems, I would like to design how all the gathered feedback is organized and surfaced: How is the percentage of liked routes displayed? What does searching by tag look like? Finally, how are the summaries that runners left presented?
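To make that last step concrete, a route summary could roll up the thumbs-up/down ratings and tags collected by this flow. This is only a hypothetical sketch of the data model; the function and field names (`summarize_route`, `percent_liked`, `top_tags`) are my assumptions, not part of the brief:

```python
from collections import Counter

def summarize_route(ratings, tags):
    """Roll up binary feedback for one route.

    ratings: list of booleans, True for a thumbs-up.
    tags: list of tag strings runners selected (e.g. "scenic", "crowded").
    """
    # Percentage of thumbs-up ratings; None if no one has rated yet.
    percent_liked = round(100 * sum(ratings) / len(ratings)) if ratings else None
    # The three most frequently chosen tags, for display on the route card.
    top_tags = [tag for tag, _ in Counter(tags).most_common(3)]
    return {"percent_liked": percent_liked, "top_tags": top_tags}

# Example: two thumbs-up, one thumbs-down, three tags left by runners.
summary = summarize_route([True, True, False], ["scenic", "scenic", "crowded"])
# -> {"percent_liked": 67, "top_tags": ["scenic", "crowded"]}
```

A summary like this would also support the tag search mentioned above, since the counted tags can be indexed per route.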