Phase II: Refining Interaction and Designing Wireframes

Introduction

Stay Safe addresses the need for individuals to stay informed and aware of criminal activity in their surroundings. Whether one is concerned about the safety of their own neighborhood, considering a new place to live, planning a trip to an unfamiliar area, or evaluating a potential real estate investment, our goal with this iteration is to create a cleaner, more efficient design for user interaction. We aim to ensure that users clearly understand the application’s offerings and each feature’s purpose, and that they can access every feature without difficulty. At this stage of the project we have begun developing wireframes, which serve as the transition from our initial sketches to the eventual prototype and final product. Our overall goal remains to enhance the user experience of the app, and because achieving this clarity before proceeding is critical, we have gathered feedback from multiple sources to inform improvements.

Methods

Our primary research method was the cognitive walkthrough, in which we tested our wireframes with two external evaluators (n = 2). At this stage we focused on one persona, “Molly,” and her associated scenario, which served as the basis for evaluator feedback. The evaluators participated in the walkthrough with no prior knowledge of the project, simulating the experience of an average user interacting with the app for the first time. Two key questions guided the evaluators at each step of the scenario: “Will the user know what to do at this step?” and “If the user performs the correct action, will they recognize that they are making progress toward the goal?” This process was repeated for every step and every frame of the wireframe.

In addition, we obtained informal feedback on the project through our software engineering (SE) team. The sample consisted of a class of 65 students, all of whom are enrolled in the SE course associated with the project (n = 65). We provided the SE team with questions to pose to these evaluators, focused on identifying potential improvements. Questions included, for example: “Based on our minimum viable product, what could be improved?” and “Were there any features you expected in the app or sketches that were missing?” These questions not only assisted the SE team in their work but also gave our user experience (UX) team valuable insight into any essential features we may have overlooked.

Findings

The cognitive walkthroughs gave us insight into each step required to complete the scenario. Each step was analyzed using the two guiding questions, which revealed both our strengths and areas for improvement. The consensus was that our persona could navigate the app and accomplish her primary objective with relative ease; however, some features mentioned in the persona were not present in the wireframe. The scenario involved “Molly,” a 23-year-old runner looking for safe routes with low crime rates. Her steps toward achieving this goal were found to be clear and comprehensible. Nevertheless, certain features, such as specific crime types, time-based data, and historical information, were missing from the wireframe, which could affect a real user in a similar situation. For example, Molly’s scenario has her checking crime analytics for her area, but our wireframe did not yet make that feature accessible.

We also received valuable feedback from the SE team in response to our questions. When asked, “Based on our minimum viable product, what could be improved?”, suggestions included disaggregating crime types, supporting user reports, and showing the crime rate per capita for an area. When asked, “Were there any features you expected in the app or sketches that were missing?”, the testers recommended color coding for different crimes, notes for areas with specific characteristics (such as tourist locations), crime type filtering, and accessible data on the most prevalent crime types. These answers gave us additional insight into what was missing from our project, covering gaps that the cognitive walkthroughs had not surfaced. Because these suggestions arrived early in the life of our wireframes, we could easily incorporate them into our design going forward. Such suggestions were essential to refining the app experience and addressing the project’s existing shortcomings.

Conclusions

Through our findings, we identified what has been successful in the project so far and what still needs improvement. This phase has not emphasized innovation or the introduction of new features; rather, the focus has been on refining what we have accomplished and making enhancements leading to our first prototype. Many of the questions we asked were guided by this approach. We learned that our initial wireframes provided a solid foundation and were generally well received. However, the suggestions gathered during the SE team’s demonstration highlighted essential elements, such as missing information and features, that still needed attention.

From there, we incorporated these suggestions so that our wireframes cover every relevant aspect, making certain that each feature mentioned in the user personas and scenarios is both accessible and functional in the app. Input from outside our group has been especially valuable, as our familiarity with the project can blind us to issues that external evaluators readily notice. We made it a priority to consider all input thoroughly and remained open to suggestions at each stage. Our goal for the final stretch of this phase has been to refine every part of the project we can before moving on to the next major step: the prototyping phase. With this feedback and the changes we have made, the project is far better prepared to move into the final steps of the process.

Caveats

Several limitations affected the findings and feedback we could gather. First, we completed only two walkthroughs, and even that was fortunate, as many groups received only one. One of our walkthroughs was minimally helpful, providing only general positive input without much detail. Additionally, only one user persona was available for the walkthrough, which limited input to a specific area of the project rather than its entirety. Although our informal feedback pool was larger, some opportunities were still missed: we posed three questions to the SE team but received responses to only the first two due to time constraints. Given these constraints on our research activities, any additional feedback beyond these sources would be highly beneficial.