<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Katie Soldau - Reading 6</title>
<link href="style.css" rel="stylesheet" type="text/css" />
</head>
<body class="page_readings">
<div class="container">
<div class="content">
<div id="top"></div>
<div class="name">
<p>K<span class="smaller">ATIE</span> <span class="taller">S</span><span class="smaller">OLDAU - IS4300</span></p>
<p class="email"> [email protected] </p>
</div>
<div class="navigation_bar_container">
<div class="navigation_bar">
<ul class="nav_options">
<li><a href="index.html">about</a></li>
<li><a href="readings.html" class="current">readings</a></li>
<li><a href="homework.html">homework</a></li>
<li><a href="team_project.html">team project</a>
<li>
</ul>
</div>
</div>
<!-- for navigation -->
<div class="readings">
<!-- R6 -->
<div>
<h1>
<span class="h1_text">Reading 6 -- Evaluation Overview and Preparing for Usability Testing</span>
</h1>
<p>
<a href="http://www.ucc.ie/hfrg/projects/respect/urmethods/paperproto.htm" class="citation">Rettig wrote a little about paper prototyping.</a> He said it’s a simple and easy way to test an interface. It is a low fidelity prototype. To do this type of prototyping a design team sits with a user or records a user’s interactions with the interface. Interface elements are activated by the user and the user is encouraged to express their ideas and opinions throughout the process. Unfortunately paper prototypes can’t be used to reliably simulate system response times or the evaluation of fine design details. To utilize this method a design team sketches out designs of ideas they wish to use. Sophisticated methods even involve cardboard or plastic elements. Paper prototyping is particularly useful to figure out how users react to your interface design and allows for usability problems to be detected.
</p><p>
Jakob Nielsen, in his <a href="http://www.nngroup.com/articles/ab-testing-usability-engineering/" class="citation">A/B Testing, Usability Engineering, and Radical Innovation article</a>, describes three approaches to better design: A/B testing, usability methods, and radical innovation. In A/B testing, users are split into two groups (one large, one small), and the smaller group sees an alternative design. After statistics are collected, the design with the better KPI (key performance indicator) is chosen. This type of testing is cheap because there’s just the one-time expense of designing the second system and running the software. A/B testing is valuable because it’s the only way to determine the better approach when the alternatives don’t differ by much. There’s essentially no risk involved if the statistical analysis is done correctly, and conducting the test doesn’t require an understanding of design principles or user behavior. It would be possible to run an A/B test every week. </p><p>
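To make the deciding step concrete for myself (this is my own illustration, not code from Nielsen’s article, and the conversion counts are made up), here is a minimal sketch of the kind of two-proportion significance test used to pick the winning design by its KPI: </p>
<pre>
# Hypothetical A/B comparison: keep variant B only if its conversion
# rate beats control A by a statistically significant margin.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference in rates."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # via normal CDF
    return z, p_value

# Made-up numbers: A is the large control group, B the small variant.
z, p = two_proportion_z_test(conv_a=900, n_a=30000, conv_b=120, n_b=3000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p is well below 0.05, so B wins here
</pre>
<p>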
Usability methods include everything in user-centered design (UCD): user testing, field studies, iterative design, low-fidelity prototyping, competitive studies, and so on. This type of testing costs anywhere from $200 to $38,000, but even that is small compared to the budget required to run a big website, says Nielsen. One benefit of usability methods is that the metrics you target are normally doubled in terms of improvement. The methods are low risk because bad ideas are discarded during user testing. Trained specialists are often required to perform them, and it is common for usability tests to be run monthly. </p><p>
Radical design innovation is when a completely new design is created rather than evolved from a past design; fundamental breakthroughs routinely come from this method. It is extremely costly, running up to hundreds of millions of dollars, and research labs and experimentation are often needed. The benefit is that products can end up dramatically better than they were before. However, this method is incredibly risky because almost all innovations fail. To make success more probable, the best people in the world should be employed. Breakthroughs of this kind happen about once every decade.
</p><p>
Nielsen recommends the usability strategy because it makes the most money on average. However, he says not to limit oneself to a single strategy. Once you have a product, continuous quality improvement is key, and that’s what usability methods and A/B tests provide. If the budget exists, radical innovation can also be pursued. </p><p>
<a href="http://www.nngroup.com/articles/ab-testing-usability-engineering/" class="citation"> The User Interface Evaluation in the Real World paper</a> compares four techniques for evaluating a user interface: heuristic evaluation, software guidelines, cognitive walkthroughs, and usability testing. They found that heuristic evaluation produced the greatest results because it found the most problems. The downside to heuristic evaluation is that several people with the necessary knowledge and experience are needed to utilize the technique. Also, this type of evaluation also reported problems that were really specific and low-priority. The paper also describes how usability testing did a good job of finding serious problems. It found general and recurring problems and avoided the low-priority ones. However, usability testing was the most expensive out of all techniques covered and it still failed to find several serious problems. Evaluating guidelines was found to be the best out of the techniques at finding recurring and general problems. The guidelines forced evaluators to take a broad look at the interface instead of just seeing a subset of it. However, the guideline-based evaluators failed to find several serious problems. The cognitive walkthrough technique performed similar to guidelines. Since this is the first time the technique has been used by a group of evaluators, the paper talked about what might be useful additions for it. It said that a method that defined tasks and was driven by a model underlying the walkthrough methodology would be really helpful. Though the paper did talk about how the problems cognitive walkthroughs found were less general and less recurring than problems found by other techniques. Generally, heuristic evaluation and usability testing have the greatest advantages. Guidelines and cognitive walkthroughs can still be used by software engineers though. Yet the paper says that no matter how good the technique is, most of the usefulness and strength of the techniques come from the UI professionals who are using them and that the importance of these people cannot be underestimated. </p> <p>
<a href="https://www.ccs.neu.edu/course/is4300sp13/ssl/articles/p381-wharton.pdf" class="citation">The Applying Cognitive Walkthroughs to More Complex User Interfaces paper</a> examines the success of the cognitive walkthrough technique. Three complex software systems are critiqued and five core issues are focussed on. One, task selection, coverage, and evaluation. Two, the process of doing a cognitive walkthrough. Three, requisite knowledge for the evaluators. Four, group walkthroughs. And five, the interpretation of results. </p><p>
A cognitive walkthrough is a methodology for performing theory-based usability evaluations of user interfaces. It focuses on the user’s cognitive activities and aims to improve software by detecting defects so that they can be removed. The paper found that cognitive walkthroughs were not close to being successful: the technique did not perform well on the real systems and environments that were tested, and it needs to be researched and extended further. The problems arise from process mechanics and from limitations in the current method. One limitation is that the method does not mesh well with current software development practice. The developers the authors worked with were interested in the topic but had neither the training nor the need to become experts in it, because usability isn’t the only thing they have to focus on. For now, it seems this method should be held off on until it is further refined. </p><p>
<a href="https://www.ccs.neu.edu/course/is4300sp13/ssl/articles/p373-nielsen.pdf" class="citation">The Finding Usability Problems Through Heuristic Evaluation paper</a> examined heuristic evaluation. Heuristic evaluation is a cost-efficient method that aids in finding usability problems in a user interface design by having a small number of evaluators look at the interface. These evaluators look at the interface and see how well it adheres to recognized usability principles. The paper found that usability specialists performed much better than those who had no usability expertise. When those specialists had expertise in the kind of interface being evaluated they did even better than the other specialists who did not have the same expertise. It was also found that having 3-5 evaluators made for a good number and recommended that this be the number used by groups in the future. Heuristic evaluation is more likely to find major usability problems than minor ones. Still, about twice as many minor errors are found as major errors. When using this type of evaluation special attention must be paid to problems dealing with the lack of clearly marked exists or missing interface elements. When heuristic evaluation is performed the expertise of the staff maters quite a lot. Still this method is a great method because even though there are shortcomings, those shortcomings have been identified and can be tackled using other methods.
</p><p>
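As a side note (my own addition, using the Nielsen-Landauer problem-discovery model rather than anything stated in this particular paper, with a single evaluator assumed to find about 30% of the problems), the three-to-five-evaluator recommendation can be sanity-checked with the formula found(n) = 1 - (1 - λ)^n: </p>
<pre>
# Expected share of usability problems found by n independent evaluators,
# per the Nielsen-Landauer model: found(n) = 1 - (1 - lam)**n.
# lam, the chance one evaluator spots a given problem, is assumed ~0.3.
lam = 0.3
for n in range(1, 9):
    share = 1 - (1 - lam) ** n
    print(f"{n} evaluators find about {share:.0%} of the problems")
# Three to five evaluators already uncover roughly 66-83% of problems,
# which is why a small group is the cost-efficient sweet spot.
</pre>
<p>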
Jakob Nielsen wrote another article, <a href="http://www.nngroup.com/articles/time-budgets-for-usability-sessions/" class="citation">Time Budgets for Usability Sessions</a>, that focuses on how time is wasted in testing. When users engage in nonessential activities, up to 40% of testing time is wasted; it is better to focus on watching users perform tasks with the target interface design. Time is routinely wasted because design teams spend little time actually observing their users. To avoid wasting time, one day a week should be designated a “user day.” As new users routinely come through, the designers can see how they respond to design features, and decisions can then be made that accurately reflect users’ needs. Sadly, most companies rarely, if ever, test their designs with users.
</p><p>
When companies do actually observe users, they waste most of their time. In the 60-90 minutes of user time that is available, the focus should be on watching how users naturally behave: what users do can be vastly more important than what they say. Common time wasters include extensive demographic surveys, subjective satisfaction ratings after each task, satisfaction questionnaires with dozens of questions, and ending the session with a long discussion. Instead, overhead should be kept to about five minutes, and any necessary subjective satisfaction ratings should take only another five. The bulk of user-testing time must be devoted to observing users as they perform interactive tasks with your design in order to make the best use of a project’s resources. </p><p>
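Running Nielsen’s numbers myself (a quick back-of-the-envelope sketch; the five-minute caps come from the article, and the session lengths are the 60-90 minute range he mentions): </p>
<pre>
# Share of a session left for actual observation once overhead and
# subjective-satisfaction ratings are each capped at about 5 minutes.
for session_min in (60, 90):
    overhead, ratings = 5, 5
    observation = session_min - overhead - ratings
    print(f"{session_min}-minute session: {observation} min observing "
          f"({observation / session_min:.0%} of the session)")
# That leaves 83-89% for watching real task performance, versus the
# roughly 60% left when 40% of the time is wasted.
</pre>
<p>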
In class I’d like to discuss more about radical design innovation, mostly because I’d find it interesting to see how such novel and new products can be created and tested. I know this isn’t something I’ll be using anytime soon, in class or otherwise, but it would be cool to get a little insight into how it can be used to find great new things (or fail at doing so). I’d like to know how companies and others are able to do this without wasting huge amounts of money. Do they assume that finding one good product or idea could make up for the rest? Or are their budgets so large that it doesn’t particularly matter? I found some of the papers confusing in terms of what information they were trying to convey. However, I understand that the point of these papers is to present their research and findings, so the way they convey information may not be compatible with how I wish to learn it. That being said, I think the conclusions of these papers held a lot of useful information and condensed the findings of the studies into something useful.
</p>
</div>
<!-- end of R6 -->
</div>
<!-- for readings -->
</div>
<!-- for content -->
</div>
<!-- for container -->
</body>
</html>