User-Centered Design: Human Factors


User Centered Design






Human-Machine Model


  • Human Component

    • Effectors
    • Senses
    • Supportive Processes
  • Machine Component

    • Controlled Process
    • Displays
    • Controls
  • Environment

    • Workspace
    • Physical Environment
    • Work Organization

What is Good Design?

  • Subsidiary questions:

    • What are interactive systems?
    • Why do we design them?
    • How do we know if we’ve succeeded?
    • What happens if we fail?
    • How do we maintain a track record of success in design?
    • How can we also retain our creativity?

Subsidiary Questions

  • What are interactive systems?

    • The user interface
    • Technocentric versus anthropocentric approaches
  • Why do we design them?

    • To resolve a situation of concern


  • How do we know if we’ve succeeded?

    • By testing whether the situation is resolved
  • But we can’t do this during design

    • By measuring or predicting usability

The Need for Usability Analysis

  • The need arises when we’re faced with the following kinds of questions during design:

    • Will the operator be able to handle emergency telephone calls faster than before?
    • Have we simplified the design of this ticket machine to a point where people will use it successfully on their first attempt?
    • Is the small size of this screen target going to result in a significant number of errors in selecting it?
    • If the user invokes this command by mistake, will he or she find the escape route?

The Need for Usability Analysis

  • Will the word-processor user remember that there are three different ways of changing the properties of a formatting style?

  • Is it so difficult to change the layouts of menus that hardly any users will bother?

  • Once the system is set up to support work-groups of a particular size and structure, how much effort is involved in changing the system to support changes in the group?

  • How many of the people who try the system will actually continue to use it?

  • To answer these questions, we analyze the design in terms of its usability.

Usability Factors

  • The speed of performance of the activity, which affects how many people are needed to perform it

  • The incidence of errors while performing the activity

  • The user’s ability to recover from errors that occur

  • The magnitude of the user’s task in learning to use the system

  • The user’s retention of learned skills

  • The user’s ability to customize the system to suit their way of working or the situation of use

  • The ease with which people can reorganize activities supported by the system—their own activities and other people’s

  • Users’ satisfaction with the system.
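Several of these factors can be estimated from simple session logs collected during testing. A minimal sketch in Python; the log format, field names, and values below are illustrative assumptions, not from the slides:

```python
from statistics import mean

# Hypothetical session log: one record per task attempt (format is an assumption).
sessions = [
    {"task": "buy_ticket", "seconds": 42.0, "errors": 1, "completed": True},
    {"task": "buy_ticket", "seconds": 55.5, "errors": 3, "completed": False},
    {"task": "buy_ticket", "seconds": 38.2, "errors": 0, "completed": True},
]

speed = mean(s["seconds"] for s in sessions)        # speed of performance
error_rate = mean(s["errors"] for s in sessions)    # incidence of errors
success = sum(s["completed"] for s in sessions) / len(sessions)

print(f"mean time {speed:.1f} s, {error_rate:.2f} errors/attempt, {success:.0%} completed")
```

Factors such as learnability or retention would need repeated sessions over time; a single log like this only covers the first three factors.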

What happens if we fail?

Breakout #1

  • Divide into your groups

  • Do the following:

    • Watch the demonstration video.
    • List all the usability issues you can identify in the use of this product.

Fundamentals of Interactive System Design

  • Identifying the human activity that the proposed interactive system will support

  • Identifying the people, or users, who will perform the activity

  • Setting the levels of support that the system will provide, otherwise known as the system’s usability

  • Selecting the basic form of solution to the design problem.

Product Development Phases

Usability touches each phase of Product Development

Using Human Factors Tools & Techniques

Two Ways of Assessing Usability

  • Analytically - by simulating how the user’s activity will be performed

  • Empirically - by building and testing a prototype

Analytical Methods Advantages

  • Analytical methods have advantages:

    • We can test designs that we can’t build
    • We can save time by not building a prototype
    • We don’t need to plan and conduct an experiment
    • In other words, they can be used more quickly, earlier in the design

Empirical Methods Advantages

  • Empirical methods have advantages:

    • We receive more precise information about how a user will interact with the product
    • We can spot major flaws more easily, reducing the modifications needed once the product is fielded
    • In other words, they provide richer information; however, there are also increased costs associated with the experimentation

Two Types of Information

  • Qualitative

  • Quantitative

Tools & Techniques


Interviews

  • Purpose: Useful for reaching target populations and gathering rich information

  • Types: structured, unstructured

  • Making sure the following are covered:

    • The interview’s purpose, explained at the outset.
    • Enumerating activities by asking a general question, e.g., “What are your tasks?” and following this up with more specific questions.
    • Work methods: finding out how tasks are performed.
    • Performance issues. These provide a measure of the “usability” of the current support system and the need for improvement.


Questionnaires

  • Purpose: Useful for reaching large populations and thus gathering large amounts of data

  • Issues to consider in design:

    • The need to make things easy for the subject.
    • The need for unambiguous questions.
    • The need to gather precise data.
    • The need to support the intended analysis.
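“Supporting the intended analysis” means deciding how responses will be summarized before the questions are written. A minimal sketch of aggregating Likert-scale ratings; the questions, scale, and scores below are invented for illustration:

```python
from statistics import mean, stdev

# Hypothetical 5-point Likert responses (1 = strongly disagree, 5 = strongly agree);
# questions and scores are invented examples.
responses = {
    "The ticket machine was easy to use": [4, 5, 3, 4, 2, 5],
    "I could recover from my mistakes":   [2, 3, 2, 4, 3, 1],
}

for question, scores in responses.items():
    print(f"{question}: mean {mean(scores):.2f}, sd {stdev(scores):.2f}")
```

Planning this summary in advance also exposes ambiguous questions: if two readings of a question would change how a score is interpreted, the question needs rewording.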

Focus Groups

  • Purpose: Useful for reaching target groups of users to get consensus information on product information

  • Method: Facilitated workshops of groups of 5-10 people - current or likely users

  • Type of Information: subjective (tasks, requirements, product ideas, etc.)

Function Allocation

Function Allocation Strategy

  • Mandatory - system requirements, safety, legal or labor constraints, etc.

  • Balance of value - assignment based on relative performance

  • Utilitarian - human is available and is capable

  • Cost-based - relative cost of performance

  • Affective and cognitive support – meaningful work and maintaining adequate knowledge of the system
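These strategies can be sketched as an ordered decision rule. The record fields and the ordering below are illustrative assumptions, not part of the source material; the affective/cognitive-support strategy is qualitative and is not encoded:

```python
def allocate(function):
    """Decide whether a function goes to the human or the machine.
    Strategies are checked in the order listed above (illustrative only);
    the affective/cognitive-support strategy is a judgment call, not encoded."""
    if function.get("mandated_to"):                 # mandatory: safety, legal, labor
        return function["mandated_to"]
    if function.get("machine_perf", 0) > function.get("human_perf", 0):
        return "machine"                            # balance of value
    if function.get("human_available") and function.get("human_capable"):
        return "human"                              # utilitarian
    # cost-based: assign to whichever side performs the function more cheaply
    return "machine" if function.get("machine_cost", 0) < function.get("human_cost", 0) else "human"

print(allocate({"mandated_to": "human"}))                  # prints "human"
print(allocate({"machine_perf": 0.9, "human_perf": 0.6}))  # prints "machine"
```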


Guidelines

  • Purpose: Guidelines provide us with advice on the solution of design problems.

    • They suggest possible solution strategies.
    • Each guideline has a context or domain within which it applies.
    • Guidelines act as heuristics, drawing on assumptions derived from past experience.
    • In many cases, the experience we draw on includes empirical research.

Roles of Guidelines

  • Raising awareness of concepts

  • Assisting in design choices

  • Offering strategies for solving design problems

  • Supporting evaluation

Problems with Guidelines

  • Problems in selecting guidelines:

    • the tendency to apply the first guideline that seems relevant
  • Problems in applying guidelines, e.g., from Tullis (1988): “Reduce search times by minimizing the number of groups of items while designing each group to subtend a visual angle as close as possible to 5 degrees”

    • Does this apply to our problem?
    • Will it have the desired effect?
    • Will the resulting design really be more usable?
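Whether Tullis’s 5-degree figure applies to a given layout can be checked with basic geometry. A sketch; the group size and viewing distance below are assumed example values:

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Visual angle subtended by an object of the given size at the given distance."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# A 4 cm group of items viewed from an assumed 50 cm screen distance:
print(f"{visual_angle_deg(4, 50):.1f} degrees")   # close to the 5-degree target
```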

Checking a Guideline

  • Check the guideline against the problem statement:

    • Is the guideline appropriate to the activity that the design is to support?
    • Is it applicable to the type of user who will perform the activity?
    • Does it address the particular levels of support or usability factors that determine the success of the design?
    • Is it appropriate to the form of solution chosen?

Guideline Categories

  • Five contexts that cover the spectrum of guideline use:

    • General principles that apply to any user interface
    • Guidelines that apply to forms of solution for interactive display layouts, including those that use color
    • Guidelines for use with specific interaction styles
    • Sets of guidelines offered in style guides associated with proprietary systems and standards
    • Guidelines for the design of individual user interface components supporting particular user tasks.

General Design Principles

  • Two universal principles:

    • Design with a view to supporting the user’s task or process.
    • Know the user (Hansen, 1971 in Newman and Lamming,1995).
  • Suggested general principles:

    • Shneiderman (1992):

      • Strive for consistency.
      • Enable frequent users to use shortcuts.
      • Offer informative feedback.
      • Design dialogues to yield closure.
      • Offer simple error handling.
      • Permit easy reversal of actions.
      • Support internal locus of control.
      • Reduce short-term memory load.

    • Nielsen and Molich (1989):

      • Simple and natural dialogue.
      • Speak the user’s language.
      • Minimize user memory load.
      • Be consistent.
      • Provide feedback.
      • Provide clearly marked exits.
      • Provide short cuts.
      • Good error messages.
      • Prevent errors.

Examples of Guidelines

  • Make all facets of design consistent with user expectations considering both the user’s prior experience and well established conventions, such as symbology

  • Design workstations, controls, and displays around the basic capabilities of users regarding such characteristics as strength, dexterity, memory, reach, visual acuity, and hearing

  • Be sure that auditory signals are clearly above users’ detection thresholds for amplitude and within their audible frequency range, considering the effects of ambient noise

  • Be sure the brightness of visual signals is sufficient to be perceived by users working under various conditions of ambient illumination and that the brightness and contrast are adequate to optimize legibility

  • Be careful that labels and displayed information are easy to read from the typical viewing angles and distances. Symbol size, contrast, color, and display depth must be considered.

Examples of Guidelines

  • Ensure abbreviations, symbols, text, and acronyms placed on, or displayed by, the device are also used in the instructional manual

  • Design control knobs and switches to correspond with both general conventions and any that are unique to the user population

  • Arrange and design knobs, switches, and data-entry keys in a way that reduces the likelihood of accidental activation

  • Use color and shape coding to facilitate the rapid identification and discrimination of controls and displays. Colors and codes should correspond to universal and industry conventions.

  • Space keys, valves, and control knobs sufficiently far apart for easy manipulation. This will also reduce the likelihood of accidental activation.


Performance Task Analysis

  • Time

  • Errors

  • Quality

  • Quantity

  • Workload, etc.

Methods of Performance Analysis

  • Three widely-used methods:

    • GOMS analysis methods, including keystroke-level analysis
    • Heuristic evaluation: introducing walkthrough and performance analyses as needed
    • Cognitive walkthrough, in which performance analysis is folded into the sequence analysis

Two Stages of Analysis

  • We are making predictions about how a human activity, performed as a sequence of steps, will be supported.

  • So:

    • 1. We must establish the sequence of steps
    • 2. We must analyze the performance of each step

GOMS Analysis

  • Analysis in terms of four components of the activity:

    • Goals that users are trying to achieve
    • Operators, i.e., basic actions that users perform
    • Methods employed by users to attain goals, made up of sequences of operators
    • Selection rules for choosing between methods
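At the keystroke level, a method’s execution time can be predicted by summing operator times. A sketch using the classic keystroke-level operator estimates (Card, Moran & Newell, 1983); the example method sequence is invented:

```python
# Classic keystroke-level operator time estimates, in seconds
# (Card, Moran & Newell, 1983): K = keystroke (average skilled typist),
# P = point with mouse, H = home hands between devices, M = mental preparation.
KLM = {"K": 0.2, "P": 1.1, "H": 0.4, "M": 1.35}

def predict_time(operators):
    """Predicted execution time for a sequence of keystroke-level operators."""
    return sum(KLM[op] for op in operators)

# Hypothetical method: think, point at a menu item, click,
# move hands to the keyboard, think, then type a 4-letter command.
method = ["M", "P", "K", "H", "M", "K", "K", "K", "K"]
print(f"{predict_time(method):.2f} s")
```

Selection rules would then choose between competing methods, e.g., by comparing their predicted times.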

Heuristic Evaluation

  • Can be applied to problems where GOMS and Cognitive Walkthrough are unsuitable, i.e.:

    • (a) method of operation is not fully predictable, and
    • (b) user is not a complete novice
  • The Heuristic Evaluation method:

    • employ a team of evaluators to identify problems in the design
    • provide a list of heuristics (general guidelines) to guide their evaluation, e.g.:
      • Simple and natural dialogue
      • Speak the user’s language
      • Minimize user memory load
      • Be consistent
      • Provide feedback
      • Provide clearly marked exits
      • Provide short cuts
      • Good error messages
      • Prevent errors

Advantages of Heuristic Evaluation

  • Low cost

  • Intuitive to perform

  • Requires little training

  • No advance planning required

  • Can be used early in the design process

  • Provides high-level evaluation, but inherently less repeatable than other analysis methods

Alternate Set of Heuristics

  • Learning:

    • Help and Documentation:
      • design for use without documentation
      • provide easy-to-use task-oriented documentation
    • Adopt the User’s Viewpoint:
      • speak the user’s language (avoid jargon)
      • make use of existing knowledge
    • Simple and Natural Dialogue:
      • avoid extraneous information, steps, actions
      • information should be in a logical, natural order
    • Design for Advancement:
      • provide shortcuts (quick keys, customization)

Alternate Set of Heuristics

  • Adapting to the User:

    • Provide Maps and a Trail:
    • Show the User What is (Not) Possible:
      • provide affordances to indicate what can be done
    • Intuitive Mappings:
      • design good response compatibility between controls and actions
    • Minimize Memory Load:
      • remove the need to remember across dialogues
      • provide multiple views for easy comparisons
    • Consistency in the System and to Standards:
      • make sure the same term / action has one meaning
      • when there is no better way, conform to a standard

Alternate Set of Heuristics

  • Feedback and Errors:

    • Feedback:
      • provide timely feedback about all processes and system status
    • Prevent Errors:
      • make it difficult to make errors
    • Error Messages:
      • diagnose the problem and suggest a solution
    • Clearly Marked Exits and Error Recovery:
      • make sure the user can get out of an undesirable state easily
      • design assuming that people will make errors and need to recover previous states

Heuristic Evaluation

  • Find evaluators:

    • independent usability AND application experts.
  • Apply heuristics:

    • apply each heuristic to many parts of the system.
    • apply heuristics while completing benchmark tasks.
  • Enter violations into a database or form:

    • evaluator records problems or observer takes notes on evaluator.
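“Entering violations into a database or form” can be as simple as keeping structured records and tallying them per heuristic to see where the design is weakest. A sketch; the field names, severity scale, and logged problems are illustrative assumptions:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Violation:
    heuristic: str      # which heuristic was violated
    location: str       # where in the interface it occurred
    severity: int       # assumed scale: 1 = cosmetic ... 4 = catastrophic
    note: str           # evaluator's description of the problem

# Invented example log from one evaluation session.
log = [
    Violation("Provide feedback", "save dialog", 3, "no confirmation after save"),
    Violation("Be consistent", "toolbar", 2, "two icons for the same action"),
    Violation("Provide feedback", "upload page", 4, "no progress indicator"),
]

# Tally problems per heuristic across all evaluators.
print(Counter(v.heuristic for v in log))
```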

Heuristic Evaluation

Breakout #2

  • Divide into your groups

  • Using the provided devices:

    • PART 1: perform a heuristic evaluation of the calculator while using it to solve the following expression for y.
      • y = (3)2 * (40)-6 + 1000 – 7*100 + (-100)
    • PART 2: perform a GOMS analysis of setting the time on your watch to central time.

Cognitive Walkthrough Analysis

Determining the Sequence

  • Earlier, we discussed two methods of analysis:

    • Analytically - by analyzing the options available to the user at each step—a form of Walkthrough
    • Empirically - by studying how the user performs the activity, and choosing a representative (‘benchmark’) sequence.
  • Walkthroughs play a double role: they help establish the sequence of steps and, at the same time, support analysis of how each step will be performed.

Steps in Conducting a Cognitive Walkthrough

  • Identify what the user is trying to do, and then ask the following questions repeatedly:

    • Q1: Will the correct action be made sufficiently evident to the user?
    • Q2: Will the user connect the correct action’s description with what they are trying to do?
    • Q3: Will the user interpret the system’s response to the chosen action correctly, i.e., will they know if they have made a right or wrong choice?
  • The result is to expose design flaws that may interfere with exploratory learning.

  • The method is best applied by small teams walking through the design together.
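The three questions can be applied as a checklist at every step of the task, collecting any step where the team answers “no.” A sketch; the task, its steps, and the yes/no judgments below are invented examples:

```python
QUESTIONS = [
    "Q1: Will the correct action be sufficiently evident to the user?",
    "Q2: Will the user connect the action's description with their goal?",
    "Q3: Will the user interpret the system's response correctly?",
]

def walkthrough(steps):
    """Ask the three walkthrough questions at each step; collect failures."""
    flaws = []
    for step, answers in steps:
        for question, ok in zip(QUESTIONS, answers):
            if not ok:
                flaws.append(f"{step}: {question}")
    return flaws

# Hypothetical ticket-machine task; each True/False is the team's judgment.
steps = [
    ("insert card", (True, True, True)),
    ("choose zone", (False, True, True)),   # zone button is hard to find
]
for flaw in walkthrough(steps):
    print(flaw)
```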


Collecting Usability Data

  • Field study versus laboratory study

  • Video recording

  • Concurrent verbal protocols

  • Passive observation


Prototype Stages

  • Identifying key properties. We focus on properties identified in the problem statement or in early requirements documents.

  • Developing the prototype. All we need is a prototype that

    • has the functions to support the tasks of interest
    • has the performance to allow a realistic test and
    • has enough robustness to survive each test without serious failure.
  • Experimental design. We need a small number of users, to whom we set a suitable range of benchmark tasks, chosen to exercise the prototype’s functionality as fully as possible.

  • Collecting data. Direct observation and recording of video and concurrent protocols are especially effective.

  • Data analysis. The good and bad features of the design will probably be obvious right away; we may also take simple performance measurements.

  • Drawing conclusions. The primary outcome of informal testing is a list of design changes.

