Wednesday, December 9, 2009

Usage Centered Design - Modeling Users and their Tasks

Usage Centered Design is a user interface design methodology developed by Larry Constantine and Lucy Lockwood. It has some similarities with Cooper's About Face methodology but uses abstract models instead of concrete ones. This makes it harder to start using but, in my opinion, gives great reasoning power once it's understood and used properly.

This blog post describes the first phase of Usage Centered Design - modeling users and their tasks. A later post will describe how to use those models to design a user interface.

Why model users?

The first key insight to understand is that we're not actually designing for users. We're designing to let people do things. This raises two important questions:
  1. How can we best describe the relevant aspects of the people and how they need to interact?
  2. How can we best describe the things that they have to do?
The best-known methodology for describing the people and the things they do is Cooper's About Face methodology (http://www.cooper.com). In this methodology, users are researched and then described using Personas. A Persona is a precise, fully detailed description of a fictional person who typifies an actual user. The things they do are described using Scenarios. A Scenario is a precise description of what a Persona might actually do to achieve their goal - it contains much information about the context surrounding the interaction with a system. There are plenty of real examples of both on the web:
  • http://chopsticker.com/2007/06/08/download-an-example-persona-used-in-the-design-of-a-web-application/
  • http://www.uiaccess.com/accessucd/scenarios_eg.html
From my perspective, the key criticism of this methodology is that it contains too much detail - and that the detail distracts from the information. This is because the methodology isn't trying to model the users - it's trying to describe them to the most precise level of detail. Usage Centered Design is different because it models only the details that are relevant to user interface design.

How can we model users?
This is really two questions.
  • What relationship between a user and a system do we want to model?
  • What information about that relationship do we want to capture?
In Usage Centered Design we model the role that a user plays when they are interacting with a system and we model how they will interact with the system while they are in that role. This is called a "User Role Model" and is very similar to a Use Case Model (with some additional information about the human actors involved). Constantine and Lockwood define a user role as a set of characteristic needs, expectations, and behaviors that a user can take on when interacting with a system - users play different roles at different times to achieve different goals.

For example, in a pizza company we can model the users by creating several different roles:
  • Telephone Answerer
  • Order Maker
  • Order Deliverer
  • Staff Roster Maintainer
  • etc
The really important thing about these roles is that they are independent of the actual people who work there, their job titles, and the number of staff - these roles must be played in any pizza company. Having these roles (as well as the relationships between them: an order taker versus a telephone order taker) lets us reason about the system in an abstract way - for example, determining how well a proposed solution supports each role, regardless of who happens to play it.

We also keep some additional information about each role. This can include (see the sketch after this list):
  • The context of use (front of shop, potential interruption by customers)
  • Characteristics of use (customers have a tendency to change their mind)
  • Criteria (speed, simplicity, accuracy)
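To make this concrete, here's a minimal sketch of how a user role and its attributes could be captured - the field names and the pizza-shop values are my own illustration, not Constantine and Lockwood's notation:

```python
from dataclasses import dataclass, field

@dataclass
class UserRole:
    """One role a person can play while interacting with the system."""
    name: str
    context: list[str] = field(default_factory=list)          # where and how the role is played
    characteristics: list[str] = field(default_factory=list)  # typical behavior while in the role
    criteria: list[str] = field(default_factory=list)         # what "good support" means for this role

# A hypothetical role card for the pizza company example:
order_taker = UserRole(
    name="Telephone Order Taker",
    context=["front of shop", "may be interrupted by walk-in customers"],
    characteristics=["customers tend to change their minds mid-order"],
    criteria=["speed", "simplicity", "accuracy"],
)
```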
In contrast, Cooper's methodology models stereotypical users and captures all information about those users.

How can we model the things that users do?
Essential Use Cases are used in Usage Centered Design to model the things that users do. Each use case describes a particular task a user has to do with the system (or a goal they want to achieve with the system).

Essential Use Cases are just like ordinary use cases except:
  • When writing them we have an unholy focus on the minimal, essential interaction that is necessary for the user to achieve their goal. We assume that this use case is all that the user will be using the system for - resolving the navigation between different use cases is done later.
  • They're written in a 2-column format in order to visualize the necessary interaction (a worked example follows this list).
  • We write the use case from the user's perspective - the interactions that they want to have first come first in the use case. If the user doesn't care about the order, we collapse the interactions into a single step.
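To make the 2-column format concrete, here's a sketch of what an essential use case for ordering a pizza might look like (the steps are my own illustration):

```
Essential Use Case: order a pizza

USER INTENTION                      SYSTEM RESPONSIBILITY
state the pizzas wanted
                                    confirm the order and quote the price
provide delivery details
                                    confirm the expected delivery time
```

Note that there's nothing about screens, fields, or button presses - just the minimal dialogue needed for the user to achieve their goal.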
This focus on the minimal interaction is key. It lets us determine how good our solution is with respect to any one use case - we can count the number of steps in the solution and compare them with the steps in the use case. It also lets us compare the visual layout of a solution with the order required in the use case.

In contrast, Cooper's methodology models how a particular user might actually achieve a particular goal - with all the contextual information in there.

Aren't these typical Business Analysis artifacts?
Yes, the models used in Usage Centered Design are models that Business Analysts use on a daily basis. The difference is in how the models are used. In Usage Centered Design, the analyst has to keep a constant focus on the user and how they're likely to use the system. This might involve a serious amount of user research if the tasks, and how they're achieved, are ambiguous (or, in a pizza company, it might not :).

I've found that it's pretty easy to explain the focus of a use case and a user role map to business stakeholders. I've also found that once they've "got it" the type of information I'm getting from them changes - we start talking about requirements instead of solutions!

That's all good, but how do we use them?
Stay tuned for the next update: Usage Centered Design - using the models.

The Use Case Model

The use case model is, essentially, a visual representation of all the people and computers that will interact with a system and all the tasks that the people and computers can do with the system. In other words, it's a high level view of what a system will do.

This blog post describes how to draw a use case model (it's not hard) and, much more importantly, how to reason about a system using one. Because use case models can be drawn very quickly at the beginning of a project, any reasoning we can do with a model can have a huge pay-off down the line.

What is a Use Case Model?
The thing I like most about use case models is that they're damn easy to understand. Take my most often used example: a pizza company:

[Diagram: a use case model showing two user roles, "order taker" and "order maker", connected to the use cases each can perform.]

This model tells us that there are two key people using the system - someone who takes orders and someone who makes them. The person who takes the orders can do three things - order a pizza, cancel an order, and deliver a pizza. The person who makes the orders can only do one thing - make a pizza.


For a bit of extra reasoning power, I call the "people" in the diagram "user roles". This is because, in reality, one person might play both roles. Or there might be one role played by multiple people. To get technical, a "user role" is a role that a person takes on when using the system.


Are there any guidelines for drawing one?
In fact, yes. The use case model above is pretty damn awful. Here are some guidelines and how they're broken above:
  • A user role describes a role that a particular user takes on when interacting with the system. In the diagram above, the "order taker" role can perform activities relating to manipulating orders in the system and activities related to delivering pizza. This is bad because when we come to do solution design, we want to be able to support the tasks for a user role on as few screens as possible - and having order delivery support on the same screen as entering new orders would most probably be dumb.
  • The task names like "make pizza" relate to the overarching goals of the user as opposed to the goals from a particular interaction with the system. Much more sensible names would be things like "get next order to make" and "mark order as complete". In other words, the tasks performed at the system boundary are not well defined.
  • The complete set of tasks described is insufficient to achieve the key business scenario - get the right food to the right customers within 30 minutes. To support that scenario we need tasks like "view list of waiting orders".
Here is an updated use case model with the first two points applied:

[Diagram: the revised use case model - the delivery tasks are split into a separate "order deliverer" role, and the use cases are renamed to goals at the system boundary, such as "get next order to make" and "mark order as complete".]

This is a much more precise model. We're now clearly defining the key functional requirements of the system in a much less ambiguous way. We've also identified three types of user roles that require different types of solution support. For example, the requirements of the order taker - a person who has to deal with a customer who may change their mind - are quite different from the requirements of the order maker - a person who doesn't need a high degree of interactivity in any solution. We can reasonably expect that the order taker will become an expert user of the system, as their goal is to get the orders into the system, whereas, for the order maker, using the system is secondary to their main goal of making the orders!
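In outline, the revised model boils down to a mapping from user role to use cases. As a sketch ("get next order to make" and "mark order as complete" come from the guidelines above; the other use case names are my guesses):

```python
# A sketch of the revised use case model: user role -> use cases at the system boundary.
use_case_model = {
    "order taker": ["enter new order", "amend order", "cancel order"],
    "order maker": ["get next order to make", "mark order as complete"],
    "order deliverer": ["get orders for delivery run", "mark order as delivered"],
}

# A quick sanity check: does any one role look overloaded?
for role, use_cases in use_case_model.items():
    print(f"{role}: {len(use_cases)} use cases")
```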



What kinds of reasoning can we do with them?
There are several kinds of reasoning that we can now do.

The first is identifying missing functionality. Given that our key business scenario is "get the right food to the right customers within 30 minutes", we might want to add an "order delay manager" user role and use cases that let them examine the order queue and assign staff to particular parts of the shop. We might want to add in basic cash management and accounting functions - or even inventory control functions.

The second is around scope and prioritization. With a simple quick diagram we can have a conversation with our customers about how they view the system being used. We can then talk about which functions are the most important - and if we're doing things in an agile way, we can start building those key functions straight away.


Finally, we can use the diagram as a jumping-off point for analysis. There will be business rules applying to all functions. For example, there might be a cost to the customer of canceling an order. However, to read about how best to document these rules - and the process required for describing the requirements for a function - you'll have to wait for the "use case" blog post I've got lined up!

Monday, December 7, 2009

On the importance of definitions

As analysts we're always told to make our requirements SMART - specific, measurable, attainable, realistic, and time-bound. We also keep attributes about our requirements - priority, stability, etc. However, the meanings of "high", "medium", and "low" for these attributes are often badly defined. For example, "high stability - unlikely to change" isn't a really useful definition.

This blog post looks at a real-world example where we tightly defined what a particular attribute meant. Tightening that definition reduced the project workload by 90%.

Project Background
As with lots of projects out there, this isn't one I can give you the actual details of. However, the essence of the problem is that the organization had identified a particular business scenario in which it would need to perform many additional processes. Because the scenario was unlikely to ever happen, they decided to determine and document manual processes instead of building a technology solution.

We had approximately 200 processes to determine and document.

Levels of detail?
The first step of the project was to determine the scope of all the business processes. Once we did this, we added two attributes to each process: the level of detail required for the detailed process documentation, and the level of accuracy required if the process is ever performed. Our initial definitions for the level of detail were these:
  • Low: the person performing the process requires a low level of detail in order to achieve the outcomes.
  • Medium: the person performing the process requires a medium level of detail.
  • High: the person performing the process requires a high level of detail.
It's nice - but somewhat meaningless. I started thinking - what do we really want out of a definition? After some thought I came up with this:
  • Precise
  • Objective (we were having trouble due to other ambiguous things in the project, so I didn't want any subjective definitions)
  • Testable
In retrospect, I could have gone with SMART attributes as well as SMART requirements :)

Redefined
Using those rules, I came up with these definitions:
  • Low: if the business scenario ever happens, the person responsible for the process can execute the process with appropriate subject matter expert help and no other information but the name of the process.
  • Medium: if the business scenario ever happens, the person responsible for the process can execute the process with appropriate subject matter expert help, a list of required input data, where to get the input data from, a list of required output data, and where to send the data to.
  • High: if the business scenario ever happens, the person responsible for the process can execute the process with appropriate subject matter expert help, and a complete mapped process using BPMN2.
The unexpectedly great thing about these definitions is that they focused us on what the users of the processes actually needed.

Outcomes
After a couple of weeks of debate, we determined that there were two medium-detail processes. The remainder required only a low level of detail. Rather than needing four analysts for eight months to determine and describe the processes, we finished with two analysts in a month.

(Yes, I am somewhat proud of this outcome).

Saturday, December 5, 2009

Quality Function Deployment - The Quality Chart

"The iPhone is a quality product."
Let's think about that statement for a while. Do you agree with it? Why? What does the word "quality" mean for you? Do you have any colleagues/friends who do agree with it? What does "quality" mean for them? What about your colleagues/friends who do not agree with it? What does quality mean for them? Hopefully it's clear that "quality" is a subjective aspect of a product. One person's quality is not another person's quality.

This blog post looks at quality. There's a lovely methodology that helps us determine how to build quality into a product. It's called Quality Function Deployment (QFD). I'm going to be blogging a bit about the QFD methodology and will start with the first model in QFD – the quality chart. The quality chart helps us understand what our target audience mean by quality - and what elements we need to build into a product to ensure that we are achieving quality for our audience.

Along the way we look at using the quality chart to drive product development - an interesting side topic!

The Quality Chart
The Quality Chart is, basically, a matrix linking customer-demanded quality items with the quality elements that we can deliver. It's designed to help translate a customer's perception of quality into a set of elements in the language of the company's engineers (or software developers :). Consider this example (which I just made up for a phone - a proper analysis might have quite different things):

[Matrix: demanded quality items down the left-hand side, quality elements across the top, with marks showing which elements contribute to which demands.]

The items in the "demanded quality" side of the matrix are things that have come up in customer research. To determine the quality elements that the product must have, we first brainstorm all the elements (the top of the matrix) that could contribute to the demanded quality, and then analyze each demand to define exactly which elements are necessary to deliver it.

We then rearrange this matrix to be explicit about which elements support which demanded characteristics.
  • finish does not degrade - scratch resistance, waterproof
  • easy to find features - consistent navigation; unambiguous naming scheme
  • etc, etc, etc.
This view of the information tells us exactly which elements we need to deliver a particular attribute. The advantage of this view is that each demanded quality characteristic appears only once in the list - it may appear multiple times in the matrix due to the decomposition.
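As a minimal sketch, using the made-up phone items from above: the rearranged view is just a mapping from demanded quality to quality elements, and inverting that mapping gives us the impact view - which demands are hurt if we drop a particular element:

```python
from collections import defaultdict

# Rearranged view: demanded quality -> quality elements that deliver it
quality_chart = {
    "finish does not degrade": ["scratch resistance", "waterproofing"],
    "easy to find features": ["consistent navigation", "unambiguous naming scheme"],
}

# Inverted view: quality element -> demanded qualities it supports.
# Useful for asking "what breaks if we drop this element?"
element_impacts = defaultdict(list)
for demand, elements in quality_chart.items():
    for element in elements:
        element_impacts[element].append(demand)
```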


Product Selection, Creation, and Modification

A really useful thing about the demanded quality items is that we can group sets of items into products. For example, one product might have demanded quality items 1, 2, 3.1, and 4.1.2. Another product might have 1, 2, 3, 4.1.3. This lets us carve up our products into a user centered set of demanded qualities - and then determine the quality elements we will have to build into each product. As our understanding of demanded quality and quality elements changes, we can: update the matrix, determine the impacts on the quality elements for each product, and produce new versions of each product for the next release.

This process of updating our quality chart provides a product delivery cycle based on the demanded qualities of our products - quite useful, because as customer expectations (demanded quality) change, our products' features (quality elements) change to match.
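A sketch of this product-carving idea, using the same made-up structure: choose the demanded quality items for each product, and the quality elements it must contain fall out as a union:

```python
quality_chart = {
    "finish does not degrade": ["scratch resistance", "waterproofing"],
    "easy to find features": ["consistent navigation", "unambiguous naming scheme"],
}

def elements_for_product(selected_demands, chart):
    """Union of all quality elements needed to deliver the selected demands."""
    needed = set()
    for demand in selected_demands:
        needed.update(chart.get(demand, []))
    return needed

# Two hypothetical products carved out of the same chart:
budget_phone = elements_for_product(["finish does not degrade"], quality_chart)
premium_phone = elements_for_product(
    ["finish does not degrade", "easy to find features"], quality_chart
)
```

When the chart changes, re-running this for each product tells us which quality elements each next release needs to add or drop.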

Decomposition of Demanded Quality
Demanded quality is what our customers/users/stakeholders want. We produce this through customer workshops, interviews, surveys, feedback, market research, etc, etc, etc. The key things about demanded quality items are:
  • Items at the 3rd level must be SMART (specific, measurable, achievable, realistic, time-bound).
  • Items should not be restricted to a particular product. Let the people determine the scope of the items.
Personas and Scenarios (as espoused by Cooper) might also be very useful in determining demanded quality.

Decomposition of Quality Elements

Quality elements are, essentially, what we can build into a product. The experience that the product team has will impact the comprehensiveness of the set of quality elements. Because creating a comprehensive list is hard, it's more important to prioritize elements than attempt comprehensive analysis.

Luckily, ISO has defined six quality elements for software (ISO/IEC 9126):

  • Functionality: are the required functions available, including interoperability and security
  • Reliability: maturity, fault tolerance, and recoverability
  • Usability: how easy it is to understand, learn, and operate the software system
  • Efficiency: performance and resource behaviour
  • Maintainability: how easy is it to modify the software
  • Portability: can the software easily be transferred to another environment, including installability
We can (probably) produce a standard 3-level decomposition for each of these. For example, usability might break down at the 2nd level to:
  • easy to understand
  • easy to learn
  • avoids errors
  • guessable (or 'intuitive')
  • user experience
  • etc
then we can decompose each of those further - to a level that is clearly measurable.
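As a sketch, such a decomposition is just a three-level tree whose leaves are directly measurable - the breakdown and the measures below are invented for illustration:

```python
# A hypothetical 3-level decomposition of the "usability" quality element.
# The leaves are phrased so they can be measured directly.
usability = {
    "easy to learn": {
        "time to first success": "a new user completes their first task within 10 minutes",
        "training required": "core tasks need no formal training",
    },
    "avoids errors": {
        "input validation": "invalid input is rejected with a corrective message",
        "reversibility": "any destructive action can be undone in one step",
    },
}
```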

Is this a thinking tool?
For any product of reasonable complexity, the quality matrix is going to be large. This means that an analyst won't be able to cognitively understand and reason about the entire quality matrix. However, it does give us these abilities:
  • Confidence that the features being built/designed will achieve the demanded quality
  • Ability to perform an impact analysis of a particular feature not being delivered
  • Ability to prioritize quality elements for design and development.
Those are all very good things. However, because of the lack of abstraction and ability to cognitively manipulate the matrix (like we can do with use cases and use case models), I call this a reasoning tool - it helps us reason about quality and determine impacts.

Thursday, December 3, 2009

Web 2.0

This post is a musing on the future direction of Web 2.0 concepts and ideas. We look at what Web 2.0 means to me and the impacts of this on organizations. Finally, I make some random predictions about what Web 3.0 will be.

"A requirement of our website is that it is written in Web 2.0"

I had this conversation with a C-level executive at a previous company. He'd just been out talking with software vendors and was now all into Web 2.0. Unfortunately, he thought that it meant JavaScript-like interactivity as opposed to a complete change in the way the company interacts with its customers. For me, the difference between Web 1.0 and Web 2.0 is quite simple:
  • Web 1.0 is about providing information to a passive user of your site
  • Web 2.0 is about letting users of your site engage with each other.
Yes, it's a bit simplistic, but it'll do for now.

So what's new?

Traditionally, communication in companies happens in one of three ways. Upward communication is when people pass information freely to their managers. Downward communication is where the managers pass information freely to their direct reports. Gossip is informal sideways communication because the two other channels aren't working properly.

In a traditional corporate organization, Web 2.0 technologies have the potential to formalize this informal communication. This makes the "water cooler conversations" accessible to everyone - including senior management. The impact this can have on an organization can vary considerably - however, I believe that the organizational culture is a good predictor of the positive or negative nature of this change (disclaimer: I used to work for that company).

Really?

So have we really formalized informal communications - or have we simply increased the capacity for upward and downward communication (both of which are correlated with constructive cultures)? One of the key things about the "water cooler conversations" is the transient nature of the communication. We can say things - or imply things - that are not on any record. Consider: the raised eyebrow that tells a whole story about a team member; the sharp intake of breath that warns you to be wary of a particular stakeholder; or the slumped shoulders when you ask someone how work is going.

Perhaps Web 3.0 will be about communicating this transient informal information.

Scaling Things Down - Enterprise Architecture


Enterprise Architecture is often sold as 'a way of understanding your entire business - from the value you're providing to your customers to each piece of hardware that's used in delivering that value.' While the traceability of customer value through software and finally to the hardware that supports the delivery of that value can be very powerful, it's very daunting starting an enterprise architectural project - the benefits of the work might not be seen for several years.

This blog post describes how I used TOGAF - a framework for creating enterprise architecture - on a tightly constrained part of the enterprise. This delivered business value in weeks instead of years.

TOGAF? What's that?

TOGAF is, well, complex and comprehensive. In its simplified essence, it consists of a framework to describe enterprise architecture, a method for developing the architecture, and a repository for reusing pieces of existing architecture. Of most interest, and key to understanding TOGAF, is the framework for describing the architecture. It consists of four deeply linked architectural types:
  1. The Business Architecture (who does what in the organisation).
  2. The Application Architecture (the set of logical and physical applications that support the people who do stuff)
  3. The Data Architecture (the logical and physical data that is used by the applications)
  4. The Infrastructure Architecture (the hardware that stores the data, runs the application, and transfers data from one place to another).
Over the top of these four pillars of architecture is the 'enterprise architecture'. This is, typically, a set of business scenarios and capabilities.
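To make the "deeply linked" point concrete, here's a minimal sketch - my own illustration, not TOGAF notation - of the four architectures as linked records, which is what makes it possible to trace customer value all the way down to hardware:

```python
from dataclasses import dataclass, field

@dataclass
class Server:                 # infrastructure architecture
    name: str

@dataclass
class DataStore:              # data architecture
    name: str
    hosted_on: Server

@dataclass
class Application:            # application architecture
    name: str
    uses: list[DataStore] = field(default_factory=list)

@dataclass
class BusinessProcess:        # business architecture
    name: str
    supported_by: list[Application] = field(default_factory=list)

def servers_for(process: BusinessProcess) -> set[str]:
    """Trace a business process down to the servers it ultimately depends on."""
    return {store.hosted_on.name
            for app in process.supported_by
            for store in app.uses}

# Hypothetical example: which hardware does monthly invoicing depend on?
db = Server("db-01")
orders = DataStore("orders", hosted_on=db)
billing = Application("billing", uses=[orders])
invoicing = BusinessProcess("monthly invoicing", supported_by=[billing])
print(servers_for(invoicing))  # {'db-01'}
```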

That seems like a lot of work!

Scared yet? It is complex. Business Analysts are always at risk of getting themselves into analysis paralysis, and this is the perfect tool to do exactly that. Luckily, TOGAF provides two methods of avoiding it: scope control and detail control. Scope control is about managing which parts of the organization you are examining. Detail control is about managing the level of detail you need to describe in order to deliver value.

This is an important point so I'll repeat: we must not forget that the purpose of doing architectural work is to deliver value to the organization.

Case Study

As it happened, I had a lovely piece of work fall into my lap. The Business has a semi-automated process that takes weeks to perform - and it has to happen every month, during peak staff workloads. They want full automation of the process. This will reduce demands on key staff at peak times, deliver greater value to customers (as the customer will get timely reports), and increase team morale (as the team doesn't particularly like doing it). As they've been burnt by development costs before (who hasn't!), they wanted a gap analysis of what would need to be done to move to full automation.

Luckily, TOGAF is especially good at gap analyses. The development method is quite clear about creating a vision, defining the current architectural state, defining a future architectural state, and determining a path to move from the current state to the future state. However, telling stakeholders that you're doing architectural work to understand an existing process doesn't go down well - so I didn't tell them.
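At its core, the gap analysis is a set difference between the two architectural states. A deliberately naive sketch, with made-up items:

```python
# Hypothetical architectural elements in each state
current_state = {"manual data extract", "spreadsheet consolidation", "manual report build"}
future_state = {"scheduled data extract", "automated consolidation", "automated report build"}

gaps = future_state - current_state      # what we need to build
retired = current_state - future_state   # what full automation replaces
```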

To start the analysis I first decided on its scope. The breadth of the analysis was tightly constrained to the process - the production of a particular report for our customers. The depth was less well constrained. I had to leave it as "enough depth that we can determine the gaps, and the cost/benefit of closing each one". With that scope in mind, I dived in, interviewed, workshopped, and did the normal BA-type things. The resulting report was comprehensive, detailed, and had all the information anyone would ever want. To make it readable I added a chapter of high-level results and recommendations. I don't think anyone from the Business read beyond that.


TOGAF is a thinking tool
Clearly TOGAF can scale down successfully to be useful on smaller pieces of work. The more interesting thing was how using it on a smaller piece of work clarified my thinking. Separating the way the business works from how the technology works clarified my understanding of both, let me tap into expert knowledge at the right time, and helped me ask the right questions. It also gave me a nice report structure for describing a very messy process (the business work-arounds due to technology were nasty!).

On a final note, I found TOGAF a very good thinking tool for the larger problems that can't be modeled in a business process model or in a use case model. If you think you've mastered both those thinking/analysis techniques, then give TOGAF a look now.