Evaluating Software Architectures: Methods and Case Studies

Paul Clements, Rick Kazman, Mark Klein
October 2001
Product detail

Price (CHF): 88.90
Availability: approx. 7-9 days


The foundation of any software system is its architecture. Using this book, you can evaluate every aspect of an architecture in advance, at remarkably low cost, identifying changes that can dramatically improve a system's performance, security, reliability, and maintainability. As the practice of software architecture has matured, it has become possible to identify causal connections between architectural design decisions and the qualities and properties of the systems that follow from them. This book shows how, offering step-by-step guidance as well as detailed practical examples, complete with sample artifacts reflective of those that evaluators will encounter. The techniques presented here apply not only to software architectures but also to system architectures encompassing computing hardware, networking equipment, and other elements. For all software architects, software engineers, developers, IT managers, and others responsible for creating, evaluating, or implementing software architectures.

Table of Contents

List of Figures.

List of Tables.



Reader's Guide.

1. What Is Software Architecture?

Architecture as a Vehicle for Communication among Stakeholders.

Architecture and Its Effects on Stakeholders.

Architectural Views.

Architecture Description Languages.

Architecture as the Manifestation of the Earliest Design Decisions.

Architectural Styles.

Architecture as a Reusable, Transferable Abstraction of a System.


For Further Reading.

Discussion Questions.

2. Evaluating a Software Architecture.

Why Evaluate an Architecture?

When Can an Architecture Be Evaluated?

Who's Involved?

What Result Does an Architecture Evaluation Produce?

For What Qualities Can We Evaluate an Architecture?

Why Are Quality Attributes Too Vague for Analysis?

What Are the Outputs of an Architecture Evaluation?

Outputs from the ATAM, the SAAM, and ARID.

Outputs Only from the ATAM.

What Are the Benefits and Costs of Performing an Architecture Evaluation?

For Further Reading.

Discussion Questions.

3. The ATAM: A Method for Architecture Evaluation.

Summary of the ATAM Steps.

Detailed Description of the ATAM Steps.

Step 1: Present the ATAM.

Step 2: Present the Business Drivers.

Step 3: Present the Architecture.

Step 4: Identify the Architectural Approaches.

Step 5: Generate the Quality Attribute Utility Tree.

Step 6: Analyze the Architectural Approaches.

Step 7: Brainstorm and Prioritize Scenarios.

Step 8: Analyze the Architectural Approaches.

Step 9: Present the Results.

The Phases of the ATAM.

Phase 0 Activities.

Phase 1 Activities.

Phase 2 Activities.

Phase 3 Activities.

For Further Reading.

Discussion Questions.

4. The Battlefield Control System: The First Case Study in Applying the ATAM.


Phase 1.

Step 1: Present the ATAM.

Step 2: Present the Business Drivers.

Step 3: Present the Architecture.

Step 4: Identify the Architectural Approaches.

Step 5: Generate the Quality Attribute Utility Tree.

Step 6: Analyze the Architectural Approaches.

Phase 2.

Step 7: Brainstorm and Prioritize Scenarios.

Step 8: Analyze the Architectural Approaches.

Step 9: Present the Results.

Results of the BCS Evaluation.



Sensitivities and Tradeoffs.

Architectural Risks.


Discussion Questions.

5. Understanding Quality Attributes.

Quality Attribute Characterizations.




Characterizations Inspire Questions.

Using Quality Attribute Characterizations in the ATAM.

Attribute-Based Architectural Styles.


For Further Reading.

Discussion Questions.

6. A Case Study in Applying the ATAM.


Phase 0: Partnership and Preparation.

Phase 0, Step 1: Present the ATAM.

Phase 0, Step 2: Describe Candidate System.

Phase 0, Step 3: Make a Go/No-Go Decision.

Phase 0, Step 4: Negotiate the Statement of Work.

Phase 0, Step 5: Form the Core Evaluation Team.

Phase 0, Step 6: Hold Evaluation Team Kick-off Meeting.

Phase 0, Step 7: Prepare for Phase 1.

Phase 0, Step 8: Review the Architecture.

Phase 1: Initial Evaluation.

Phase 1, Step 1: Present the ATAM.

Phase 1, Step 2: Present Business Drivers.

Phase 1, Step 3: Present the Architecture.

Phase 1, Step 4: Identify Architectural Approaches.

Phase 1, Step 5: Generate Quality Attribute Utility Tree.

Phase 1, Step 6: Analyze the Architectural Approaches.

Hiatus between Phase 1 and Phase 2.

Phase 2: Complete Evaluation.

Phase 2, Step 0: Prepare for Phase 2.

Phase 2, Steps 1-6.

Phase 2, Step 7: Brainstorm and Prioritize Scenarios.

Phase 2, Step 8: Analyze Architectural Approaches.

Phase 2, Step 9: Present Results.

Phase 3: Follow-Up.

Phase 3, Step 1: Produce the Final Report.

Phase 3, Step 2: Hold the Postmortem Meeting.

Phase 3, Step 3: Build Portfolio and Update Artifact Repositories.

For Further Reading.

Discussion Questions.

7. Using the SAAM to Evaluate an Example Architecture.

Overview of the SAAM.

Inputs to a SAAM Evaluation.

Outputs from a SAAM Evaluation.

Steps of a SAAM Evaluation.

Step 1: Develop Scenarios.

Step 2: Describe the Architecture(s).

Step 3: Classify and Prioritize the Scenarios.

Step 4: Individually Evaluate Indirect Scenarios.

Step 5: Assess Scenario Interactions.

Step 6: Create the Overall Evaluation.

A Sample SAAM Agenda.

A SAAM Case Study.

ATAT System Overview.

Step 1: Develop Scenarios, First Iteration.

Step 2: Describe the Architecture(s), First Iteration.

Step 1: Develop Scenarios, Second Iteration.

Step 2: Describe the Architecture(s), Second Iteration.

Step 3: Classify and Prioritize the Scenarios.

Step 4: Individually Evaluate Indirect Scenarios.

Step 5: Assess Scenario Interactions.

Step 6: Create the Overall Evaluation-Results and Recommendations.


For Further Reading.

Discussion Questions.

8. ARID: An Evaluation Method for Partial Architectures.

Active Design Reviews.


The Steps of ARID.

Phase 1: Rehearsal.

Phase 2: Review.

A Case Study in Applying ARID.

Carrying Out the Steps.

Results of the Exercise.


For Further Reading.

Discussion Questions.

9. Comparing Software Architecture Evaluation Methods.

Questioning Techniques.

Questionnaires and Checklists.

Scenarios and Scenario-Based Methods.

Measuring Techniques.


Simulations, Prototypes, and Experiments.

Rate-Monotonic Analysis.

Automated Tools and Architecture Description Languages.

Hybrid Techniques.

Software Performance Engineering.



For Further Reading.

Discussion Questions.

10. Growing an Architecture Evaluation Capability in Your Organization.

Building Organizational Buy-in.

Growing a Pool of Evaluators.

Establishing a Corporate Memory.

Cost and Benefit Data.

Method Guidance.

Reusable Artifacts.


Discussion Questions.

11. Conclusions.

You Are Now Ready!

What Methods Have You Seen?

Why Evaluate Architectures?

Why Does the ATAM Work?

A Parting Message.

Appendix A: An Example Attribute-Based Architectural Style.

Problem Description.


Architectural Style.



Priority Assignment.

Priority Inversion.

Blocking Time.

For Further Reading.


Index.

Back Cover

Praise for Evaluating Software Architectures

“The architecture of complex software or systems is a collection of hard decisions that are very expensive to change. Successful product development and evolution depend on making the right architectural choices. Can you afford not to identify and not to evaluate these choices? The authors of this book are experts in software architecture and its evaluation. They collected a wealth of ideas and experience in a well-organized and accessible form. If you are involved in the development of complex systems or software, you will find this book an invaluable guide for establishing and improving architecture evaluation practice in your organization.”

         -Alexander Ran, Principal Scientist of Software Architecture, Nokia

“Software engineers must own this book. It is a well-written guide to the steps for evaluating software architecture. It argues for the inclusion of architecture evaluation and review as a standard part of the software development lifecycle. It introduces some new and innovative methods for analyzing important architecture characteristics, like extensibility, portability, and reliability. I believe these methods will become new engineering cornerstones for creating good software systems.”

         -Joe Maranzano, AT&T Bell Labs Fellow in Software Architecture (1990), and former head of the Bell Labs Software Technology Center

“Experience and teamwork are the only approaches I know of to deliver products faster, cheaper, and yet to delight your customers. In their first book, Software Architecture in Practice, Paul and Rick (and Len Bass) helped me match my experience with theory. Their invaluable approaches and case studies changed my practice and the way I proceed to design systems and software architectures. This second book, with Mark, covers what I will look at before I feel good about an architecture. It is about how to tap other people's experience and feedback to produce an improved outcome. I have used many of the concepts explained in this book for my customers' benefit. Using this book, you (architects, developers, and managers) will develop a common language and practice to team up and deliver more successful products.”

         -Bertrand Salle, lead architect with a major telecommunications company

“If architecture is the foundation of system construction, architectural evaluation is part of the foundation of getting to a 'good' architecture. In this book, the authors apply their considerable expertise to one of the most pressing issues in systems development today: how to evaluate an architecture prior to system construction to ascertain its feasibility and suitability to the system of interest. The book provides a practical guide to architecture evaluation using three contemporary evaluation methods. It should prove valuable to practitioners, and as a basis for the evolution of architectural evaluation as an engineering practice.”

         -Rich Hilliard, Chief Technical Officer, ConsentCache, Inc., and technical editor, IEEE Recommended Practice for Architectural Description of Software-Intensive Systems

“Too many systems have performance and other problems caused by an inappropriate architecture. Thus, problems are introduced early but are usually detected too late: when the deadline is near or, even worse, after the problem makes the headlines. Remedies lead to missed schedules, cost overruns, missed market windows, damaged customer relations, and many other difficulties. It is easy to prevent these problems by evaluating the architecture choices early and selecting an appropriate one.”

         -Connie U. Smith, Ph.D., principal consultant, Performance Engineering Services Division, L&S Computer Technology, Inc., and coauthor of the new book, Performance Solutions: A Practical Guide to Creating Responsive, Scalable Software

“The ATAM, an evaluation method described in this book, is the natural quality gate through which a high-level design should pass before a detailed design project is initiated. Why use the ATAM to evaluate an architecture? Mitigation of design risk is a major reason, but more importantly, the ATAM provides an interactive vehicle that can give key development and user stakeholders architectural visibility, visibility that can lead to an important 'early buy-in.'”

         -Rich Zebrowski, Software Technology Manager, Motorola, Inc.

“Caterpillar's experience with architecture reviews includes the SAAM, ATAM, ARID, and ADR evaluation methods described in this book, the first three in detail. These reviews ensured that the needs of the user community were being met, and they exposed the architecture to others in the organization, helping with understanding and organizational buy-in. The SAAM- and ATAM-based evaluations worked well to expose the architecture early in the development cycle to a broad range of people. The ARID- and ADR-based evaluations facilitated the exposure of technical details of the architecture later in the development cycle. As the architect of the pilot project for ARID, I observed that this review even served as an architecture training session before the architecture was fully documented.”

         -Lee R. DenBraber, former Lead Software Architect, Caterpillar, Inc.

“We've heard all the management hype about harnessing the innovative creativity of our teams, establishing integrated customer-developer-product teams, and better targeting our systems to meet end user needs. The ATAM techniques described in this book give technical managers, system architects, and engineers proven tools for breaking down the communications barriers that impede our ability to realize these goals. We have successfully integrated the ATAM techniques throughout our lifecycle, including development and maintenance, and have found that they provide the strong technical basis we need to evaluate the many difficult trades required by a system as large as EOSDIS.”

         -Mike Moore, Deputy Manager, Science Systems Development Office, Earth Observing System Data Information System (EOSDIS) Project, NASA Goddard Space Flight Center

“If you know how difficult architecture reviews are, you will be amazed how effective ATAM evaluations can be. For example, an ATAM evaluation we conducted on an important software product line identified a major architectural risk, which we subsequently were able to avoid, a benefit we expect to continue seeing. Moreover, ATAM techniques have enabled us to explain such risks to stakeholders far more clearly than by any other review method.”

         -Stefan Ferber, Corporate Research, Robert Bosch GmbH

Drawing on clearly identified connections between architecture design decisions and resulting software properties, this book describes systematic methods for evaluating software architectures and applies them to real-life cases. It shows you how such evaluation can substantially reduce risk while adding remarkably little expense and time to the development effort (in most cases, no more than a few days). Evaluating Software Architectures introduces the conceptual background for architecture evaluation and provides a step-by-step guide to the process based on numerous evaluations performed in government and industry.

In particular, the book presents three important evaluation methods:

  • Architecture Tradeoff Analysis Method (ATAM)
  • Software Architecture Analysis Method (SAAM)
  • Active Reviews for Intermediate Designs (ARID)

Detailed case studies demonstrate the value and practical application of these methods to real-world systems, and sidebars throughout the book provide interesting background and hands-on tips from the trenches.

All software engineers should know how to carry out software architecture evaluations. Evaluating Software Architectures is your chance to get up to speed quickly by learning from the experience of others.


Paul Clements is a senior member of the technical staff at the SEI, where he works on software architecture and product line engineering. He is the author of five books and more than three dozen papers on these and other topics.

Rick Kazman is a senior member of the technical staff at the SEI. He is also an Associate Professor at the University of Hawaii. He is the author of two books, editor of two more, and has written more than seventy papers on software engineering and related topics.

Mark Klein is a senior member of the technical staff at the SEI. He is an adjunct professor in the Master of Software Engineering program at Carnegie Mellon and a coauthor of A Practitioner's Handbook for Real-Time Analysis: Guide to Rate Monotonic Analysis for Real-Time Systems (Kluwer Academic Publishers, 1993).