The User Experience of Enterprise Software Matters

Envision the Future

The role UX professionals play

A column by Paul J. Sherman
December 15, 2008

Over the past twenty years, the field of user experience has been fortunate. Software and hardware product organizations increasingly have adopted user-centered design methods such as contextual user research, usability testing, and iterative interaction design. In large part, this has occurred because the market has demanded it. More than ever, good interaction design and high usability are part of the price of entry to markets.

However, there’s one area that I believe has lagged behind: the enterprise software space. I can’t tell you how many frustratingly unusable enterprise Web applications I’ve encountered during my 12-plus years in corporate America. As important as the user experience of enterprise software is to a business’s success, why isn’t its assessment usually a factor in technology selection?


Just as the mass market has demanded and is receiving more usable products, so should businesses demand that their technology vendors make their software easier to learn, more efficient to use, and easier to remember. But for a variety of reasons, many organizations don’t even know how to make this demand.

Consider this column a call to action to organizations that buy enterprise-level software. Here’s what I have to say to them:

Your technology selection processes are incomplete: you’re not assessing the usability of the technology you buy. Because of this failure, you’re not only incurring huge hidden costs, you’re also letting enterprise technology vendors get away with building products with poor usability.

The rest of this column explains why this happens and what enterprise technology purchasers can do about it.

Enterprise Software

Enterprise software products are complex, powerful tools. Their complexity is one of the reasons businesses sometimes fail to fully realize the expected return on investment from these products.

For the employees who must use these applications, this complexity poses a considerable challenge. When an organization deploys an application, it expects users to learn the new system, integrate it into their existing work processes, and become proficient enough to let the organization realize the system’s full benefits. Far too often, however, employees find these new systems hard to learn, hard to master, and difficult to integrate into existing processes.

Enterprise software, which broadly encompasses functions such as enterprise resource planning and management, customer relationship management, supply chain management, network management, project portfolio management, and business intelligence, is a multi-billion-dollar-per-year industry. Well-known vendors include BMC, Oracle, SAP, Siebel, and telecommunications equipment manufacturers such as Nortel and Cisco, to name a few.

Most Fortune 500 companies have multiple enterprise software products installed, and many mid-sized businesses are either actively considering or have already implemented enterprise solutions. As the market has matured and vendors have searched for new growth opportunities, enterprise software developers have made solutions available to even small businesses with fewer than 100 people.

To a growing company, enterprise software promises to convey benefits in a variety of areas, for example:

  • Centralizing customer information from sales, marketing, customer service, and support to improve customer service and enable better prospect identification.
  • Identifying and managing enterprise-wide resources an organization needs to receive, process, and track orders.
  • Gathering, storing, analyzing, and providing access to enterprise data that lets users make better business decisions.
  • Consolidating information relating to project planning, tracking, and resource management for multiple projects, enabling an enterprise-wide view of project scope, resource allocation, risk, cost, and performance.
  • Configuring and maintaining the performance of network resources, managing and recovering from network faults, managing network traffic for billing and accounting purposes, and ensuring the security of the network.

This is complex stuff.

Consequences of Poor Usability

From the CTO’s and IT Director’s perspectives, these promises assume the internal user groups can and will learn the new systems and incorporate them into their work processes. But these outcomes are far from assured. Some of the problems and pitfalls businesses encounter include the following:

  • Some businesses find that their employees’ productivity actually decreases, because common or critical processes take longer using the new application.
  • Others fail to realize an application’s benefits, because users vote with their fingers and don’t adopt the new system.

Businesses can also experience reduced employee morale and increased turnover as a result of the imposition of new systems and processes. There will always be some employees who resist change in any form. However, if a business mandates process changes and deploys systems users perceive as difficult to learn, use, and remember, the user population will see it as a change for the worse and resist. In such a situation, employee morale declines, and those who are sufficiently disgruntled may even leave if other opportunities present themselves.

IT (Information Technology) organizations responsible for supporting an enterprise application can find themselves overwhelmed as they struggle under the unexpectedly high numbers of support requests that often accompany an application rollout. As anyone who has worked a help desk knows, rollout day for a complex application often seems like a perfect storm for level-one support staff.

Why do such scenarios play out in organization after organization? I argue that two factors drive these kinds of outcomes:

  1. Enterprise software developers don’t pay sufficient attention to the specific wants and needs of their internal user groups.
  2. Enterprises don’t hold their vendors to high enough standards for application learnability, usability, and efficiency.

To illustrate the real-world costs of these two failures, I’ll relate two case studies. These are true stories. They happened in organizations I’ve worked for—though I’ve obscured their identities to protect the guilty.

Case 1: The Business Intelligence System

The Technical Support group for a major financial software application was responsible for generating weekly and monthly reports on calls to their help desk. The reports summarized statistics for management, including call burden, call reasons, call resolution time, hold time, post-call work time, and other statistics that let them track costs and the productivity of support representatives. The process of producing the reports was mostly manual. The data resided in three separate systems, requiring data extraction through complex—though usually repetitive—queries, and they generated and formatted reports using a spreadsheet application.

Other groups within the organization—including Product Management, Development, User-Centered Design, and Quality Assurance—frequently requested custom reports and raw data from this amalgam of systems. Trying to deliver their periodic reports and comply with outside requests sometimes overwhelmed the people in the Technical Support and IT groups.

The business decided to deploy an enterprise-level application that would enable Technical Support to centralize the data, automate data retrieval and report production, and provide outside groups with self-service capabilities that would let them meet their own data and reporting needs.

Upon deploying the application, the business discovered that it took five to ten times as long for the Support and IT staffs to extract the data and produce the reports—even after training had made them proficient in the application’s use. Both groups discovered that the Web application’s user interface comprised a reporting wizard that required users to drag and drop date ranges, field names, and other delimiters into a form. Complex queries that had previously taken five to ten seconds to type now took two or three minutes to drag, drop, delimit, and run. The application forced users—who had considerable technical abilities and expert-level knowledge—to interact with the system as if they were neophytes.

Furthermore, the application output the reports as static HTML that users could not reformat, rotate, or otherwise adjust. Although the simplistic process of query-building proved easier for occasional users from outside groups, their inability to manipulate the output proved frustrating. As a result, the Support and IT staffs were again forced into a bottleneck role, even more laboriously creating reports to comply with outside requests. Within six months, the Technical Support group had brought their old systems back online and reverted to their previous process.

Case 2: The Expense Reporting System

A large telecommunications equipment manufacturer decided to move from spreadsheet-based expense reporting to a system that let users input expense information directly into the company’s accounting system. The application promised to eliminate manual steps—including double entry of data—remove data-entry bottlenecks, and streamline the accounting process.

Employees at this company had an inkling that the new system might pose difficulties when, two weeks prior to the rollout date, HR (Human Resources) disseminated a 50-slide training presentation to all employees. Next, all employees learned that they must complete a mandatory, hour-and-a-half-long desktop-video training session that HR and IT had developed jointly.

The productivity lost to training employees to use the new system was a significant expense, as were the projects that were necessary to produce the training materials. However, the loss in productivity and expense the organization suffered when the system actually went live dwarfed the up-front training costs.

Everything about the application’s user experience was problematic. The process employees followed to enter, describe, and categorize expenses was confusing, unnecessarily long, and ill thought out. The data-entry screens were poorly designed. The terminology the application used, while familiar to finance and accounting professionals, was opaque to most other employees. The application presented information in illogical formats. For example, it forced users to scroll through 200-item drop-down lists of accounting categories and cost centers, whose order made sense only to those in the Finance department. Users could not re-sort lists in the user interface into alphabetical or numerical order.

Successfully submitting an expense report, which had previously taken only a few minutes, was now a half-hour undertaking fraught with error and frustration. As a result, productivity and morale suffered. Worse, compliance waned and systemic errors propagated through the accounting system. Some employees simply stopped expensing small purchases or assigned expenses to accounts that appeared near the top of long account lists.

I’m sure my experiences are not unique, and these types of fiascos occur in enterprises worldwide. Why? Let’s explore the dynamics of technology selection.

The Vendor’s Lament: If You Build It, They Will Complain

With their page-spanning feature matrixes, long lists of supported platforms and databases, ROI calculators, and downloadable case studies, enterprise application providers make fantastic promises. However, many enterprise software development processes don’t adequately incorporate users’ specific wants and needs—at best, waiting to consider them until it is too late in the development process; at worst, failing to do so at all.

So why aren’t users’ wants and needs considered earlier in the development lifecycle? There are many reasons, but I can boil them down to this short list:

  • Vendors build applications to satisfy their own perceptions of users’ needs, not users’ actual needs.
  • Engineering groups own too much responsibility for user interface design.
  • Featuritis makes applications unnecessarily complex.

The Vendor Is Not the User

Often, vendors build applications without incorporating the perspectives of actual user groups. Product Managers, Requirements Analysts, and Engineers make assumptions about users instead of observing users and asking them about their wants and needs.

Gathering information about users’ wants and needs is not difficult, but it’s important to do it right: it’s easy to do user research poorly or incompletely, resulting in a biased or incomplete picture of users’ requirements.

The key to developing an accurate picture of user needs is to distinguish the main user groups—and how they differ from one another—then identify the users’ skills, tasks, and needs according to the roles they assume while using the application. It’s also helpful to test the usability of conceptual prototypes with actual users from the target customer groups. In this way, you can test early concepts and iterate designs very inexpensively.

Engineers Are Not the Users

Many development organizations give engineers responsibility for transforming requirements into user interactions, process flows, and screen designs. What results is a user interface that reflects engineers’ mental models. Unfortunately, their mental models for how things work differ drastically from those of users. Consider this example:

  • The engineer’s perspective: It’s a state-persistent container for database objects that requires authentication and setting cookies.
  • The user’s mental model: It’s a shopping cart.

Though both are valid, these mental models are very different indeed. And what makes sense to the engineer would be completely incomprehensible to the user.
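To make the contrast concrete, here is a minimal, purely hypothetical sketch—the class, names, and cookie handling are illustrative, not drawn from any real product. The code is literally the engineer’s model: a state-persistent container for database objects, gated by an authenticated session cookie. The user, meanwhile, just “puts things in the cart.”

```python
# Hypothetical sketch: one artifact, two mental models.

class ShoppingCart:
    """Engineer's model: a state-persistent container for database
    objects, keyed by an authenticated session cookie."""

    def __init__(self, session_cookie: str):
        if not session_cookie:
            raise PermissionError("authentication required")
        self._session_cookie = session_cookie
        self._items: dict[str, int] = {}  # product_id -> quantity

    def add(self, product_id: str, quantity: int = 1) -> None:
        """User's model: 'I put something in my cart.'"""
        self._items[product_id] = self._items.get(product_id, 0) + quantity

    def items(self) -> dict[str, int]:
        return dict(self._items)

# The user never thinks about sessions or persistence:
cart = ShoppingCart(session_cookie="abc123")
cart.add("sku-42")
cart.add("sku-42")
print(cart.items())  # {'sku-42': 2}
```

A user interface built around the first vocabulary—sessions, containers, persistence—fails the user whose vocabulary is the second.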

Featuritis: The Bane of Users

Featuritis is a pernicious malady. Both vendors and purchasers contribute to the persistence of this disorder. Here’s an example of what typically happens on the vendor side of the equation:

Competitor A has these five features and competitor B has these ten other features, so we’d better put them all in our next release.

This kitchen sink approach to product definition leads to a mishmash of features, with no organizing principle or overarching information architecture.

Technology Selection in Enterprises

Purchasers buy applications with poor usability, because they don’t know any better. The evaluation and decision-making process for the purchase of enterprise applications usually looks like this:

  • An organization identifies the need for a better, more scalable, or faster process.
  • Management makes the business case for deploying a new application.
  • The IS (Information Systems) organization sets technical and feature requirements that are often informed—in a somewhat circular fashion—by vendors’ application feature lists.
  • Purchasing solicits vendors—sometimes, asking them to respond to a Request for Proposal, or RFP.
  • Purchasing evaluates vendors on the basis of their responses and generates a short list of possible vendors.
  • IT often brings vendors’ systems into the organization’s test labs for performance and technical trials.
  • The organization selects a vendor.
  • IT undertakes the deployment of the new application.

This decision-making process typically neglects to evaluate the goodness of fit between vendors’ solutions and enterprise users’ processes, wants, and needs. Organizations could avoid many a rollout disaster simply by testing the usability of vendors’ solutions with employees during a trial phase.

So What’s the Solution?

User-centered design methods and usability testing can aid both application producers and application purchasers.

  • Application vendors can employ user-centered design methods to ensure they produce solutions that meet enterprise users’ specific wants and needs and, thus, give their applications a competitive advantage in the marketplace.
  • Enterprise customers can test the usability of vendors’ applications to ensure the IT investments they make deliver fully on their value propositions.

In my next column, I’ll describe how organizations can better assess and establish the usability of the enterprise applications they’re considering and, armed with this information, push technology vendors to develop more usable enterprise products. 

Founder and Principal Consultant at ShermanUX

Assistant Professor and Coordinator for the Masters of Science in User Experience Design Program at Kent State University

Cleveland, Ohio, USA

ShermanUX provides a range of services, including research, design, evaluation, UX strategy, training, and rapid contextual innovation. Paul has worked in the field of usability and user-centered design for the past 13 years. He was most recently Senior Director of User-Centered Design at Sage Software in Atlanta, Georgia, where he led efforts to redesign the user interface and improve the overall customer experience of Peachtree Accounting and several other business management applications. While at Sage, Paul designed and implemented a customer-centric contextual innovation program that sought to identify new product and service opportunities by observing small businesses in the wild. Paul also led his team’s effort to modernize and bring consistency to Sage North America product user interfaces on both the desktop and the Web. In the 1990s, Paul was a Member of Technical Staff at Lucent Technologies in New Jersey, where he led the development of cross-product user interface standards for telecommunications management applications. As a consultant, Paul has conducted usability testing and user interface design for banking, accounting, and tax preparation applications, Web applications for financial planning and portfolio management, and ecommerce Web sites. In 1997, Paul received his PhD from the University of Texas at Austin. His research focused on how pilots’ use of computers and automated systems on the flight deck affects their individual and team performance. Paul is Past President of the Usability Professionals’ Association, was the founding President of the UPA Dallas/Fort Worth chapter, and currently serves on the UPA Board of Directors and Executive Committee. Paul was Editor and contributed several chapters for the book Usability Success Stories: How Organizations Improve by Making Easier-to-Use Software and Web Sites, which Gower published in October 2006.
He has presented at conferences in North America, Asia, Europe, and South America.
