Date: Wed, 10 Mar 2004 23:22:07 -0800
From: Norm Matloff
To: Norm Matloff
Subject: false claims of quality of offshored development

To: H-1B/L-1/offshoring e-newsletter

I've been saving up some material which will thoroughly address the issue of the quality of software development which is offshored to India. The Indian companies claim that their work is not only cheaper than that performed in the U.S., but actually of higher quality. Those claims of higher quality made by the Indian firms are highly misleading, as I've explained briefly in the past, but I've been planning a more detailed posting on the matter. The issue came up on last night's Lou Dobbs Show on CNN, and it seemed to catch Dobbs by surprise. (Transcript of Dobbs at http://www.cnn.com/TRANSCRIPTS/0403/09/ldt.00.html.)

Here first is a summary of why the quality claims are bogus:

* The Indian firms' claims of high quality are based on the fact that many of them have attained a high rating on a measure called the Capability Maturity Model (CMM). Ref.: Web page of CMM's developer, the Software Engineering Institute (SEI), http://www.sei.cmu.edu/cmm/cmm.sum.html

* The rating is on a scale from 1 to 5, and most firms which have attained Level 5 are in India. Ref.: "Why 'India Inside' Spells Quality," Dataquest India, October 27, 2003, http://www.dqindia.com/content/advantage/103102703.asp

* CMM is NOT a measure of technical quality. It is merely a measure of PROCESS, i.e. a measure of how a software project is managed, e.g. how often work is peer-reviewed. As even an official at SEI, CMM's originator and main locus, says, "You can be a Level 5 organization that produces software that might be garbage." In fact, a study found that Level 5 firms actually had MORE code defects than the other firms. The case study enclosed below also shows this kind of problem. Refs.: "Bursting the CMM Hype," Christopher Koch, CIO Magazine, March 1, 2004, http://www.cio.com/archive/030104/cmm.html; "How Offshore Outsourcing Failed Us," Network Computing, October 16, 2003 (enclosed below)

* Apart from firms which do business with the Dept. of Defense, most American firms have not shown much interest in seeking CMM certification. So the fact that the Level 5 firms are mostly Indian is irrelevant.

* The first organization to attain Level 5 was actually NASA--which, sadly, has a tragic track record of quality control, as we all know. Ref.: "Why 'India Inside' Spells Quality," Dataquest India, October 27, 2003, http://www.dqindia.com/content/advantage/103102703.asp

* The Indian programming workforce apparently consists overwhelmingly of young, relatively inexperienced workers. Tata Consultancy Services says 50% of its programmers are under age 25, and 88% are under 30. This certainly does not indicate quality. Ref.: TCS recruiting page, http://www.usa-tcs.com/careers/whyjoin.html

* Indeed, the business model of the Indian firms is to concentrate on using what they call "less-skilled" workers. They're cheaper, of course, and the Indian firms feel they can get around that obvious handicap by breaking projects into very small parts. This view of programmers as interchangeable commodities is totally wrong. Study after study has shown a huge variation in productivity among programmers, with best-to-worst ratios of 10 to 1 in one study and 20 to 1 in another. As the manager writes in the offshoring case study enclosed below, "This development inexperience led to a series of rookie blunders."
Refs.: "India Feels Backlash on Jobs," Chicago Tribune, February 8, 2004, quote of Arvind Thakur; Tom DeMarco and Timothy Lister, Peopleware: Productive Projects and Teams, Dorset House Publishing Co., 1987; "Comparing Java vs. C/C++ Efficiency Differences to Interpersonal Differences," Communications of the ACM, 42(10), 1999; "How Offshore Outsourcing Failed Us," Network Computing, October 16, 2003 (enclosed below)

* The bottom line is that the Indian firms' claim to quality is merely a marketing ploy, based on a measure which really has very little to do with software quality.

I might add that CMM is quite a marketing item for SEI itself. As I've mentioned, universities (especially CMU, SEI's host) have an unquenchable thirst for research funding, and CMM has been a cash cow in terms of the money CMU gets from the Defense Dept.

Now, for the enclosure. It is a little long but absolutely required reading for anyone interested in the issue of the quality of offshoring. Keep in mind that this was a top offshoring firm, and pay particular attention to the point about the youthful, inexperienced nature of the offshore team being one of the major causes of failure; then note the Indian business model described above.

Norm

http://www.nwc.com/shared/article/printFullArticle.jhtml?articleID=15201900

How Offshore Outsourcing Failed Us
Oct 16, 2003
By Wesley Bertch

What are my options if my highly productive, 15-person software team generates only one-third the output our customers demand? I was certain that augmenting our team with offshore development was the right answer. It wasn't, at least for a small project we recently outsourced to an Indian firm. Here's our story.

I lead the software development group at Life Time Fitness, a high-growth, nationally expanding health and fitness chain. We're more than just health clubs--we have spas and salons; cafés; member services, such as personal training and swimming lessons; a division that produces a nationally distributed magazine; a division that formulates and distributes energy bars, powders and other consumer goods to national retailers; and a corporate wellness unit that sells products and services to thousands of companies. In addition to supplying these departments with systems, we provide software services to Life Time's internal real-estate group.

Keeping pace with the growing software needs of so many diverse business units is a huge challenge. From almost every angle, offshore development appeared to be the ideal solution. We needed to augment our internal team cost-effectively, without sacrificing quality. Judging from what analyst firms and the media were saying about Tier 1 offshore developers, with their CMM (Capability Maturity Model) qualifications, Six Sigma quality experience and "proven" low-cost development model, how could we go wrong?

Personally, I was excited about the promise of offshore outsourcing. If it worked, we'd be heroes to the business. Philosophically, I view free trade as highly beneficial to its participants. We met the key criteria for offshoring: centralized IT, process maturity and years of experience working with Indians both in the United States and offshore. We had executive sponsorship. We had IT commitment. We even had the perfect project to test the waters: a small, low-risk Web application for our real-estate division. The application's purpose is to provide screens for entering new location information.
The application isn't complex: The back-end database is Microsoft SQL Server, server-side Java components implement the business rules, and JavaServer Pages (JSPs) provide the front end. We use BEA Systems' WebLogic as the application/Web server and the Concurrent Versions System (CVS) for source-code control.

The Tier 1 Indian vendor we invited to implement the project was already successfully supporting our Siebel 7 sales-force-automation implementation, so both sides thought this project would be a slam dunk. The vendor agreed to take on the project for a fixed fee of $20,000, with a nine-week time line. To avoid finger-pointing, everyone agreed that the vendor should perform all phases of the project, from gathering business requirements through QA (quality assurance). Life Time's internal staff would monitor and participate in every way necessary for the project to succeed. If the project proved successful (defined as anything shy of disaster), we promised a small fortune in project work.

Here's how the project team was organized:

o An on-site liaison, supplied by the vendor, acted as a bridge between the Life Time team and the offshore project manager. This person was technically senior and had strong communication skills.

o An on-site business analyst, supplied by the vendor, completed the application's functional requirements, then returned to India to act as offshore project manager.

o An offshore project manager tracked tasks and schedules for three offshore team members: a Java developer, a JSP developer and a tester.

o An offshore technical manager supervised our project, as well as three others.

o A Life Time software manager coordinated his team with the on-site liaison to provide code reviews, database design and general advice.

o A manager in Life Time's real-estate division served as the business champion.

The project got off to a good start. The vendor's business analyst met frequently with the real-estate division's users and, with the on-site liaison, worked furiously to document all the functional and user-interface requirements within four weeks. By week three, however, our internal lead business analyst threw up a red flag. His review of the functional specs exposed problems in the requirements, particularly in the interface specs. For example, the UI as laid out forced users to re-enter data they had previously entered, and the screen flow was illogical and confusing. The on-site liaison countered that though the UI had problems, it ostensibly complied with the documented business requirements. To ensure that we would get what we needed, we extended the project time line, agreed to a cost increase of $7,000 to allow for additional analysis and better interface design, and dedicated internal Life Time analysis and UI people to guide the final version of the documentation.

After the vendor's business analyst wrapped up the documentation, he returned to India and, to exploit his knowledge of the project requirements, was assigned as the offshore project manager. By this point, the offshore technical manager had lined up the offshore project team, so the coding design began in earnest.

Once offshore, however, the project started down the slippery slope. Upon receiving the offshore company's database design, Life Time's lead data architect declared it the worst he'd ever seen. There were so many critical database flaws--more than 100--that our architects were unable to log all of the defects within the scheduled one-week review period.
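(An aside for readers unfamiliar with this kind of stack: below is a minimal sketch of the sort of server-side business-rule component the article describes. The class, fields and rules are my own illustration, not Life Time's actual code; in this architecture, a JSP entry screen would call such a component before saving and redisplay any errors it returns.)

    // Illustrative sketch only -- hypothetical names, not Life Time's code.
    // A server-side business-rule component for "new location" entry screens.
    import java.util.ArrayList;
    import java.util.Date;
    import java.util.List;

    public class NewLocationValidator {

        // A new-location record, carried as typed fields.
        public static class Location {
            public String name;
            public int squareFeet;
            public Date openDate;
        }

        // The JSP front end calls this before saving, and redisplays
        // the entry screen with any returned messages.
        public static List<String> validate(Location loc) {
            List<String> errors = new ArrayList<String>();
            if (loc.name == null || loc.name.trim().length() == 0)
                errors.add("Location name is required.");
            if (loc.squareFeet <= 0)
                errors.add("Square footage must be a positive number.");
            if (loc.openDate == null)
                errors.add("An opening date is required.");
            return errors;
        }
    }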
The database was not the only problem area. Determined to dazzle us with their software prowess, the offshore developers insisted on completing the entire code design before allowing us to review it (we had requested an early design sample to head off any problems). Naively confident in their original code design, the offshore team had launched immediately into writing Java code before checking the code design into CVS for our review. Tragically, our review determined that the offshore team's design patterns weren't in accordance with the standards Life Time follows, invalidating all the offshore team's Java code.

In two short weeks, the offshore team had gone from proud and eager to embarrassed and dejected. Once the stark reality of our logged defects sank in, the team knew there was no way they could straighten out the code design and then code and test the application within the set time frame. Frustration levels were high on the offshore team, and the on-site liaison became increasingly defensive. The internal Life Time team was disappointed and annoyed as well, but we accepted the fact that mistakes were bound to happen on our first end-to-end offshore project. We valued a quality final product much more than time-line precision. Nevertheless, as we learned only later, the offshore team began working extra-long hours to avoid asking for a time extension.

To the Rescue?

Given all the problems up to that point, we sensed the project was at risk, so our internal software development manager, QA manager and I flew to India to meet with the offshore people. Graciously, the vendor also scheduled us to meet with the top brass. The visit was highly informational and warm feelings prevailed, but by this time the application was in the testing phase and nearly "complete."

Not long after our trip, the offshore team delivered the tested, "finished" application. According to the on-site liaison, all we now needed to do was perform a ceremonial user-acceptance review, sign off on the project's successful delivery and celebrate.

Not so fast. We instead decided to perform a little QA of our own. In less than a day, one Life Time tester and one developer found more than 35 defects, many of them showstoppers. Screens randomly went blank, "saved" data was lost, functionality was missing, the interface wasn't consistent and data validation didn't work. The offshore team categorized the hundreds of newly found defects as "in scope" (these they fixed) or "out of scope" (these were deemed Life Time's problem). Even after the vendor fixed the "in scope" defects, the application was unusable. And fixing it meant it would be late and even more over budget.

At this point, we decided the best course was to take delivery of the application and overhaul the code ourselves. We couldn't bear trying to explain to the offshore vendor all the code fixes that were needed and then haggle over who would pay for them.

Post Mortem

You might assume that, given our dismal experience with offshore development, we have written off this model completely. Not so. Offshore may still hold promise as a way to cost-effectively extend our current team. What would we do differently? Instead of relying on the vendor to institute the offshore processes and team, we would set that up ourselves. Ideally, we would have a developer (probably an Indian) from our internal team relocate to India to build and manage a competent offshore team, perhaps within leased space at an existing development facility.
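(An aside on the one-day QA pass described above: none of those checks required anything exotic. Here is the flavor of such a check, in JUnit 4 style, written against the hypothetical validator sketched earlier. It is illustrative only; the real defects were found by hand against the running application.)

    // Illustrative only: the kind of basic acceptance check that exposes
    // "data validation didn't work" defects (hypothetical names).
    import java.util.Date;
    import org.junit.Test;
    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    public class NewLocationValidatorTest {

        // An empty record must be rejected, not silently "saved."
        @Test
        public void emptyRecordIsRejected() {
            NewLocationValidator.Location loc = new NewLocationValidator.Location();
            assertFalse(NewLocationValidator.validate(loc).isEmpty());
        }

        // A complete record must pass, so valid data isn't lost to bogus rules.
        @Test
        public void completeRecordIsAccepted() {
            NewLocationValidator.Location loc = new NewLocationValidator.Location();
            loc.name = "Example Club";
            loc.squareFeet = 45000;
            loc.openDate = new Date();
            assertTrue(NewLocationValidator.validate(loc).isEmpty());
        }
    }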
Another lesson we learned the hard way is that fixed-bid offshore projects tend to misalign the vendor's interests with ours by placing undue emphasis on cost and time line while sacrificing quality and customer focus. Because we care about what the code looks like (this vendor's on-site liaison and account executive admitted to me that they do much better with fixed-bid projects when the customer doesn't inspect their code), we would have been better off using a time-and-materials arrangement, which would have given us more control over every part of the process.

Finally, next time I would pay more attention to my employees' concerns. Even before the project started, several employees expressed doubts about the quality of offshore code and predicted they would end up redoing it themselves. Turns out they were right.

Wesley Bertch is director of software systems at Life Time Fitness. Send your comments on this article to him at wbertch@lifetimefitness.com.

Why did our offshore software project fail?

Root Cause #1 -- Inexperienced Labor

Indian software labor is highly educated and dedicated, to be sure, but we found that the workers lack the technical and people skills that come only with experience. Our vendor's employees averaged only two years' experience. Because so much was riding on this trial project, the vendor assigned us a "senior" team: The Java and JSP developers each had four years of experience, and the tester had two. By comparison, any one of our internal Life Time software developers has more experience than the entire offshore team combined.

The on-site liaison explained that one reason offshore developers and testers are so junior is that as soon as they gain experience, they're promoted to project management, account management and other nontechnical roles. Apparently, there isn't much of a promotion track for developers.

This development inexperience led to a series of rookie blunders: formatting every database field within the back-end components as "string" (only to reformat them back at the interface); disallowing punctuation in a comment field because the documentation called for "alphanumeric"; and not asking for guidance when faced with difficult coding decisions.
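(An aside making the first two blunders concrete -- a hypothetical before-and-after of my own, not the vendor's code. Storing every field as a string throws away the type checking the database and the compiler would otherwise provide, and forces every screen to re-parse the data.)

    // Illustrative contrast only (hypothetical names, not the vendor's code).

    // The reported blunder: every database field handled as a string,
    // re-parsed and re-formatted at each screen.
    class LocationRecordAllStrings {
        String squareFeet;  // "45000" -- every caller must parse it
        String openDate;    // "2004-03-10" -- every caller must parse it
    }

    // The conventional alternative: let the types carry the meaning, so
    // bad values are rejected once, at data entry.
    class LocationRecord {
        int squareFeet;
        java.util.Date openDate;
    }

    // The comment-field blunder, in the same spirit: a rule such as
    //     comment.matches("[A-Za-z0-9 ]*")
    // rejects ordinary punctuation. The spec's "alphanumeric" described
    // typical content, not a character whitelist.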
Root Cause #2 -- Overemphasis On Process

I never thought I would say that an offshore vendor is too process-dependent. I had always listed this vendor's quality and process focus as a strength--and it can be. But process by itself can't assure project success, and documentation can't substitute for domain expertise. Like a contract manufacturing plant, the offshore model is designed to funnel any and all projects through a labyrinth of processes and internal controls so that novice employees who don't know anything about a customer's business can achieve acceptable results. The problem is that you can't factory-produce this kind of software. Developing software is more like team surgery, where competency, experience, group chemistry and knowledge of the patient go a lot further than a set of processes for how the surgery should be performed.

Root Cause #3 -- Project Performance Metrics Masking Problems

Our offshore vendor uses a comprehensive project-tracking system, and its employees are reviewed and rewarded based on customer-satisfaction surveys. You would assume, therefore, that this project's problems and our dissatisfaction were evident to the vendor's management, right? Wrong.

During our visit to the vendor's development center, the offshore project manager showed us data on our project. I was astonished to find that the data indicated things were perfectly on target, and that the number of hours worked during each phase was precisely in line with the vendor's original estimate. The coding snafus and overtime hours weren't evident, nor was there any inkling that the project was at risk.

Likewise, the vendor's surveys appeared to be "managed." For instance, after the vendor completed our Siebel implementation, one of the vendor's employees requested that I fill out the post-project customer survey with all 5s, the highest score possible. These surveys are mandatory, but I'm still waiting for the survey to arrive for this latest offshore project. So, on a project that went relatively well, we were hounded to complete the survey, while on a project that didn't fare well, no survey appears to be forthcoming.

o Savings Not So Big

Although wages are generally 80 percent lower in India than in the United States, total labor cost savings are just 10 percent to 15 percent for most U.S. companies that outsource to India, according to a report from Deloitte Consulting. What chews up the potential savings? Higher costs for travel, communications, equipment and managerial oversight, along with lower productivity, cultural differences and incompatible systems. And, as U.S. workers make productivity gains, the savings gap could widen further.

o Security Angle

Despite the events of 9/11, U.S. corporate IT security worries still mainly involve worms or cyber-attackers. Doing business overseas opens up a whole new set of scenarios. For example, in August two bombings in Bombay killed 44 people and wounded more than 150, many of them critically. This followed a series of blasts that have killed 66 people since December 2002. These attacks, blamed on Islamic militants, struck at the economic base of India: Bombay contributes more than 30 percent of India's taxes and revenues. See CBSNews.com.
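(A closing aside on the "Savings Not So Big" item above: here is a back-of-the-envelope sketch of how an 80 percent wage discount can shrink to a 10-to-15-percent net saving. Every figure below is an assumption chosen for illustration, not Deloitte's data.)

    // Back-of-the-envelope illustration; all figures are assumptions.
    public class OffshoreSavingsSketch {
        public static void main(String[] args) {
            double usTotal  = 100.0;              // U.S. project cost, normalized
            double usWages  = 50.0;               // assume half of cost is direct wages
            double overhead = usTotal - usWages;  // management, infrastructure, etc.

            double offshoreWages      = usWages * 0.20; // wages 80 percent lower
            double productivityFactor = 2.0;            // assumed: twice the hours (rework, handoffs)
            double extraCosts         = 15.0;           // assumed: travel, telecom, equipment, oversight

            double offshoreTotal = overhead + offshoreWages * productivityFactor + extraCosts;
            double savingPct = 100.0 * (usTotal - offshoreTotal) / usTotal;
            // Prints: offshore cost 85, saving 15%
            System.out.printf("offshore cost %.0f, saving %.0f%%%n", offshoreTotal, savingPct);
        }
    }

Under these assumed figures, the headline 80 percent wage gap yields only a 15 percent bottom-line saving, because wages are just part of total cost and the offshore hours run higher.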