The Pre-History of Software as a Service

Now everyone uses Software as a Service. But we tried this business model before, over a decade ago, and it failed miserably. What changed in cloud computing to make it work today?

Software as a Service (SaaS) has become a major end-user platform. Whether the software at hand is Customer-Relationship Management (CRM), office productivity software, or software development tools (such as Smart Bear’s own QAComplete), SaaS is a multi-billion dollar business that continues to grow bigger. Indeed, SaaS has become so profitable that it’s one of the factors crippling the PC business.

It wasn’t always that way.

In the early days of the dot com boom, the computer industry tried to implement Software as a Service. The term back then was Application Service Provider (ASP), but the premise was similar: users paid for a subscription to an application accessed via a website.

Companies founded on the ASP model mostly failed. Sure, some evolved and survived to become SaaS-driven businesses, such as the Great Plains accounting system (which became Microsoft Dynamics) and Salesforce.com. Others, such as the would-be desktop-as-a-service company BmyPC, the ASP services aggregator Agiliti, and the early mapping company ObjectFX, survived as well, but it was no thanks to the ASP model.

So why is SaaS software development hotter than hot today, when yesterday it was as cold as pancakes left out overnight in a Vermont winter?

Let’s set the scene: There have been attempts to stop PCs from taking over IT for as long as there have been PCs. The diskless workstation, the thin client, and some client-server variations all tried and failed in their time. Each of these has echoes in the success of SaaS. For example, one of the promised benefits of thin clients was a reduction in support costs, since end users got an up-to-date image of the operating system and corporate-approved applications. But IT wanted control; users wanted flexibility. (Sound like the current concern over “Bring Your Own Device”?)

I covered ASPs closely at one time. Their promises were the same as SaaS’s: low cost, easy deployment, painless upgrades. So was the deployment model: remote, full-featured applications delivered over the Internet.

There was only one major difference: properly designed SaaS applications make good on those promises. ASPs couldn’t.

Some of the reasons were purely technical. For example, in the ASP model the vendors often hosted multiple instances of third-party, client-server applications. In SaaS, providers develop their own applications and operate a multi-tenant infrastructure model. That is to say, while users access the same code base, their data and customized interfaces are kept separate from one another.
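To make that multi-tenant idea concrete, here is a minimal sketch in Python (purely illustrative; the tenant names and the in-memory store are my own invention, not any particular vendor’s design) of a single shared code base that keeps each customer’s data and customizations walled off from everyone else’s:

    from dataclasses import dataclass, field

    @dataclass
    class Tenant:
        """One customer sharing the common application code."""
        name: str
        theme: str = "default"                       # per-tenant customization
        records: list = field(default_factory=list)  # per-tenant data

    class MultiTenantApp:
        """A single application instance serving many isolated tenants."""

        def __init__(self):
            self._tenants = {}

        def add_tenant(self, tenant_id, name, theme="default"):
            self._tenants[tenant_id] = Tenant(name=name, theme=theme)

        def save_record(self, tenant_id, record):
            # Every operation is scoped by tenant_id, so one customer's data
            # never shows up in another customer's view, even though both
            # run exactly the same code.
            self._tenants[tenant_id].records.append(record)

        def list_records(self, tenant_id):
            return list(self._tenants[tenant_id].records)

    app = MultiTenantApp()
    app.add_tenant("acme", "Acme Corp", theme="dark")
    app.add_tenant("globex", "Globex", theme="light")
    app.save_record("acme", {"contact": "Jane Doe"})
    app.save_record("globex", {"contact": "John Smith"})

    print(app.list_records("acme"))    # [{'contact': 'Jane Doe'}]
    print(app.list_records("globex"))  # [{'contact': 'John Smith'}]

In a real SaaS product the dictionary would be a database keyed by tenant, but the principle is the same: one implementation, many logically separated customers.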

So, what’s the difference? SaaS takes full advantage of virtualization and cloud-based scalability. Thanks to virtual machines, it’s easy to spin up a new instance for each customer. While virtualization dates back to 1960s mainframes, in the bad old ASP days vendors often had to set up physical servers to meet user demand. For example, say a customer needed its London staff to have their own logins, requiring more instances of an application to be spawned. The request might come in at 10:00am their time… which happened to be 6:00pm in the ASP’s time zone. The ASP would have to manually set up the server and application for every new customer or customer request. That went about as well as you’d expect.
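By way of contrast, here is a toy sketch (again Python, and again hypothetical; provision_instance is an illustrative stand-in, not a real cloud or virtualization API) of the kind of on-demand, automated provisioning that virtualization makes routine today, versus the manual server builds of the ASP era:

    import time
    from datetime import datetime, timezone

    # Illustrative catalog of running per-customer instances. In a real SaaS
    # platform this would sit behind a virtualization or container API.
    _instances = {}

    def provision_instance(customer, region):
        """Spin up a logical application instance for a customer on demand.

        In the ASP era this step often meant racking and configuring a
        physical server by hand, hours or days after the request arrived.
        """
        instance_id = f"{customer}-{region}-{len(_instances) + 1}"
        _instances[instance_id] = {
            "customer": customer,
            "region": region,
            "started": datetime.now(timezone.utc).isoformat(),
        }
        return instance_id

    # A request from London staff at 10:00 their time is handled immediately,
    # no matter what the clock says where the provider lives.
    start = time.time()
    instance = provision_instance("acme", "london")
    print(f"Provisioned {instance} in {time.time() - start:.4f} seconds")

The point isn’t the code itself; it’s that the provisioning step can be a function call instead of a work order.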

It wasn’t just an issue of manual setup. As Dana Gardner, senior analyst at Interarbor Solutions, pointed out in 2007, ASPs used custom development for their apps rather than spawned images. “The economy of scale is what killed a lot of hosting providers back in the ASP days and ran them out of business,” Gardner wrote. “They were just doing an implementation for every customer, as opposed to a single implementation that can now be used by multiple customers – personalized and managed. The people who use the application run and use it differently, but the implementation is pretty much the same for all customers.”

Even when ASPs’ architecture worked, most of the hosted applications were painfully slow. In May 2000, one long-gone ASP, Mi8, tried to provide us with e-mail and groupware services based on Lotus Notes. I wrote that running it was “as slow as running a marathon on crutches.” (And, yes, we used Notes internally at Sm@rt Reseller, so we knew what we were getting into; this was far worse than any regular Notes implementation.)

Besides, if you think people don’t trust the cloud now, you should have been with us in the ’90s, when people didn’t trust their Internet connections. And they had reason for the cynicism. Company networks had a hard time staying connected on 128Kbps ISDN lines. Even those who weren’t stuck on dial-up weren’t willing to bet their business on something that wasn’t “right here.” This was as much an issue of perception, which changed over time, as it was of the actual Internet connectivity numbers.

Even had the ASPs’ applications been speedy, the Internet wasn’t really fast enough to support them. In 2000, the Federal Communications Commission (FCC) reported that there were 2.8 million “high-speed” Internet lines. But they defined “high-speed” as faster than 200 kilobits per second. Not megabits, kilobits. Today, the average U.S. Internet speed is 21 megabits per second.

Internet access was far from ubiquitous, either, which made the acceptance of hosted applications a dubious proposition. The FCC also reported in 2000 that “58 million consumers accessed the Internet through a dial-up connection in the second quarter of this year. By contrast, 2.8 million accessed the Net through cable modems, 1.1 million through Internet TV, and 286,000 through DSL. Thus, cable modems and DSL combined accounted for only 5 percent of consumer access to the Internet in the second quarter.”

Nowadays, we take Wi-Fi for granted; we all grumble if we cannot get online from a hotel room with access speeds fast enough to watch a movie on Netflix. But in the late ’90s, during the dot com boom, 802.11b had just been invented. In 1999, we at Ziff-Davis’ Sm@rt Reseller magazine did some of the first product reviews of Wi-Fi networking systems, and we used a lot of terms like “shows promise” and “expensive but exciting.” 3G and 4G were the stuff of science fiction. While some of us worked from remote offices or hotel rooms (always carrying a spare Ethernet cable), the mobile coffee-shop lifestyle was years away.

Even when they did work, most of the ASP applications were version 1.0-quality software in a universe where the desktop versions were at version 3.0. There was no good reason for a business to make a monthly financial commitment for an application that did less than the established software your business had already paid for. Today, a good SaaS application does everything the locally installed software can, works anywhere our ever-mobile workforce happens to be, spares the IT department from having to provision and update every instance, and it looks good doing it.

But the technical barriers were not the real source of the ASPs’ failure. That was always a business problem.

On the business side, ASPs’ pricing models were out of whack. Nobody knew how much to charge, or what the traffic would bear. Turns out, it wouldn’t bear a lot. I remember early conversations about a “freemium” model in which everyone was sure that too few people would upgrade to make the company profitable. Yeah, that didn’t work out so well.

All this led to a very fundamental business problem: Not enough customers! As Rick Chapman, publisher of SoftLetter, an online publication devoted to SaaS, wrote, “The difference between SaaS and ASPs is that SaaS companies make money and ASPs didn’t.” As he explained, “In 2000-2001 I interviewed perhaps three dozen people from different ASP firms and asked why their companies were dead or dying. Invariably, the answer was lack of revenue from lack of enough people signing up for the ASP offering.”

That certainly was the case. Between the technical challenges and techies’ long-held assumption that businesspeople are ready for new ideas, the adoption ramp often was steeper than the ASPs could handle.

For example, I remember one ASP (it was owned by a friend of a friend) that set up a site for people to design custom packaging and have the boxes shipped to them. (Think: You need 50 boxes in which to ship asparagus, a vegetable that is wider at the bottom than at its fragile top.) Even after creatively addressing technical issues (e.g., launching a custom design app remotely) and building an impressive site UI in the days when “C-frames” were the state of the art in web design, the company didn’t take into account that packaging professionals are technology laggards. They’ve barely given up fax machines even today.

Chapman continued, “Too many SaaS companies launched into horizontal markets occupied by big bruiser companies ready, willing, and able to defend their turf. SaaS succeeded by moving into new markets and opportunities inherent in the on demand model.” (The one significant exception was Salesforce, he wrote.)

The ASPs were also prone to the other woes of dot com businesses. Some were over-funded and wasted their money. And far too many of the good ones, with vertical apps in areas without competition, were drained in the dot com whirlpool anyway.

Could SaaS follow ASP into the grave? I doubt it.

While many factors killed the ASP, none of the major ones are issues for SaaS. Oh, SaaS has its challenges: security, assuring high-quality service levels, and supporting multiple platforms (both desktop and mobile), to name but a few. But the technology has reached the point where we can now profitably deploy and run SaaS. ASPs, alas, were ahead of their time. Still, their failures should be kept in mind as we push the SaaS model further, lest we too try to deploy applications that are ahead of our technology, or of users’ trust in that technology.

The Pre-History of Software as a Service was first published in Smart Bear.
