Practical Technology

for practical people.

February 13, 2014
by sjvn01

DDOS in 2014: The New Distributed Denial of Service Attacks and How to Fight Them

In the early days of the Web, no one had heard of a Distributed Denial of Service (DDoS) attack; malware came to us through other vectors. Then in early 2000, a series of DDoS attacks knocked such popular websites as Yahoo, CNN, and Amazon off the air. Today, DDoS attacks are thicker than fleas on a hound-dog, and more complex than ever. They can hit your sites and your customers’ sites from more angles, and they can knock out a website for minutes or even days.

Continue Reading →

February 6, 2014
by sjvn01

The Standards Wars and the Sausage Factory

Standards-making is like sausage making. You need it, but it’s ugly. Yet the standards process is the necessary evil behind every technology we rely on.

On August 6, 1890, William Kemmler became a victim of an early technology standards war.

In the 1880s, the technology standards war of the day was between Thomas Edison, the primary supporter of Direct Current (DC) for electrical transmission, and his arch-rival George Westinghouse, who supported Alternating Current (AC). In a last-ditch effort to show why DC should become the standard, Edison killed animals with AC-powered devices. He then persuaded the State of New York that an AC-powered device, the electric chair, would be a more humane way to kill condemned prisoners. Kemmler was the first to die in it.

It was all in vain. Alternating Current was the more efficient technology, and today our homes and offices are powered by it.

“The wonderful thing about standards is that there are so many of them to choose from.” –Admiral Grace Hopper.

Today no one dies from standards wars, not that you’d know it from Internet comments. But years, millions of dollars, and endless arguments are spent fighting over standards. The reasons for our fights aren’t any different from those that drove Edison and Westinghouse: It’s all about who benefits – and profits – from a standard.

I know, I know. Some of you are convinced that standards are determined by which technology is “best.” You are ready to trot out famed battles such as VHS vs. Betamax videotape, or WiMax vs. Long Term Evolution (LTE) for 4G. You will bring up more current wars, such as Google’s SPDY vs. Microsoft’s HTTP Speed+Mobility (over how to speed up HTTP data transmission) and the hot-blooded fist fights over what will replace the X window server in the Unix and Linux graphics stacks: Red Hat and friends’ Wayland or Canonical/Ubuntu’s Mir.

Sometimes the standard is driven by technical excellence. Usually, it’s not.

First, as the famous xkcd cartoon indicates, there isn’t an ideal “best” standard that covers everyone’s use case. To borrow Eric S. Raymond’s open-source truism, “Every good work of software starts by scratching a developer’s personal itch.” Standards are the same. What scratches your developers’ itch does not (necessarily) scratch other developers’ itches.

So since everyone wants things their own way, we – as businesspeople and as technologists – attempt to find compromises through organizations such as the IEEE, IEC, ISO, and IETF. In theory, these Standards Development Organizations (SDOs), according to the IEEE Standards Association, “offer time-tested platforms, rules, governance, methodologies, and even facilitation services that objectively address the standards development life-cycle, and help facilitate the development, distribution, and maintenance of standards.”

Crash!

Excuse me. I had to pick myself up from the floor from laughing so hard. You see, as a journalist I’ve covered how standards are actually made. It’s a lot like sausage making: an ugly, painful process that you hope produces a product that everyone finds tasty.

Take, for example, the long hard road of the now-universal IEEE 802.11n Wi-Fi standard. There was nothing new about the multiple-input, multiple-output (MIMO) and channel-bonding techniques when companies started moving from 802.11g to 802.11n in 2003. Yet it wasn’t until 2009 that the standard became official.

What took so long? At the start, four major groups fought to decide 802.11n’s fate. Two groups’ proposals – one from Mitsubishi and Motorola, and another from Qualcomm – quickly lost support. The other Wi-Fi networking companies quickly united into two competing groups: Task Group ‘n’ Synchronization (TGn Sync), with Intel, Atheros, and Nortel; and World-Wide Spectrum Efficiency (WWiSE), led by Airgo Networks. Airgo also had the advantage of being first to deliver MIMO-capable chipsets.

This kind of consolidation between rival companies or groups in a standards war is common. Few technology companies can afford to set their own technology standards and expect to survive in the marketplace. (It can happen that way. For example, for decades Microsoft could set desktop standards by dint of “We set the standard since we own this market segment.”) Apple manages to get its own way of doing things – from Advanced Audio Coding (AAC) for music formats to Apple Thunderbolt for high-speed I/O – and get away with it because within the walled garden of the Apple development ecosystem there are no other competitors. Companies such as these are the exception to the rule, however.

Yet the first-mover advantage often isn’t that important in the long run. AC came after DC, VHS came after Betamax, and RCA’s color-TV technology came long after CBS’s now-forgotten color TV tech. That proved to be the case with Airgo’s Wi-Fi experience as well.

For two years, TGn Sync and WWiSE fought it out in standards committee meetings, with neither gaining the required 75% super-majority. In late 2005, it looked as though the two had finally come to an agreement in the Enhanced Wireless Consortium. But while its allies might have been ready to throw in the towel, Airgo wasn’t.

Airgo fought on with such tactics as adding more than 12,000 comments (count them, twelve thousand) into the “final” 2005 Wi-Fi standard draft. As Bill McFarland, Atheros’ CTO and one of the draft’s editors and writers, said at the time, “There were a lot of duplicate comments, and three people filed comments for each and every blank line in the document. The physical process of dealing with so many comments is tedious and time-consuming.”

What finally brought this stage of the fight to an end was Qualcomm buying Airgo in December 2006. With Airgo’s management out of the picture by 2008, a true unified standard was ready for approval.

It would be smooth sailing from here, right? Wrong.

Before the IEEE approves a given standard, everyone with a patent that touches that standard must sign a Letter of Agreement (LoA). The LoA states that the patent holder won’t sue anyone using its patent in a standard-compatible device. All it takes is one holdout: the Commonwealth Scientific and Industrial Research Organisation (CSIRO), an Australian government research group that held a patent concerning wireless LAN technology, refused to sign the 802.11n LoA.

Cue a patent war. Apple, Dell, Microsoft, and 11 other companies tried to get CSIRO’s patents overturned. They failed. In April 2009, the tech giants and 802.11n companies surrendered and signed a patent agreement.

Finally, on September 11, 2009, 802.11n was approved. It had taken “only” six years – for a tech standard that everyone agreed was of vital importance.

The Standard Sausage Factory

Why did it take so long? Because the stakes are so high. As Carl Shapiro and Hal Varian wrote in The Art of Standards Wars, “The outcome of a standards war can determine the very survival of the companies involved.”

That’s why, ultimately, technology wars are not about technology. They are about business. Yes, you want a great technology that delivers the goods. But even if your tech is the best, if you can’t turn it into a standard, your innovation is unlikely to make it to market or succeed once it gets there.

Because the stakes are so high, the players can, and do, fight over every tiny issue. Each side seeks an advantage to make sure the resulting software or hardware works best with its “version” of the proposed standard. (For example, Microsoft wanted its finger in the XML pie, and used Microsoft Office formats to try to control it. So today we have two popular office document standards: Microsoft’s OpenXML and the Open Document Format (ODF). In the end, Microsoft, albeit very quietly, came to support ODF.) The arguments are conducted in technical details, but money is the real driver.

These standards wars are painful, ugly, and can be incredibly petty from the outside looking in. Each participant wants the biggest possible pie.

These fights can be expensive in both engineering and legal costs, so some companies are moving away from standards wars, helped along by a growing appreciation of the virtue of compromise. That realization isn’t new: Sony and Philips saw in 1982 that fighting over CD formats would do neither company any good.

More recently, we’ve been seeing an interesting blend of open-source development and standards. The Linux Foundation brought together fierce rivals to work on technologies and standards in such consortiums as the AllSeen Alliance for the Internet of Things; OpenBEL for open-source biological research; OpenDaylight, for almost all the Software-Defined Networking (SDN) companies; and the Open Virtualization Alliance and the Xen Project for KVM and Xen virtualization. Perhaps it’s the collaborative nature of open-source projects to which we can credit such successes. Facebook’s Open Compute Project brought open-source methodology to the data center. Apache continues to bring together competitors on projects such as Big Data’s Hadoop and Solr for search.

Open-source software and its business and development methodologies have shown that working together to create common software and standards is more affordable. In short, rather than take a chance on one small, late-to-market, and expensive pie, it’s better to get a share of one bigger, timely, and affordable pie.

Maybe, thanks to open source, the sausage days of standards-making will be behind us. I hope so.

The Standards Wars and the Sausage Factory was first published in Smart Bear.

January 31, 2014
by sjvn01

Told you so! Microsoft backs off on Metro

It looks like Microsoft has finally been hit by the clue stick of awful Windows 8.x sales often enough that it’s learned its lesson. Apparently, in the forthcoming Windows 8.1 update, the user interface (UI) formerly known as Metro will be bypassed, and users will default to starting in the defective, but better than nothing, desktop mode.

I’m going to tell you more about this potentially game-changing move, but first let me get this out: “HA! I told you so!”

Sorry about that. But ever since I started pointing out just how awful Metro was for the desktop, I’ve been buried by nasty emails from Microsoft shills telling me how wonderful Metro really was. Even as Windows 8’s sales sank below Vista’s abysmal adoption numbers, they kept screaming that Metro was great.

Told you so! Microsoft backs off on Metro. More>

January 16, 2014
by sjvn01

Windows 9 in 2015: Desperation isn’t pretty

Seriously, Microsoft? You want to get Windows 9 — Windows freakin’ 9 — out in 2015?

I get that you want to distance yourself from the Windows 8.x train wreck. Who wouldn’t? But by upgrading Windows at a consumer pace, aren’t you taking a big chance that your enterprise customers will turn their backs on you? I mean, companies want desktop operating systems they can rely on for three to five years, not three to five seasons.

Let’s start with some fundamentals. As Paul Thurrott, senior technical analyst for Windows IT Pro and the boss of Paul Thurrott’s SuperSite for Windows, put it, “Windows 8 is tanking harder than Microsoft is comfortable discussing in public.” According to Thurrott, the free upgrade Windows 8.1, which offers major improvements over Windows 8, is installed on fewer than 25 million PCs. “That’s a disaster,” he wrote. Windows 9 (as it’s expected to be designated; Threshold is the code name for now) “will need to strike a better balance between meeting the needs of over a billion traditional PC users” and enticing users to a Windows experience on new types of personal computing devices. “In short, it needs to be everything that Windows 8 is not.”

Windows 9 in 2015: Desperation isn’t pretty. More>

January 13, 2014
by sjvn01

Lessons for IT from Windows 8/Metro

Windows 8: worse than Vista. Most people I talk to say so, the numbers back them up, and now it could be that even Microsoft sees the truth. So why did Windows 8 and the Interface Formerly Known As Metro fail? It’s a good question, especially since the answer could keep you and your own development team from a similar design fiasco.

Rumors are flying that Microsoft is going to do a lot more than just bring back the Start button in the next Windows version. A lot of smart people are saying that it will abandon its Metro interface on laptops and desktops. It took long enough.

The facts are simple. Nobody wanted Windows 8 in 2012. Nobody wanted it in 2013. Even Microsoft should know that no one will want it in 2014.

Lessons for IT from Windows 8/Metro. More>

January 7, 2014
by sjvn01

The Pre-History of Software as a Service

Now everyone uses Software as a Service. But we tried this business model before, over a decade ago, and it failed miserably. What changed in cloud computing to make it work today?

Software as a Service (SaaS) has become a major end-user platform. Whether the software at hand is Customer-Relationship Management (CRM), office productivity software, or software development tools (such as Smart Bear’s own QAComplete), SaaS is a multi-billion dollar business that continues to grow bigger. Indeed, SaaS has become so profitable that it’s one of the factors crippling the PC business.

It wasn’t always that way.

In the early days of the dot com boom, the computer industry tried to implement Software as a Service. The term for it was Application Service Providers (ASP) back then, but the premise was similar: Users pay for a subscription to an application accessed via a website.

Companies founded on the ASP model mostly failed. Sure, some evolved and survived to become SaaS-driven businesses, such as the Great Plains accounting system (it became Microsoft Dynamics) and Salesforce.com. Others – the would-be desktop-as-a-service company BmyPC, ASP services aggregator Agiliti, and early mapping company ObjectFX – survived as well, but it was no thanks to the ASP model.

So why is SaaS software development hotter than hot today, when yesterday it was as cold as pancakes left out overnight in a Vermont winter?

Let’s set the scene: There have been attempts to stop PCs from taking over IT for as long as there have been PCs. The diskless workstation, thin-client, and some client-server variations tried and failed in their time. Each of these has echoes in the success of SaaS. For example, one of the promised benefits of thin clients was a reduction in support costs, since end users got an up-to-date image of the operating system and corporate-approved applications. But IT wanted control; users wanted flexibility. (Sound like the current concern over “Bring Your Own Device”?)

I covered ASPs closely at one time. Their promises were the same as SaaS’s: low cost, easy deployment, painless upgrades. So was the deployment model: remote full-featured applications delivered over the Internet.

There was only one major difference. Properly designed SaaS applications make good on their promises. ASPs couldn’t.

Some of the reasons were purely technical. For example, in the ASP model the vendors often hosted multiple instances of third-party, client-server applications. In SaaS, providers develop their own applications and operate a multi-tenant infrastructure model. That is to say, while users access the same code base, their data and customized interfaces are kept separate from one another.
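That multi-tenant model is easy to sketch. The table, column, and function names below are invented for illustration; the core idea is simply one shared schema and code base, with every query scoped to a tenant (here in Python with SQLite):

```python
import sqlite3

# One shared database and schema for all tenants -- the "single code base."
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE contacts (tenant_id TEXT, name TEXT)")

def add_contact(tenant_id, name):
    db.execute("INSERT INTO contacts VALUES (?, ?)", (tenant_id, name))

def list_contacts(tenant_id):
    # Every query is scoped by tenant: customers share the code and the
    # infrastructure, but never see each other's rows.
    rows = db.execute(
        "SELECT name FROM contacts WHERE tenant_id = ?", (tenant_id,)
    )
    return [name for (name,) in rows]

add_contact("acme", "Alice")
add_contact("globex", "Bob")

print(list_contacts("acme"))    # ['Alice']
print(list_contacts("globex"))  # ['Bob']
```

Real SaaS platforms enforce the same separation with rather more machinery (per-tenant schemas, row-level security, encryption), but the economics follow from this shape: one implementation, many isolated customers.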

Why does that matter? SaaS takes full advantage of virtualization and cloud-based scalability. Thanks to virtual machines, it’s easy to spin up a new instance for each customer. While virtualization dates back to 1960s mainframes, in the bad old ASP days vendors often had to set up physical servers to meet user demand. For example, say a customer needed its London staff to have their own logins, requiring more instances of an application to be spawned. The request might come in at 10:00am their time… which happened to be 6:00pm in the ASP’s time zone. The ASP would have to manually set up the server and application for every new customer or customer request. That went about as well as you’d expect.
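The contrast can be sketched in a few lines. The `Provisioner` class below is a toy stand-in, not any real cloud API; the point is that once instances are virtual, that 10:00am London request is handled by code, not by someone racking a server at headquarters:

```python
class Provisioner:
    """Toy stand-in for a virtualized/cloud provisioning API."""

    def __init__(self):
        self.instances = {}

    def spin_up(self, customer, region):
        # With virtual machines, a new per-customer instance is cloned
        # from a template image on demand -- no human sets up hardware.
        instance_id = f"{customer}-{region}-{len(self.instances) + 1}"
        self.instances[instance_id] = {"customer": customer, "region": region}
        return instance_id

cloud = Provisioner()
# The London request arrives at 10:00am local time; nobody at the
# provider's home office needs to be awake to handle it.
inst = cloud.spin_up("acme", "london")
print(inst)  # acme-london-1
```

In production this call would be an API request to a hypervisor or cloud platform, but the operational difference from the ASP era is exactly this: provisioning became a function call.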

It wasn’t just an issue of manual setup. As Dana Gardner, senior analyst at Interarbor Solutions, pointed out in 2007, ASPs used custom development for their apps rather than spawned images. “The economy of scale is what killed a lot of hosting providers back in the ASP days and ran them out of business,” Gardner wrote. “They were just doing an implementation for every customer, as opposed to a single implementation that can now be used by multiple customers – personalized and managed. The people who use the application run and use it differently, but the implementation is pretty much the same for all customers.”

Even when ASPs’ architecture worked, most of the hosted applications were painfully slow. In May 2000, one long-gone ASP, Mi8, tried to provide us with e-mail and groupware services based on Lotus Notes. I wrote that running it was “as slow as running a marathon on crutches.” (And, yes, we used Notes internally at Sm@rt Reseller, so we knew what we were getting into; this was far worse than any regular Notes implementation.)

Besides, if you think people don’t trust the cloud now, you should have been with us in the ’90s, when people didn’t trust their Internet connections. And they had reason for their cynicism. Company networks had a hard time staying connected on 128Kbps ISDN lines. Even those who weren’t on dial-up weren’t willing to bet their business on something that wasn’t “right here.” This was as much an issue of perception, which changed over time, as it was of actual Internet connectivity numbers.

Even had the ASPs’ applications been speedy, the Internet wasn’t really fast enough to support them. In 2000, the Federal Communications Commission (FCC) reported that there were 2.8 million “high-speed” Internet lines. But they defined “high-speed” as faster than 200 kilobits per second. Not megabits, kilobits. Today, the average U.S. Internet speed is 21 megabits per second.
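The scale of that gap is easy to quantify. Take a modest 5 MB application payload (a figure invented purely for illustration) and the two link speeds above:

```python
def download_seconds(size_megabytes, link_bits_per_second):
    # 1 megabyte = 8,000,000 bits (decimal units, as the FCC counts them).
    return size_megabytes * 8_000_000 / link_bits_per_second

asp_era = download_seconds(5, 200_000)    # FCC's 2000 "high-speed" floor
today = download_seconds(5, 21_000_000)   # ~21 Mbps average today

print(round(asp_era))   # 200 seconds -- well over three minutes
print(round(today, 1))  # 1.9 seconds
```

A hundredfold difference in wait time, before accounting for latency or reliability, goes a long way toward explaining why hosted applications felt hopeless then and feel invisible now.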

Nor was Internet access ubiquitous, which made broad acceptance of hosted applications a dubious proposition. The FCC also reported in 2000 that “58 million consumers accessed the Internet through a dial-up connection in the second quarter of this year. By contrast, 2.8 million accessed the Net through cable modems, 1.1 million through Internet TV, and 286,000 through DSL. Thus, cable modems and DSL combined accounted for only 5 percent of consumer access to the Internet in the second quarter.”

Nowadays, we take Wi-Fi for granted; we all grumble if we cannot get online from a hotel room with access speeds fast enough to watch a movie on Netflix. But in the late ’90s, during the dot com boom, 802.11b had just been invented. In 1999, we at Ziff-Davis’ Sm@rt Reseller magazine did some of the first product reviews of Wi-Fi networking systems, and we used a lot of terms like “shows promise” and “expensive but exciting.” 3G and 4G were the stuff of science fiction. While some of us worked from remote offices or hotel rooms (always carrying a spare Ethernet cable), the mobile coffee-shop lifestyle was years away.

Even when they did work, most ASP applications were version 1.0-quality software in a universe where the desktop versions were at version 3.0. There was no good reason for a business to make a monthly financial commitment to an application that did less than the established software the business had already paid for. Today, a good SaaS application does everything the locally-installed software can, works anywhere our ever-mobile workforce happens to be, sidesteps the IT department having to provision and update every instance – and it looks good doing it.

But the technical barriers were not the source of the ASP failure. It was always a business problem.

On the business side, ASPs’ pricing models were out of whack. Nobody knew how much to charge, or what the traffic would bear. Turns out, it wouldn’t bear a lot. I remember early conversations about a “freemium” model in which everyone was sure that too few people would upgrade to make the company profitable. Yeah, that didn’t work out so well.

All this led to a very fundamental business problem: Not enough customers! As Rick Chapman, publisher of SoftLetter, an online publication devoted to SaaS, wrote, “The difference between SaaS and ASPs is that SaaS companies make money and ASPs didn’t.” As he explained, “In 2000-2001 I interviewed perhaps three dozen people from different ASP firms and asked why their companies were dead or dying. Invariably, the answer was lack of revenue from lack of enough people signing up for the ASP offering.”

That certainly was the case. Between the technical challenges and techies’ long-held assumption that businesspeople are ready for new ideas, the adoption ramp often was steeper than the ASPs could handle.

For example, I remember one ASP – it was owned by a friend of a friend – that set up a site where people could design custom packaging and have the boxes shipped to them. (Think: You need 50 boxes in which to ship asparagus, a vegetable that is wider at the bottom than at its fragile top.) Even after creatively addressing technical issues (e.g. launching a custom design app remotely) and building an impressive site UI in the days when “C-frames” were the state of the art in web design, the company didn’t take into account that packaging professionals are technology laggards. They’ve barely given up fax machines even today.

Chapman continued, “Too many SaaS companies launched into horizontal markets occupied by big bruiser companies ready, willing, and able to defend their turf. SaaS succeeded by moving into new markets and opportunities inherent in the on demand model.” (The one significant exception was Salesforce, he wrote.)

The ASPs were also prone to the other woes of dot com businesses. Some were over-funded and wasted their money. And far too many of the good ones, with vertical apps in areas without competition, were drained in the dot com whirlpool.

Could SaaS follow ASP into the grave? I doubt it.

While many factors killed the ASPs, none of the major ones are issues for SaaS. Oh, SaaS has its challenges: security, assuring high-quality service levels, and supporting multiple platforms (both desktop and mobile), to name but a few. But the technology has reached the point where we can now profitably deploy and run SaaS. ASPs, alas, were ahead of their time. Still, their failures should be kept in mind as we push the SaaS model further, lest we too try to deploy applications that are ahead of our technology or of users’ trust in that technology.

The Pre-History of Software as a Service was first published in Smart Bear.