Practical Technology

for practical people.

December 11, 2009
by sjvn01
0 comments

Linux Security Kernel Clean-Up

While Windows has more security problems than a barn dog has fleas, Linux isn’t immune to security holes of its own. Recently, two significant bugs were found, and then smashed. To make sure you don’t get bitten, you should patch your Linux system sooner rather than later.

Bug number one on the hit list is a remote DoS (denial-of-service) vulnerability that could potentially let an attacker crash your server by sending it an illegally fat IPv4 TCP/IP packet. Those of you who are network administrators may be going, “Wait, haven’t I heard of this before?” Why, yes, yes you have.

It’s the good old ping-of-death attack back again. What happened, according to the Linux kernel discussion list, is that somewhere between the Linux kernel 2.6.28.10 and 2.6.29 releases someone made a coding boo-boo that made it possible for this ancient attack to work again.
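
If you want a rough idea of whether the kernel you’re running even falls in the affected window, the little Python sketch below compares your running release against the 2.6.29 cutoff mentioned on the kernel list. It’s a minimal illustration with simplistic version parsing; it can’t tell you whether your distribution has already backported the fix, so your vendor’s security advisory is the authoritative word.

import platform

def kernel_version(release):
    """Turn a release string like '2.6.31-16-generic' into a comparable tuple."""
    base = release.split('-')[0]
    return tuple(int(part) for part in base.split('.')[:3])

running = kernel_version(platform.release())
regression_since = (2, 6, 29)   # the regression landed between 2.6.28.10 and 2.6.29

if running >= regression_since:
    print(f"Kernel {platform.release()} postdates the regression; make sure your distro has shipped the fix.")
else:
    print(f"Kernel {platform.release()} predates the regression described here.")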

More >

December 11, 2009
by sjvn01
2 Comments

Red Hat heads back to the desktop with SPICE

Red Hat is the number one Linux company, but they haven’t been interested in the Linux desktop for years. With the open-sourcing of SPICE (Simple Protocol for Independent Computing Environment), that’s changing.

SPICE, like Microsoft’s RDP (Remote Desktop Protocol) and Citrix’s ICA (Independent Computing Architecture), is a desktop presentation services protocol. They’re used for thin-client desktops, and SPICE will be too. In 2010, you can count on Red Hat returning to the Linux desktop.

But they won’t be doing it as a competitor to traditional desktops like Ubuntu 9.10 or Windows 7. Thin clients are meant for corporate desktops, like those in a company where Red Hat is already powering the servers. Remember, it’s in Linux servers, not desktops, that Red Hat has found its riches.

On the server side, SPICE depends on KVM (Kernel Virtual Machine) for its horsepower. Guess what virtualization software Red Hat focuses on? That would be KVM. So if you have a company that’s already invested in Red Hat on the servers, wouldn’t it make sense to offer them a complementary Linux desktop option as well? And perhaps sell a few more server licenses along the way? I think so, and Red Hat thinks the same way.

A few months back, I asked Red Hat CTO Brian Stevens if Red Hat was going back to the desktop. "Yes, Red Hat will indeed be pushing the Linux desktop again" with KVM, he told me.

The open-sourcing of SPICE is a step in that direction. Indeed, by the time Red Hat bought Qumranet, the company behind both KVM and SPICE, Qumranet had already released a complete KVM/SPICE virtualization program, Solid ICE.

Red Hat hasn’t announced a re-release of a now completely open source-based Solid ICE, but that’s only a matter of time. It’s the next step, and it will be a smart one.

I say that because, unlike RDP, ICA or Unix/Linux’s VNC (Virtual Network Computing), SPICE isn’t a “screen-scraper.” With those protocols, the server has to do all the heavy lifting of rendering the graphics. With Solid ICE and SPICE, though, each SPICE session can use local system resources. In other words, a SPICE Linux virtual desktop user can use their own PC’s graphics hardware. That means SPICE users get close to stand-alone desktop video performance while, at the same time, the servers aren’t being overloaded.
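
For the curious, here’s roughly what wiring a KVM guest up with a SPICE display looks like from the management side. This is a minimal sketch using libvirt’s Python bindings; the guest name, disk path and port number are made up for illustration, and it assumes a libvirt/QEMU build with SPICE and QXL support.

import libvirt

# Illustrative domain definition: a KVM guest whose display is exported
# over SPICE (port 5930) and backed by the QXL paravirtual video device.
DOMAIN_XML = """
<domain type='kvm'>
  <name>spice-demo</name>  <!-- hypothetical guest name -->
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/spice-demo.img'/>  <!-- hypothetical path -->
      <target dev='vda' bus='virtio'/>
    </disk>
    <graphics type='spice' port='5930' autoport='no' listen='0.0.0.0'/>
    <video><model type='qxl'/></video>
  </devices>
</domain>
"""

conn = libvirt.open('qemu:///system')  # connect to the local QEMU/KVM hypervisor
dom = conn.defineXML(DOMAIN_XML)       # register the guest with libvirt
dom.create()                           # boot it; point a SPICE client at port 5930
conn.close()

The point of the exercise: the heavy graphics work happens at the client end of that SPICE connection, not on the server hosting the guest.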

As Stevens explained in a statement about the open-sourcing of SPICE: "The SPICE protocol is designed to optimize performance by automatically adapting to the graphics and communications environment that it is running in, so vendors have a terrific opportunity to enhance it for their specific applications."

What all that means for you is that, some time in 2010, you can expect to see the release of Red Hat Enterprise Virtualization for Desktops. I expect it to arrive before another thin-client-like take on the Linux desktop, Google’s Chrome OS, does.

As I’ve said before, we’re in for some interesting times in 2010 with desktop operating systems.

A version of this story first appeared in ComputerWorld.

December 10, 2009
by sjvn01
0 comments

802.11n: Fast Wi-Fi’s long, tortuous road to standardization

For a technology that’s all about being fast, 802.11n Wi-Fi sure took its sweet time to become a standard.

In fact, until September 2009, it wasn’t, officially, even a standard. But that didn’t stop vendors from implementing it for several years beforehand, causing confusion and upset when networking gear that used draft standards from different suppliers wouldn’t always work at the fastest possible speed when connected.

It wasn’t supposed to be that way. But, for years, the Wi-Fi hardware big dogs fought over the 802.11n protocol like it was a chew toy. The result: it took five drama-packed years for the standard to come to fruition.

The delay was never over the technology. In fact, the technical tricks that give 802.11n its steady connection speeds of 100Mbps to 140Mbps have been well-known for years.

Instead, the cause was the usual one behind standards wars: mud-wrestling among major vendors. They squared off over which approach would become the one, true, money-making standard. In this case, that wrestling match hit a new low point — several times, just when it seemed like agreement had been reached, it turned out that a new fight was brewing instead.

As Andrew Updegrove, a partner with the law firm Gesmer Updegrove LLP and well-known standards advocate, says, a major reason it took so long for 802.11n to be finalized was “the amount of money at stake and the number of vendors in the marketplace whose lives can be made easier or harder depending on the outcome.” In addition, the potential value of the technology kept growing “as Wi-Fi became more ubiquitous, in more and more devices, for more and more purposes.”

On top of that, Updegrove continues, while the basic technology was well understood, there were all kinds of small differences among approaches that could be debated. Among these: the number of channels, and their sizes, that a single 802.11n device could handle.

Another complication, he says, is that 802.11n attracted an “incredibly large number of submissions as candidates for the final gold star.” There were dozens of versions submitted to the deciding standards body, the IEEE (Institute of Electrical and Electronics Engineers), all with fairly minor variations.

Last, but not least, Updegrove explains, the IEEE “operates on consensus, which means, as a practical matter, that competing submitters have to cut deals and enter into alliances to get down to a single winner, often by successive mergers of competing groups of submissions. This alone is a very time-consuming process.”

It was indeed. In the beginning, in 2003, there were four major groups fighting to decide 802.11n’s fate. Two of these standardization efforts — a joint submission from Mitsubishi and Motorola and another from Qualcomm — quickly died off.

After some further consolidation, that left two major groups. The first was the Task Group ‘n’ Synchronization, or TGn Sync. It counted Intel, Atheros Communications and Nortel among its members. The other was World-Wide Spectrum Efficiency (WWiSE). Airgo Networks, the first company to deliver network chipsets that used MIMO (multiple-input, multiple-output) technology, led this group.

Those two groups spent time battling with each other, but neither could gain the upper hand. With a 75% super-majority of the task force needed to approve the standard, the battle over whose version of the standard should prevail seemed unending.

Exhausted, in late 2005 the pair finally appeared to have hashed out their disagreements in the Enhanced Wireless Consortium and to have reached a compromise that would become the official IEEE standard, Updegrove says. But Airgo continued to hold out for its own take on the standard and it was able to block passage of the compromise version in mid-2006.

Adding insult to injury, even after these delaying tactics ended, the authors and editors of the 802.11n standard had to wade through more than 12,000 comments on the 2005 “final” draft. As Bill McFarland, CTO at Wi-Fi chip vendor Atheros and one of the draft’s editors and writers, observed at the time, “There were a lot of duplicate comments, and three people filed comments for each and every blank line in the document. The physical process of dealing with so many comments is tedious and time-consuming.”

In December 2006, Qualcomm purchased Airgo. With new ownership, Qualcomm/Airgo stopped fighting against the search for a consensus on the standard, and the first unified version, Draft 2, was passed in March 2007.

Further drafts were then passed in quick order, and it looked like 802.11n would finally be an established standard and that users could buy 802.11n equipment by 2008 with certainty that it would interoperate. As Molly Mulloy, spokesperson for wireless OEM Broadcom, explained, “In 2008, we believed that draft 2.0 of the 802.11n specification was very solid. All of the major technical items had been resolved, and only relatively minor wording issues remained.”

Well, that was the plan. But, alas, there was one last major issue — a big, bad patent problem.

Before the IEEE will approve any given standard, everyone with a patent that touches that standard must sign an LoA (Letter of Assurance). The LoA states that the patent holder won’t sue anyone using his or her patent in a standard-compliant device. In this case, the holdout was CSIRO (Commonwealth Scientific and Industrial Research Organisation), an Australian government research group that held a patent covering wireless LAN technology. CSIRO refused to sign the 802.11n-related LoA.

This led to a series of patent lawsuits. Apple, Dell, Intel, Microsoft, Hewlett-Packard and Netgear all attempted to overturn CSIRO’s 802.11n-related patents in court. They failed. Then, finally, in April 2009, the Wi-Fi vendors and allies gave up, paid up and signed patent licenses with CSIRO.

While the exact terms of all these deals are under nondisclosure agreements, the end result was that the 802.11n standardization process started moving quickly forward again. So it’s logical to assume that the settlement included an IEEE-acceptable LoA.

This means that, at long last, we will finally see interoperable 802.11n Wi-Fi access points, network routers and NICs (network interface cards). Vendors promise that customers will be able to update equipment built to the most recent 802.11n drafts — 2008 and newer — to the new, final standard.

Some industry watchers don’t see the final standard as being that big a deal. Analyst Paul DeBeasi of the Burton Group, for example, believes that the real battle was won when Draft 2 was finally approved back in March 2007. In a blog posting, DeBeasi wrote, “The sorry fact is that the final ratification will have virtually no impact on the wireless industry. This is because what customers care about most is product interoperability. The Wi-Fi Alliance stepped into the standards void in 2007 and began certifying product interoperability based upon IEEE 802.11n draft 2.0. The fact of the matter is that the Wi-Fi Alliance did such a great job with their 802.11n certification program as to make the final IEEE standard a non-event.”

But other analysts predict that the standard is just now taking off. Victoria Fodale, an analyst with In-Stat, expects 802.11n chipset revenue to surpass that of 802.11g this year, with total Wi-Fi chipset revenue topping $4 billion by 2012. Overall revenue for products based on the most recent standard will be greater than that for 802.11g in 2012, she says.

It won’t just be computer networks zooming along at high speeds, though. Fodale also sees shipments of 802.11n-enabled TVs, set-top boxes, personal media players, digital still/video cameras and even mobile phones increasing quickly.

At that point, 802.11n may have finally caught up with itself.

SIDEBAR: The technology behind really fast Wi-Fi

Technically, 802.11n achieves its performance by adding MIMO (multiple-input, multiple-output) technology to the earlier 802.11g design.

MIMO takes advantage of what has been one of radio communication’s oldest problems: multipath interference. This occurs when transmitted signals reflect off objects and take multiple paths to their destination. With standard antennas, the signals arrive out of phase and then interfere with and cancel out one another.

MIMO systems employ multiple antennas to use these reflected signals as additional simultaneous transmission channels. In other words, MIMO knits the disparate signals together to produce a single, stronger signal.

In addition, 802.11n uses channel bonding to increase its throughput. With this technique, an 802.11n device combines two adjacent, non-overlapping channels into a single, wider channel, roughly doubling the bandwidth available for data. Combined with MIMO’s multiple simultaneous data streams, that lets customers send and receive far more data at the same time.

As is the case with its predecessor, 802.11g, the new Wi-Fi operates in the 2.4GHz frequency range. Optionally, 802.11n can also operate at 5GHz. The new standard is backward-compatible with 802.11b and 802.11g.

SIDEBAR:

802.11 through the years

Slowest: 802.11 — 1 to 2Mbps. Established in 1997 and ran at the 2.4GHz frequency range. Now obsolete.

Slow: 802.11b — maximum throughput of 11Mbps. Normal throughput in practice: 4Mbps. Made a standard in 1999 and runs on the 2.4GHz frequency range. Most Wi-Fi devices still support 802.11b.

Faster: 802.11a — maximum throughput of 54Mbps. Normal throughput in practice: 20Mbps. Made a standard in 1999 at the same time as 802.11b, but regulatory slowdowns kept 802.11a off store shelves until 2002. 802.11a, which is still supported on some devices, runs on the 5GHz range. (Yes, the naming happened out of order; 802.11b made it to market much, much faster than 802.11a. Long story made short: It was easier to make 2.4GHz silicon than it was to make 802.11a’s 5GHz chip sets.)

Faster still: 802.11g — maximum throughput of 54Mbps. Normal throughput in practice: 20Mbps. Approved as an IEEE standard in 2003. Like 802.11b, it operates in the 2.4GHz range. While it has the same speed as 802.11a, it has a greater range inside buildings and so has become the most widely deployed Wi-Fi protocol.

Fastest: 802.11n — maximum throughput of 450Mbps. Normal throughput in practice: 100Mbps+. Approved in 2009. It can operate on both the 2.4GHz and 5GHz frequencies and is expected to supersede 802.11g devices. At the 2.4GHz range, 802.11n devices can also support 802.11g devices at the cost of lowering 802.11n device connection speeds by half. So, for example, an 802.11n router supporting both 802.11g and 802.11n devices would deliver only 50Mbps throughput to an 802.11n-based computer.

A version of this story first appeared in ComputerWorld.

December 9, 2009
by sjvn01
0 comments

Linux, Windows, or Mac: You need to patch Adobe Flash

I don’t think about Adobe Flash much. I just use it. I think that’s the case for most of us. Almost all the video on the Web is in Flash, and we just take it for granted. That’s a mistake. Like any other popular application, it can be an easy way for a cracker to hack into your computer.

Take Adobe Flash’s latest round of patches. Adobe doesn’t say a lot about exactly what it’s fixing in its security advisory, but of the seven security bugs it’s fixing, six address problems that “could potentially lead to code execution.”

That’s a fancy way of saying that they could be used to bust into your PC. Once there, they could install malware, rip off your personal data, and in general make your life a living hell.

More >

December 9, 2009
by sjvn01
1 Comment

Thunderbird is Go!

When Thunderbird, the Mozilla Foundation’s open-source e-mail client, first came out, I liked it a lot. But, then Mozilla put Thunderbird on the back burner to focus its attention on Firefox, and, frankly, Thunderbird slowly aged into a second-rate e-mail client. Now, at long last, a new, and vastly improved version of Thunderbird has just been released, and, let me tell you, it’s back to being great.

I started working with Thunderbird again with its beta 3 release earlier this year. I was immediately impressed. The 2008-vintage versions of Thunderbird were just, in a word, sad. This time around Mozilla got it right. Thunderbird, with the 3.0 release, deserves any e-mail user’s attention.

I say this after working with Thunderbird on two test systems. Both were Dell Inspiron 530S PCs with a 2.2-GHz Intel Pentium E2200 dual-core processor and an 800-MHz front-side bus under the hood. Both had 4GB of RAM, a 500GB SATA (Serial ATA) drive, and an integrated Intel GMA 3100 (Graphics Media Accelerator) chipset. On the first, I was running Windows XP SP3, and on the second, MEPIS Linux 8.0. Yes, that’s right: Thunderbird, like Firefox, runs on all popular desktop systems, including Mac OS X, and, I might add, in almost 50 different languages.

More >

December 7, 2009
by sjvn01
0 comments

Cheap WPA Wireless Cracking

Wi-Fi systems are always vulnerable to being broken into. I mean, just think about it: anyone can pick up your Wi-Fi signal. On top of that, many Wi-Fi security systems have long been busted. But now there’s a ‘service’ that claims it will help you break into relatively secure systems for $34. What a deal!

The service, WPA Cracker, is designed to bust Wi-Fi networks that use WPA (Wi-Fi Protected Access) with a PSK (pre-shared key), also known as Personal mode. Chances are that if you’re using WPA security in your SOHO (small office/home office) or small business network, you’re using Personal mode. I know I do.

In WPA-PSK, everyone uses the same WPA passphrase. That passphrase isn’t used directly; it’s run through a key-derivation function, along with the network’s name, to produce a 256-bit key. That’s enough to stop casual attackers, but these days you don’t need much technical expertise to bust WPA if you’re lazy about your passwords. For example, the Church of WiFi has gathered the most commonly used WPA passwords into look-up tables, also known as rainbow tables, to make it easier to break WPA passwords when they’re used on the 1,000 most popular SSIDs (service set identifiers, the broadcast names of Wi-Fi access points).
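
If you want to see why those rainbow tables have to be built per SSID, here’s a minimal Python sketch of the key derivation. The passphrase and network names below are made up for illustration; the derivation itself (PBKDF2 with HMAC-SHA1, 4,096 iterations, a 256-bit result, salted with the SSID) is how WPA-Personal turns a passphrase into its pairwise master key.

import hashlib

def wpa_psk(passphrase, ssid):
    """Derive the 256-bit WPA-PSK pairwise master key (PMK)."""
    return hashlib.pbkdf2_hmac('sha1',
                               passphrase.encode('utf-8'),
                               ssid.encode('utf-8'),   # the SSID is the salt
                               4096,                   # iteration count fixed by the standard
                               32)                     # 32 bytes = 256 bits

# Hypothetical passphrase and SSIDs, for illustration only.
print(wpa_psk('sunshine123', 'linksys').hex())
print(wpa_psk('sunshine123', 'my-office-ap').hex())   # same passphrase, different SSID, different key

Because the SSID salts the result, a precomputed table is only useful against networks whose names it covers — which is exactly why these tables target the 1,000 most common SSIDs, and why an unusual network name plus a long, random passphrase still puts you out of reach of this sort of service.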

More >