Practical Technology

for practical people.

February 7, 2007
by sjvn01

Linux hackers tackle WiFi hassles

When it comes to troublesome Linux peripherals, WiFi takes the cake. Sparked by the Portland Project’s efforts to bring standardization to the Linux desktop, the Linux wireless developer community tackled this problem at its second Linux Wireless Summit last month in London.

The Summit was scheduled as a follow-up to the January IEEE 802 standards committee meeting, which, among other issues, moved a step closer to making 802.11n a real IEEE standard. As a result of this timing, participants at the Linux WiFi meeting included kernel developers and vendor representatives from Intel, Broadcom, Devicescape, MontaVista, and Nokia.

Once there, according to Stephen Hemminger, Linux Wireless Summit co-coordinator and a Linux software developer at the Linux Foundation, the attendees had a very productive meeting.

Still, it’s been slow going in some critical areas of Linux and WiFi, according to John Linville, the Linux wireless software maintainer. In particular, Linville reported that development work is proceeding too slowly on a new 802.11 stack (d80211); and with a new WiFi API (cfg80211), “development is even slower.” Hemminger described the cfg80211 API as “a good start but there are no user interface tools (the iproute2 equivalent of iwconfig).”
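
For context, command-line tools like iwconfig talk to drivers today through the older wireless-extensions ioctl interface, the layer that cfg80211 is ultimately meant to replace. Here is a minimal sketch, mine rather than anything from the summit, of that legacy approach: asking an interface (the name “wlan0” is just an example) which wireless protocol it speaks.

    /*
     * Hedged sketch of the legacy wireless-extensions API that
     * iwconfig uses; cfg80211 is intended to supersede this.
     * SIOCGIWNAME returns the protocol name, e.g. "IEEE 802.11g".
     */
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <linux/wireless.h>

    int main(int argc, char **argv)
    {
        const char *ifname = (argc > 1) ? argv[1] : "wlan0";  /* example name */

        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) {
            perror("socket");
            return 1;
        }

        struct iwreq wrq;
        memset(&wrq, 0, sizeof(wrq));
        strncpy(wrq.ifr_name, ifname, IFNAMSIZ - 1);

        int ret = 0;
        if (ioctl(sock, SIOCGIWNAME, &wrq) < 0) {
            perror("SIOCGIWNAME (not a wireless interface?)");
            ret = 1;
        } else {
            printf("%s: %s\n", ifname, wrq.u.name);
        }

        close(sock);
        return ret;
    }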

As for d80211, a breakout session addressed its technical issues. Hemminger wrote in his report on the summit that most of its problems “are not specific to the d80211 stack but have existed all along in Linux wireless. The problem is that up to now each device was doing its own different implementation. The d80211 stack currently exposes a ‘master’ interface which is only used for reconfiguring the internal queuing disciplines. Since the master interface shows up as a device, it can be a potential point of bugs or user confusion so there was discussion on ways to get rid of it prior to acceptance in the main kernel.”

There was also a discussion of the d80211 stack’s more advanced features, which are seeing little use so far. In particular, Nokia was interested in QoS (Quality of Service) and WMM (WiFi Multimedia) support, because its existing device uses a proprietary userspace library.

As it is, “D80211 supports WMM based on the IP type of service (TOS) value in the socket API,” Hemminger added. This led to a discussion of whether or not existing glibc header files include the right values for WMM/TOS. It was noted that many existing multimedia applications, like Ekiga, Skype, and Google Talk, are not properly setting TOS.
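
To make “properly setting TOS” concrete, here is a minimal sketch, not taken from Hemminger’s report, of what an application would do: mark its UDP socket with a high-priority TOS value that a WMM-aware 802.11 stack can map to the voice access category. The particular value below is an illustrative assumption, not something the summit specified.

    /*
     * Hedged example: an application marking a UDP socket with a
     * high-priority TOS value. 0xb8 is the "Expedited Forwarding"
     * DSCP (46) shifted into the TOS byte, commonly mapped to the
     * WMM voice access category; treat the value as illustrative.
     */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <netinet/in.h>

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) {
            perror("socket");
            return 1;
        }

        int tos = 0xb8;  /* DSCP EF; high priority for voice traffic */
        if (setsockopt(sock, IPPROTO_IP, IP_TOS, &tos, sizeof(tos)) < 0)
            perror("setsockopt(IP_TOS)");
        else
            printf("IP_TOS set to 0x%02x\n", tos);

        close(sock);
        return 0;
    }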

Another breakout session dealt with regulatory concerns. This discussion led the group into facing the dilemma of proprietary hardware and firmware. According to Hemminger, “Hardware vendors license their equipment under FCC section 15 regulations, even though technically pure software devices could be under SDR (Software Defined Radio) regulations. FCC wants all devices to have a ‘no trespassing’ sign on radio settings but there is no consensus on what that means.”

As a result, “the only solution that can get certified in the current regulatory environment is to have a closed source component either in firmware (good), kernel (bad) or userspace (less evil),” Hemminger continued. The reverse-engineered drivers don’t have this problem, but the developers were concerned about the legal implications of redistributing them. “There is some concern since FCC has already stopped hackers who modify power levels on access points. Vendors are reluctant to address the SDR issues too directly because of the regulatory impact to existing non-Linux products if there was any problem.”

The FCC takes this position because it doesn’t want users to be able to tamper easily with radio settings or power levels. As a result, the only WiFi solutions that can be certified must include a closed source component. So, as distasteful as it is to some free software distributors, legal support for WiFi on Linux will often have to incorporate proprietary elements in one way or another.

Coming out of the meeting, the developers committed to making experimental wireless tarball (and driver) packages available, moving faster on the new cfg80211 API, and gaining a more complete understanding of the WiFi regulatory situation.

The next Linux Wireless Summit will be held this fall, most likely in October, either on the U.S. East Coast or in Israel.

A version of this story first appeared in DesktopLinux.

February 7, 2007
by sjvn01

Linspire sheds light on new “wiki-ized” CNR

Several weeks ago, desktop Linux distributor Linspire Inc. announced that it was going to open up CNR (Click N Run), its Web-based software downloader/manager, to other distributions. Now, the company is revealing more about what this new Linux software distribution system will look like.

First, in a letter to Linspire customers, Kevin Carmony, Linspire’s CEO and president, wrote, “Because the new CNR.com system was designed from the beginning with the intention of supporting multiple distributions (both Debian and RPM), most of the work for supporting a new distribution will already be done. The vast majority of the work is in building the overall system and has nothing to do with a specific distribution. This means that with just the small additional effort specific to a new distribution, we can leverage 100% of the CNR system.”

Thus, once the universal CNR is in place, we can expect to see new distribution support rolled out quickly. Why would Linspire, which supports both its own self-named distribution and the community-based Freespire, support other desktop Linux distributions?

Continue Reading →

February 6, 2007
by sjvn01

Weather alert: new Microsoft FUD storm expected

In recent weeks, Microsoft seems to have gone out of its way to put Linux down while boosting Windows. First, there was the bribetop scandal; then, the Wikipedia ‘correction’ affair. Now, the company is up to one of its oldest tricks: playing games with analyst reports.

This time around, Sunbelt Software is working with the Yankee Group, a research company with a poor reputation in Linux circles, to produce its “yearly major survey comparing Windows to Linux.” Here we go again.

What’s wrong with that? I’ll tell you what’s wrong with it. Sunbelt is a Microsoft Gold Certified Partner. In other words, they’re buddies with Microsoft. If respondents do have much bad to say about Windows, how many people do you think will see those results? I suspect they’ll go straight into the great bit-bucket in the sky.

Next, they’re only looking to survey people who read the Sunbelt publication, WServerNews. This is a publication that claims to be “the world’s largest newsletter focused on system admin issues for Windows NT4/2000/2003.” Funny, I don’t see the word “Linux” in there. Do you?

What do you think? Do you think people who read a publication devoted to Windows servers are going to have anything nice to say about Linux? If you do, I have a wonderful old bridge, lightly-used, in Brooklyn, that I’ll be willing to sell you at a remarkably good price.

Now, if Microsoft were just doing this “research” for its own benefit, I wouldn’t have any problem with that. After all, over at DesktopLinux.com, we do surveys, like our 2006 state of the Linux desktop. The difference is, we don’t pretend that the opinion of a group of largely Linux desktop users says a whole lot about the entire desktop universe.

The Yankee Group, however, proclaimed a couple of years ago that its independent study showed that Windows TCO (total cost of ownership) was better than Linux’s. It was Pamela Jones of Groklaw who later found out that Sunbelt was behind this “independent” study.

Anyone want to bet that when the press release goes out about this new survey’s results it won’t mention anything about it being done by a Microsoft partner with a group of self-selected Windows administrators? And, once it’s out, we can be certain that Microsoft will trumpet how much better Windows is than Linux on its Get the Facts website.

After all, as Mary Jo Foley points out in her All About Microsoft blog, recent court documents in the Iowa consumer antitrust case against Microsoft, Comes v. Microsoft Corp., show that as recently as 2002, Microsoft tried to force IDC analysts into tweaking their December 2002 study to put Microsoft in a better light. IDC wouldn’t go along.

You know, Microsoft, I have an idea. If Windows and Vista and all that are really better than Linux and the alternatives, why keep playing games with the facts? Why bribe bloggers? Why pay people to set the record straight? Why promote biased surveys?

Could it be that Microsoft, with its hundreds of millions of customers, with its billions of dollars of quarterly income, is running scared that people will start waking up one day to the fact that there are better and cheaper alternatives? Can you think of another reason? If so, I’d love to hear it.

Steven J. Vaughan-Nichols


Yankee Group responds!


Following the publication of this column, the Yankee Group sent us a rebuttal. In an effort to allow our readers to hear both sides of the story and form their own judgments, we have reproduced the response from Yankee Group research fellow Laura DiDio here.

A version of this story was first published in Linux-Watch.

February 6, 2007
by sjvn01

The real point of Unbreakable Linux: breaking Red Hat

Following my recent article, in which I wrote that neither I nor several financial analysis firms were aware of any companies planning to deploy Oracle’s Unbreakable Linux, a handful of companies have told me that they are giving Unbreakable Linux a try.

What I think is interesting is why they’re giving it a try, and what it tells us about Oracle’s intentions towards Red Hat.

First, none of these companies was willing to go on the record. Why? Because none of them wanted to get into trouble with Oracle. And, since all but one were Oracle customers, it’s easy to see why they wouldn’t want to get on Larry Ellison’s bad side.

The reason that most of them are trying Unbreakable Linux is that Oracle was offering an additional 50 percent discount — on top of the original 50 percent discount to RHEL (Red Hat Enterprise Linux) customers — to users who subscribed to the new operating system by January 28.

At this price, as several of the customers said, there’s no way that Oracle will break even on Unbreakable Linux sales. Oracle was also keeping this double-discount hush-hush. Not all customers or analysts knew that Oracle was being this aggressive with pricing.

That said, I’m still unable to find even a single customer who has replaced RHEL with Oracle Unbreakable. I did find several Oracle customers, however, using Unbreakable in new deployments.

At the same time, though, almost everyone I spoke with intends to use Oracle’s cut-throat pricing in price negotiations when it comes time to renew their Red Hat RHEL contracts.

The one exception was a mid-sized company that was not an Oracle customer. While this business was unhappy with Red Hat’s pricing, they didn’t find Oracle’s loss-leader pricing interesting. As one negotiator for the company said, “None of the Oracle reps could say anything except ‘We’re cheaper.’ In our discussions, Oracle was evasive on the service agreement, deployment details, levels of support, etc., etc.” They turned Oracle down.

To them, it appeared that Unbreakable Linux was nothing but an attempt by Oracle to undercut Red Hat pricing without anything of substance in and of itself. Since this company wanted a dependable enterprise operating system, and not a cheap operating system with open questions about service and support, they’re electing to stick with Red Hat and hire more in-house Linux IT staff.

As I look at the situation, it becomes ever clearer to me that Unbreakable Linux is really not a serious business operating system offering. It’s simply Oracle’s attempt to break Red Hat.

While this will certainly put short-term pressure on Red Hat, I doubt it will do much more than that. Low prices are all well and good, but if Oracle doesn’t back them up with service and support, no price will be low enough.

A version of this story first appeared in Linux-Watch.

February 5, 2007
by sjvn01

Super Kernel Sunday score: Linux 2.6.20, Vista 1.0

In a message entitled, “Super Kernel Sunday!” to the LKML (Linux Kernel Mailing List), Linus Torvalds announced news far more important than the Colts beating the Bears — to serious Linux users, anyway. The newest stable version of the Linux kernel, version 2.6.20, has been released.

Torvalds, with his tongue firmly planted in his cheek, went on: “Before downloading the actual new kernel, most avid kernel hackers have been involved in a 2-hour pre-kernel-compilation count-down, with some even spending the preceding week doing typing exercises and reciting PI to a thousand decimal places.”

And, Torvalds added, “As ICD head analyst Walter Dickweed put it: ‘Releasing a new kernel on Superbowl Sunday means that the important ‘pasty white nerd’ constituency finally has something to do while the rest of the country sits comatose in front of their [65-inch] plasma screens.'”

After some more fun, Torvalds moved on to business. “I tried rather hard to make 2.6.20 largely a ‘stabilization release.’ Unlike a lot of kernels lately, there aren’t really any big fundamental changes to some core infrastructure area, and while we always have bugs, I really am hoping that we fixed many more than we introduced,” wrote Torvalds.

The one major new feature in the 2.6.20 kernel is the long-awaited addition of KVM (Kernel-based Virtual Machine for Linux). KVM, like Xen and OpenVZ, is an open-source virtualization platform.

KVM works only on the latest x86 processors that include virtualization extensions, namely Intel’s VT (Virtualization Technology, aka Vanderpool) and AMD’s AMD-V (aka Pacifica) technologies. On chips that include these extensions, such as Intel’s Core 2 Duo, virtualization programs that use them can run much more efficiently.
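
As a quick illustration (mine, not from the kernel announcement), these extensions show up as the “vmx” (Intel VT) or “svm” (AMD-V) flags in /proc/cpuinfo, so a small check like the following sketch can tell whether a machine is KVM-capable.

    /*
     * Illustrative check for hardware virtualization support:
     * scan the "flags" lines of /proc/cpuinfo for vmx or svm.
     */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/proc/cpuinfo", "r");
        if (!f) {
            perror("/proc/cpuinfo");
            return 1;
        }

        char line[4096];
        int found = 0;
        while (fgets(line, sizeof(line), f)) {
            if (strncmp(line, "flags", 5) == 0 &&
                (strstr(line, " vmx") || strstr(line, " svm"))) {
                found = 1;
                break;
            }
        }
        fclose(f);

        puts(found ? "VT/AMD-V extensions present"
                   : "No VT/AMD-V flags found");
        return 0;
    }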

With Linux 2.6.20, KVM consists of a loadable kernel module, kvm.ko, which implements the core virtualization infrastructure, along with a processor-specific module, kvm-intel.ko or kvm-amd.ko, which supports the appropriate instruction set. At this time, KVM also requires a modified QEMU to work properly. QEMU is an open-source VM (virtual machine) monitor, or “hypervisor.”
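
Once kvm.ko and the matching processor module are loaded, they expose a /dev/kvm character device that userspace tools such as the modified QEMU talk to. As a minimal sketch (not part of the original article), a program can confirm that KVM is available by querying its API version:

    /*
     * Hedged sketch: open /dev/kvm and ask for the KVM API version
     * via the KVM_GET_API_VERSION ioctl from <linux/kvm.h>.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/kvm.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR);
        if (kvm < 0) {
            perror("/dev/kvm (is kvm.ko loaded?)");
            return 1;
        }

        int version = ioctl(kvm, KVM_GET_API_VERSION, 0);
        printf("KVM API version: %d\n", version);

        close(kvm);
        return 0;
    }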

With KVM and the right chips, users will be able to run multiple VMs hosting unmodified Linux and Windows guests. Each VM has its own private virtualized hardware: a network card, disk, graphics adapter, and so on.

In one early test of a Linux 2.6.20 release candidate with KVM, “KVM was not the clear winner in all of the benchmarks,” according to a review by Michael Larabel on Phoronix.com. It did do well, however, and its strong points reportedly included “high performance, stable, no modifications of the guest operating system are necessary, and a great deal of other capabilities (e.g. using the Linux scheduler).”

Source code to Sunday’s Linux 2.6.20 release can be downloaded from kernel.org. Expect to see it begin showing up in your favorite distro in the coming months.

Vista, meanwhile, remains at version 1.0.

A version of this story first appeared in Linux-Watch.

February 1, 2007
by sjvn01

Microsoft & Novell’s Joint Lab

When Novell and Microsoft announced their unlikely partnership, a part of the arrangement that got little attention at the time was that they’d create a joint research facility, where both companies’ technical experts would collaborate on new joint software solutions. Now, they’re staffing up.

According to Sam Ramji, Microsoft’s director of platform technology strategy, the companies are looking for a few good program managers and software engineers to populate that joint research facility.

In a Port 25 message, Ramji wrote that as a result of the partnership, the two companies are opening a Joint Interoperability Lab, which “will be around for the long term, and will focus on interoperable virtualization between the Windows and SLES (SUSE Linux Enterprise Server). This lab will be part of the product engineering teams for both companies.”

In particular, the Lab will focus on several areas: “Virtualization, Office OpenXML/ODF interoperability, WS-Management interoperability, and directory federation.” Ramji and his Novell colleagues are looking for program managers and software design engineers. Depending on the particular job, one might work for Novell, while another would draw his or her paychecks from Microsoft.

The job descriptions make it clear, though, that virtualization is at the top of the priority list for the two companies.

Specifically, Microsoft wants a “Software Design Engineer in Test, Linux Interoperability” and a “Program Manager, Linux Interoperability,” while Novell is seeking a “Software Design Engineer in Test, Windows Interoperability.”

For its software engineer, Microsoft wants an experienced “Software Development Engineer in Test who can take on the challenging role of qualifying Microsoft’s new Longhorn Server Hypervisor based virtual machine solution in a collaborative project with Novell. This position will require candidates with substantial knowledge of Microsoft’s device driver models; strong experience in developing and testing software written in C, C++ or C#; working knowledge of Linux (preferably SLES); and knowledge of Microsoft’s server class feature and applications.”

From this, it would seem that Microsoft and Novell have joint plans for XenSource Inc.‘s Xen virtualization system. Microsoft announced a strategic partnership with XenSource last July. At the time, Bob Muglia, the senior vice president for servers and tools at Microsoft, said, “Virtualization is an important trend in the industry as well as a specific area where there are great opportunities for interoperability because of the ability for an operating system such as Windows, with the virtualization technology we are building in, to support Linux in a very native and high-performance way.”

Novell, of course, has long partnered with XenSource. Xen is already working in SLES 10 (SUSE Linux Enterprise Server). Indeed, there have been recent rumors linking Novell and XenSource even more closely.

In turn, Novell is looking for “an experienced Software Development Engineer in Test who can take on the challenging role of qualifying SLES10 based virtual machine solution in a collaborative project with Microsoft. This position will require candidates with substantial knowledge of Linux device driver models; strong experience in developing and testing software written in C, C++ and various scripting languages; working knowledge of Microsoft server environment; and knowledge of server class feature and applications on Linux.”

It seems clear that a virtualization solution that will run both on Longhorn, the next version of Windows server, and on future editions of SLES is in the works. Microsoft’s other job description, though, indicates that Microsoft isn’t replacing its own home-grown virtualization program, Viridian, with Xen.

Regarding its “Program Manager, Linux Interoperability” job opening, Microsoft says the person who is hired for that “highly visible senior program management position will have the opportunity to work in one of the core areas of growth for Microsoft.”

Microsoft adds that “The main focus of this position is to drive interoperability between Linux and Windows, including planning and leading the Microsoft/Novell Joint Interoperability Lab. This is a multi-million dollar, multi-year effort that will ensure high performance and availability of both SUSE Linux on Viridian and Longhorn Server on Xen.”

This job will require the manager to lead a small team of software engineers in virtualization product development with both the Microsoft and Novell virtualization engineering teams, analyze Fortune 100 customers’ needs, engage with open-source communities, work on Microsoft’s Interoperability Roadmap, and “Scale impact of interoperability work across the company, including worldwide field engagement.”

Need it be said that Microsoft is looking for a top-level technical leader for the job? In addition, this will be “a high visibility role that involves strategic and technical communication at all levels.”

One question that any job seeker will have, though, remains unanswered: Ramji said that the location for the Joint Interoperability Lab is still secret.