Practical Technology

for practical people.

October 11, 2006
by sjvn01
0 comments

Portland points desktop Linux at $10 billion market

Nearly a year in the making, the OSDL and freedesktop.org today announced general availability of Portland 1.0, the first set of common interfaces for GNOME and KDE desktops. This support may be a small step for GNOME and KDE, but it’s a giant leap for the Linux desktop.

The first of these common interfaces is a set of command-line tools, xdg-utils, which ISVs (independent software vendors) can use to help install software and to give their applications access to the system while they are running.

Specifically, these tools make installing and uninstalling menus, icons, and icon resources easier for developers. They can also query the system's settings for handling different file types, and give programs access to email, the root account, preferred applications, and the screensaver.
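
For a concrete picture of how an ISV installer might drive these tools, here is a minimal Python sketch. The xdg-icon-resource, xdg-desktop-menu, xdg-mime, and xdg-open commands are real xdg-utils tools; the application name, icon, and .desktop file below are hypothetical examples, not anything shipped with Portland.

```python
#!/usr/bin/env python
# Minimal sketch of an ISV install script calling the Portland 1.0 xdg-utils.
# The "acme-editor" name, icon, and .desktop file are made up for illustration.
import subprocess

def run(*cmd):
    print(" ".join(cmd))
    subprocess.check_call(cmd)

# Install the application's icon into the desktop's icon theme.
run("xdg-icon-resource", "install", "--size", "48",
    "acme-editor.png", "acme-editor")

# Register a menu entry from a freedesktop.org .desktop file.
run("xdg-desktop-menu", "install", "acme-editor.desktop")

# Ask the desktop how it currently handles a given file type.
handler = subprocess.check_output(["xdg-mime", "query", "default", "text/html"])
print("Default HTML handler:", handler.decode().strip())

# Open a document with the user's preferred application, whatever it is.
run("xdg-open", "README.html")
```

The same script works whether the user is running KDE or GNOME, which is the whole point of Portland.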

There’s nothing new in this kind of functionality. What is new is that developers can use these tools regardless of which desktop environment, KDE or GNOME, they’re targeting. This means ISVs can design programs for both environments much more easily.

Unlike some theoretical standards, Portland 1.0's tools are already available in several major community distributions, including Debian, Fedora, and openSUSE. The corporate Red Flag and Xandros distributions have also committed to including Portland in their next releases. Sources said that Linspire, Novell, and Turbolinux are also expected to announce Portland adoption shortly. Trolltech's Qt 4.2, the primary KDE application framework, already uses Portland 1.0 to give developers tighter integration with the GNOME desktop environment.

John Cherry, the OSDL’s (Open Source Development Labs) Desktop Linux initiative manager, said that this support from the actual movers and shakers of desktop Linux is vital. “The important part of this release is that we have real distros and they’re putting the tools in their development trees.”

The release of Portland 1.0 is expected to accelerate adoption of Linux on the desktop. According to market researcher IDC, this will help the desktop Linux market grow to around $10 billion by 2008.

OSDL CEO Stuart Cohen stated, “For the first time, ISVs are able to port their applications to Linux regardless of desktop environment. This release gives ISVs the opportunity to increase their customer base and for users to gain access to new applications. Portland is a success story for vendors, developers and users alike — it’s a perfect example of how a common need, combined with a distinct community interest, produces collaboration and increased adoption of technology.”

Xandros CEO Andreas Typaldos added, “Portland 1.0 opens the way to the creation of a rich Linux application infrastructure that will address the diverse needs of our business clients. We will see an accelerated rollout of real-world Linux solutions since third party software developers can now integrate their applications regardless of the desktop deployed.”

It’s not just the Linux businesses that are excited by Portland. Agustin Benito, development coordinator for mEDUXa, the educational Linux distribution of the Canary Islands Government, said that “Compatibility for educational applications and other free software projects across graphical desktop environments is critical, especially as we customize menus to include applications from other products. This makes the job OSDL is doing with the Portland Project so important for the community.”

This is only the first Portland release. A similar set of interfaces will next be offered as desktop services, in the form of a DAPI (Desktop Application Programming Interface) that applications can use via the D-Bus message bus system.

In addition, through Portland, desktop developers are working on other ways to find common programming ground to make Linux more ISV friendly.

The Portland Project was born from the first OSDL Desktop Architects Meeting in December 2005. A Portland preview was made available in April 2006, and beta versions were released throughout the summer. The Portland Project is expected to be included in the Linux Standard Base (LSB), the industry standard for interoperability between applications and the Linux platform.

If it’s not already in your development tree or toolkit, xdg-utils is available for download.

October 9, 2006
by sjvn01
0 comments

Open Source Madness

I love free software. I use open-source programs and operating systems every day. But once in a while, I want to take some free software developers and shake them until their teeth rattle. At the moment, I’m ticked off because the Debian community’s recent hissy-fit over the Mozilla Corp.’s trademarked Firefox logo has led them, and others, to fork the Firefox code to avoid using the logo.

Gnuzilla, part of the Free Software Foundation’s GNU Project, is creating “the ‘GNU/Linux’ version of same, to be dubbed ‘IceWeasel.’” This may, or may not, become the logo-free version of Firefox that Debian will ship in its next distribution.

Regardless of how this turns out, the Firefox “bug” has been removed from Debian.

What are these people thinking!

Don Armstrong, a Debian developer who is active in legal issues affecting free software, told me, “The issue here is purely the license on the firefox logo; all parts of the Debian distribution have to be modifiable by those to whom we distribute. The firefox logo cannot be modified, and so we cannot ship it. Instead, we have been shipping the logo which is freely licensed.”

I guess we can’t keep the Firefox baby if the bathwater of logo-hackers might be offended!

Yes, I know, I know, it’s against a strict interpretation of the Debian Social Contract. You know what? I don’t care for fundamentalists.

There’s also another problem. The Gnuzilla version is an honest-to-goodness fork of Firefox. The first change is an automatic block for Web sites that use zero-size images on other hosts to keep track of cookies. The second change alerts users when a site tries to rewrite the host name in links, which redirect the user to another site, to track clicks.

This is great. That’s just what we need, a fork of perhaps the single most important open-source application.

It will mean more work for programmers. It will mean more work for Firefox, or should I say IceWeasel, extension developers. It will be what all forks are: a major pain for both users and developers.

By winning this “battle,” the pedantic Debian developers have helped the proprietary forces of Microsoft and friends far more than the cause of Open Source.

Think. When you get a Windows box, you know Internet Explorer is the default browser. When you get Linux, there may be several browsers available to you, but you always know that Firefox will be one of your choices. Or, rather, it used to be.

If the IceWeasel fork gains popularity, people will also have to deal with two slightly different browsers.

This will make would-be Debian users just a touch confused. And, you know what? It doesn’t take much to befuddle users. A confused user isn’t a happy user.

Given a choice between the Firefox they already know about and “IceWeasel,” they’re going to go for Firefox. Or, maybe, just maybe, instead of dealing with this confusing Linux stuff, they’ll stick with Windows after all.

That will be just fine with some Linux users. You know the ones. The ones who grumble about that damn upstart Ubuntu, the ones who “correct” people that it’s not “Linux,” it’s “GNU/Linux.” In short, it’s the ones who want Linux to stay a techie paradise, where freedom trumps common sense.

Sorry, that’s not my crew. I want Linux to be as user-friendly as Mac OS X, as powerful as AIX, and without this nonsense of having different names and icons for the same blasted programs.

A version of this story first appeared in DesktopLinux.

October 4, 2006
by sjvn01
0 comments

Mandriva Corporate Server 4 misses the mark

Some people think that I’m always pro-Linux and anti-Microsoft. Nope. That’s not true. I’m pro what works, and I’m anti what doesn’t work. Most of the time, Linux has been a winner. But, sometimes, it isn’t. And, that brings me to Mandriva Corporate Server 4.

This distribution was to be Mandriva’s big step up into the business Linux world. This was to be the Linux that would challenge SLES (SUSE Linux Enterprise Server) and RHEL (Red Hat Enterprise Linux) in the climb to the top of the corporate ladder.

Ah… it’s not.

For starters, as eWEEK Labs found out in its recent Mandriva review, it just doesn’t work that well.

One of Mandriva’s strong points was to be that it supported three of the major Linux-friendly virtualization technologies: Xen, OpenVZ, and VMware. Unfortunately, it doesn’t do well by any of them. SLES and RHEL both outdid it with Xen, Debian does OpenVZ better, and VMware… well, the VMware programs aren’t there yet, although they’re supposed to become available RSN (real soon now).

This is the kind of thing that usually annoys me about Microsoft products. When I buy software, I expect it to at least have all the parts in the box in somewhat working condition. Mandriva should have held off shipping this Linux until it was ready for prime-time.

The Labs also found that Mandriva’s three different system management and configuration tool sets were not at all well-integrated. I can live with different management tools when they handle different jobs. I’m not crazy about it, but I can live with it.

For what it’s worth, I agree with the Labs. Since Mandriva is making use of the popular free Webmin administration program set, they should just go ahead and make it their standard administration console.

Yes, Webmin, which is based on Perl, is a bit slow, but there’s little you can’t do with it once you have the right Webmin modules in place and you get the hang of the system.

As for Mandriva, the French Linux distributor still needs to get the hang of a business-ready server before they’ll be ready to compete with Novell/SUSE and Red Hat.

A version of this story first appeared in Linux-Watch.

October 3, 2006
by sjvn01
0 comments

Why Linux will Dominate the Future of Servers

George Weiss, Gartner’s open-source analyst, recently said that Microsoft Windows will not suffer irreparable damage on the server side at the hands of Linux over the next five years. He’s right. Microsoft will fall flat on its face all by itself, and Linux will pick up the pieces afterwards.

It’s very simple.

What does any business want from servers today?

Go ahead, take a look at the latest server software and hardware news. I’ll wait for you.

Continue Reading →

October 3, 2006
by sjvn01
0 comments

HDMI 1.0 vs. HDMI 1.1

A friend was in the market for an HDTV recently, and she ran into a perplexing question: “Which is better, HDMI (High-Definition Multimedia Interface) 1.0 or HDMI 1.1?”

It’s a good question; even people who can talk intelligently about the differences between 1080i (interlaced) and 720p (progressive) can be stumped by this one.

Continue Reading →

October 1, 2006
by sjvn01
0 comments

The Old and New Virtualization Models are on their way to a Server near You

Virtualization has been used for decades on midrange systems and mainframes, but now, thanks to faster and more powerful processors, it is moving from IT-managed server rooms to the x86-powered server in the corner.

Virtualization enables a single computer to run multiple operating systems simultaneously. This, in turn, enables companies to use a single server more efficiently and cost-effectively while reducing the total number of servers they need. Today, not only can lower-end servers do this, but software designers are also experimenting with lightweight ‘containers,’ which virtualize application spaces instead of an entire operating system.

Al Gillen, VP of system software at IDC, said that virtualization is “a hot topic and is on the minds of customers who want to know how this affects them and their products.”

Specifically, according to Enterprise Management Associates senior analyst Andi Mann, “Almost 75 percent of surveyed enterprises have already deployed virtualization in one form or another, and the virtualization market is increasing by approximately 26 percent on average. Less than 4 percent of surveyed enterprises have no virtualization, and do not plan to implement it.”

Several factors are driving virtualization’s growth. The strongest of these, everyone agrees, is server consolidation. By putting the work of several computers onto a single server in virtualized instances, companies can save significantly.

Another important part of the puzzle is disaster recovery. “I would say about 70% of people who are deploying virtualization for x86 servers are also doing disaster recovery for the first time for many of their servers. That is a very, very big deal,” said Tom Bittman, a Gartner VP.

“With pure server consolidation users get a much more efficient use of computing resources, and we see a three to six-month ROI,” said Dan Chu, senior director of developer and ISV products at leading x86-architecture vendor VMware.

As a result, enterprises are willing to spend on virtualization. Gillen expects virtualization spending to reach $15 billion by 2009.

There are other advantages to this new lightweight approach to virtualization. Mann said that virtualization makes a single computing resource, such as an operating system, an application, or a storage device, appear as many logical resources. Thus, while server virtualization will remain the most popular kind of virtualization, Mann expects storage and file system virtualization to see the highest growth rate over the next five years.

However, virtualization faces several challenges to widespread adoption. A major problem is that IT vendors are still unsure as to how they should bill for virtualization.

The $64,000 question, said Gillen, is: “How do you license for that?” While IBM has long had a metering model from its mainframe business, there’s no agreement on how a business should bill for x86-based virtualization services. Many vendors find the idea of meter-based pricing “something that is complex and is frightening.”

Working examples of virtual machines go back to at least the mid-1960s and the joint work of IBM and MIT on the M44/44X Project. Soon thereafter, IBM created and began selling a series of virtual machine-enabled operating systems, the most well-known of which is probably VM/370. These were used to maximize usage of IBM’s mainframe systems.

Even in those early days, the most common way of realizing virtualization was already in place: the virtual machine monitor (VMM), the first hypervisor.

A hypervisor provides the guest operating system with a virtualized abstraction of the underlying computer hardware. The guest operating system then runs on this virtual machine, with the hypervisor taking care of transferring its requests to the actual CPU, network, and storage.

From a user’s viewpoint, the virtualized operating system should look and feel just as if it were running natively on the hardware. VMware’s server products and Microsoft Virtual Server are well-known examples of virtualization software that uses a hypervisor.

Almost all computer and operating system vendors are adopting virtualization. Even the CPU manufacturers are getting into the act: Intel’s VT (Virtualization Technology) and AMD’s SVM (Secure Virtual Machine) CPU extensions provide virtualization software with APIs (application programming interfaces) that let CPU memory management be handled by the CPU microcode.

With this traditional technique, each virtual machine has its own operating system instance and its applications. Of course, you might not need all of that, and that’s where the newer approaches to virtualization come in.

In the container concept, the developer tries to virtualize not the entire operating system but an application space to run a particular program. This saves on processor and memory use by sharing system calls with the host operating system.

For example, as implemented in Sun Microsystems’ Solaris 10, containers share a common kernel but run as separate entities. The Fair Share Scheduler in the host Solaris kernel then apportions system resources from both the local server and other servers connected to it within a networked grid of Solaris boxes.

SWsoft’s Virtuozzo for Linux 3.0 uses different terminology (containers are VPSes, or virtual private servers, in Virtuozzo jargon), but it’s the same idea.

One drawback of this approach is that the applications running in the container must also run on the host operating system, according to Jason Brooks, senior eWEEK technical analyst. Thus, you couldn’t run a Solaris application on a Linux system or vice-versa.

Another popular recent approach, with a similar aim of reducing the hardware’s workload, is paravirtualization. This is the path that XenSource’s Xen has taken.

According to Simon Crosby, Xen’s CTO, “Paravirtualization involves making the virtual server operating system aware of the fact that it is being virtualized, and enabling the two to collaborate to achieve optimal performance. On Linux, BSD, or Solaris x86 the paravirtualized operating system sees Xen as an idealized hardware layer–a new form of hardware. For Windows and other guests that are unaware of Xen, the hardware virtualization of Intel VT (virtualization technology), combined with paravirtualizing device drivers in Windows are used.”

This is done, according to Crosby, by a paravirtualizing hypervisor. Because paravirtualization provides a virtual machine for the guest operating system, users can run non-native applications on a given operating system. For example, Crosby said, “Xen-enabled Linux guests will run with full benefits of paravirtualization on the upcoming Windows Hypervisor, code named Viridian.”

How users implement these lightweight-virtualization approaches varies with the system. With Solaris containers, it’s done by setting them up with command-line-based scripts. Once in place, they can be managed with the GUI-based Solaris Container Manager. SWsoft, however, gives users a choice of a fat-client graphical interface, a Web-based administration portal, or the command line.

With paravirtualization, the host operating system itself already has the hypervisor baked in. So SLES (SUSE Linux Enterprise Server) 10 comes with Xen ready to go. To use it, a system administrator simply uses the Xen configuration module to install new, virtualized operating systems. Once in place, the virtual system is used just as if it were the main operating system.
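
For readers who have never seen one, a Xen guest of this vintage is described by a short configuration file written in Python syntax. The following is a hedged sketch of what such a file might look like; the kernel version, disk image path, and guest name are invented for illustration and are not taken from SLES documentation.

```python
# Hypothetical /etc/xen/sles10-guest: a minimal xm-style Xen domain
# configuration. All names and paths here are illustrative examples.
name    = "sles10-guest"                 # domain name shown by 'xm list'
memory  = 512                            # RAM for the guest, in MB
vcpus   = 1                              # number of virtual CPUs

# A paravirtualized guest boots a Xen-aware kernel supplied by the host.
kernel  = "/boot/vmlinuz-2.6.16-xen"
ramdisk = "/boot/initrd-2.6.16-xen"
root    = "/dev/xvda1"

# Virtual block and network devices handed to the guest.
disk = ["file:/var/lib/xen/images/sles10-guest/disk0,xvda,w"]
vif  = [""]                              # one NIC with default settings

# The guest would then be started with:  xm create sles10-guest
```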

The business model for all these ways to approach virtualization remains the same as ever: provide services and software to businesses to maximize the value they get from their server hardware.

One of the ways that vendors are doing this is by providing virtualization deployment and management tools. At the same time, many see a push toward tighter integration between systems management and virtualization management.

Ron Rose, CIO at Priceline.com, believes that we’re on our way to a data center operating system. “Inevitably you will see unifying work from a variety of companies. This is already present in provisioning tools from BladeLogic and Opsware. Virtual Iron, XenSource and VMware are trying to address the distribution of a unit of work.”

IBM, for example, is already supporting Xen on SLES with its Virtualization Engine. This, in turn, will enable users of IBM’s Enterprise Workload Manager to administer virtualized systems.

The problem of virtualization management, though, is far from solved. “The biggest unanswered question,” said Gillen, “is how these multiple operating systems will be managed in a virtualized environment, as well as whether these tools will be integrated into the operating system or offered by third-party vendors.”

SWsoft is also making container management tools for other virtualization solutions. Its Virtuozzo “management tools will include support for other virtualization solutions, including VMware virtual servers. This gives data center managers unprecedented control of virtualized resources and enables them to use various virtualization technologies without being tied to a single vendor’s management tools,” said Serguei Beloussov, SWsoft’s CEO.

The main reason to use containers or paravirtualization is because either way requires fewer computing resources than full-fledged virtual machines.

For example, operating-system components such as the kernel and an application’s libraries need to be loaded into RAM only once. By not requiring every virtual application to have its own complete copy of its operating environment, this approach saves on both memory and CPU usage.
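
As a rough, back-of-the-envelope illustration of that saving (the figures below are invented for the example, not measurements), consider ten guests run as full virtual machines versus ten containers sharing one operating system image:

```python
# Back-of-the-envelope comparison of memory use for full virtual machines
# versus containers sharing one host kernel. All figures are invented,
# illustrative numbers, not benchmarks.
GUESTS = 10
OS_OVERHEAD_MB = 256      # kernel, libraries, system services per full VM
APP_MB = 128              # the application itself, needed in either case

full_vms   = GUESTS * (OS_OVERHEAD_MB + APP_MB)
containers = OS_OVERHEAD_MB + GUESTS * APP_MB   # one shared OS image

print(f"10 full VMs:   {full_vms} MB")      # 3840 MB
print(f"10 containers: {containers} MB")    # 1536 MB
```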

According to Jeff Jaffe, Novell’s CTO, “By allowing portions of the guest operating system and application to be modified to exploit the new hardware, paravirtualization allows much higher performance than previous approaches.”

As for containers, SWsoft claims that its virtualization technology can be used to provide zero downtime. This is done by a migration feature that captures the state of an existing virtual server and its contents and migrates it to a new physical server. “With these new capabilities we are bringing the advantages of efficient virtualization technology to an even wider range of organizations by making it even easier to configure and administer highly available virtual servers,” said Beloussov.

Disadvantages

If virtualization is all so wonderful why aren’t we all using it already? Part of the problem is that there are so many different approaches.

Bob Shimp, the vice president of Oracle’s technology business unit, explained that Oracle believes “in one simple universal way to integrate a variety of virtualization solutions, and that is the way that Andrew Morton [the maintainer of the stable Linux kernel] wants to go.”

Indeed, Shimp continued, “Oracle is losing its patience over this issue and we are going to be pushing harder and harder on everybody to come to the table with a realistic solution.”

Oracle isn’t the only one that sees it this way. Greg Kroah-Hartman, a Linux kernel maintainer, said, “Xen and VMware both supply huge patch-sets and are both trying to do the same thing, but their technologies don’t work with one another, and we are telling them that we do not want to take one over the other, we want them to talk and work it out.”

Many companies are now working to solve this problem. Perhaps the most significant effort in this direction is the work of VMware, XenSource, IBM and Red Hat collaborating on a Linux kernel paravirtualization API. This API could then be used by any Linux-based hypervisor. However, at this time, there is little to show for these efforts.

Another problem with the newer methods, said VMware president Diane Greene, is that by tying an operating system to a particular virtualization technology, as is the case with Xen and Linux and the container approaches, “it could be at a very high cost in terms of providing freedom of choice.”

Greene continued, “When a customer goes to deploy a service or an application, they’re focused on the application. They’re not focused on the operating system. So when they go to deploy that application, they just want to get the optimal software stack to run that application.”

Another problem has nothing to do with the technology per se, but everything to do with its pricing. The usual software business model assumes that a single operating system will run on a computer with one or more processors. Now CPUs not only have multiple cores, but thanks to virtualization, they also run multiple operating systems, and with containers, multiple instances of applications.

For instance, Brent Callinicos, Microsoft’s corporate VP for worldwide licensing, announced in 2005 that Microsoft licenses its virtualized operating systems and applications “by running instance, which is to say the number of images, installations and/or copies of the original software stored on a local or storage network.”

Only a few months later, though, Microsoft somewhat backed off from this plan. The company will let users of its forthcoming Enterprise Edition of Windows Server 2003 R2 run up to four instances of the operating system at no additional charge.

IBM, with easily the most experience of any company in virtualization, uses IBM Tivoli Usage and Accounting Manager to charge customers by their software usage. Rich Lechner, IBM’s VP of virtualization, explained that this solves the pricing problem: “A key inhibitor to widespread adoption of virtualization is the ability to track usage and accurately allocate costs of a shared infrastructure to internal departments, or in the case of IT outsourcing providers, to charge their clients for the amount of IT they are consuming.”

VMware’s Greene, however, sees virtualization licensing “working toward a per-virtual machine model that is slightly different from a per-server model.”

The truth is that there is still nothing like common ground on virtualization billing.

Besides its use in servers, virtualization software, both traditional and paravirtualization, is also finding its way into desktop PCs. For example, Jason Perlow, a systems architect who works on Unisys’ open-source solutions, uses VMware Player “to prototype systems in a jiffy, try new software, and run Windows programs on my Linux system.”

The question today isn’t what companies are selling virtualization solutions, but rather which ones aren’t.

Red Hat and Novell, the two top Linux sellers, for example, have begun building Xen paravirtualization software into their operating systems. Specifically, Xen is already in SLES 10, and it will also be in the forthcoming RHEL (Red Hat Enterprise Linux) 5, due in December 2006.

SWsoft, besides making Virtuozzo container technology for Linux, also has a Windows product. In addition, SWsoft recently released its core container code to the open-source OpenVZ project. OpenVZ, in turn, has been adopted by the Debian Linux community.

From microprocessor companies like AMD and Intel, to hardware companies like IBM and Dell, to operating system companies like Sun and Microsoft, everyone is deploying virtualization solutions as fast as possible.

Because of this, Gillen thinks that the real conflict will be “about the managing and provisioning and tracking of all this layered software through its full life cycle, and that’s where the biggest competitive and financial battle is likely to come from going forward.”

Gartner’s Bittman also sees another use of virtualization gaining in popularity: “Virtual relocation tools, which enable you to move a virtual machine clear off a physical server to another location, will be big. VMware’s VMotion is the first of these, but others are quickly following.”

Another new approach to paravirtualization, à la Xen, according to Jaffe, is the creation of NetWare as a virtualized operating system under SLES. Novell expects that this approach, which can keep older applications and operating systems running on modern systems, will find an audience.

Finally, Crosby pointed out, “Xen is in its earliest days of product shipment, and the standard model for getting software into production in the enterprise will apply. Users will kick the tires, there will be tests, more tests and more tests. When users are ready to trust Xen with mission critical workloads, it will go into the data center.”

So, is virtualization on its way to almost all servers and many desktops? The answer is clearly yes. But it’s not going to happen tomorrow. First, companies need to accept the new virtualization models, management tools must mature, and a commonly agreed-upon pricing system must come into effect. Then, and only then, will virtualization become as much a part of the IT infrastructure as operating systems are today.