Practical Technology

for practical people.

May 24, 2007
by sjvn01
0 comments

Ubuntu-powered Dell desktops and notebook arrive

On May 24, the rumors and speculation came to an end. Dell officially unveiled its three consumer systems — the XPS 410n and Dimension E520n desktops, and the Inspiron E1505n notebook — that come with the Ubuntu 7.04 Linux distribution factory installed.

I predicted that Dell would release Ubuntu-powered computers from these lines. What I did not foresee, however, was that rather than offering a variety of models from each line, Dell would offer just a single system from each.

These three systems will be made available in the U.S. after 4:00 PM CDT on May 24 at the Dell Linux site. These systems are meant to target Linux enthusiasts. The release of these Ubuntu-powered systems is the direct result of the outpouring of customer demand at Dell’s IdeaStorm site, the company’s Web site for fielding customers’ suggestions to improve products, services, and operations.


May 18, 2007
by sjvn01
1 Comment

My Great Linux System Repair Adventure

Thunderstorms in the Blue Ridge Mountains can come on fast. That’s why my main Linux desktop system was still up when one, two, three lightning bolts slammed down near my home. Thus began my Great Linux System Repair Adventure.

Despite no fewer than three power surge protectors, including a master protector for the entire house, just enough of a surge got through to damage my Insignia 300a, an older Best Buy house-brand desktop PC with a 2.8GHz Pentium IV, GB of RAM, and an Ultra ATA/100 60GB hard drive, running SLED 10 (SUSE Linux Enterprise Desktop).

At first, everything looked OK. Then I began getting odd disk errors and programs started misbehaving. So, I used that master tool of all Linux/Unix file repair, fsck, to see what was wrong with my drive.

It wasn’t pretty. There were file system errors here, errors there, errors just about everywhere. I use ReiserFS because I really like its speed and space efficiency. On the other hand, when things go badly wrong, getting ReiserFS’s fsck file-tree rebuild to work properly can be very tricky.

For simple file problems, ReiserFS restores the file system by replaying its transaction log journal. That’s as it should be. The whole point of a journaling file system is that you can replay disk writes when something goes wrong.

I was well beyond simple problems, though. It was time to unmount the file system — reiserfsck won’t repair mounted systems — and get serious. So, I ran the command:

    reiserfsck --rebuild-tree

This option forces reiserfsck to do just what it says: rebuild its b-tree map of the file system.

It was humming along, when bang, it stopped. OK, this can happen. Maybe there’s a bad block. The simple-minded way to find out, without reaching for another tool, is to simply try the command again. If it breaks at the same spot, I’ve got bad blocks, actual sectors of the hard drive that can’t hold data reliably.

If that’s the case, I’d run:

    /sbin/badblocks -b <reiserfs-block-size> <device>

to get a list of bad blocks in a form that reiserfsck can understand. After that, I’d run dd_rescue to create a backup of the file system without the bad blocks. Yes, you can try this with other tools — dd comes to mind immediately — but dd_rescue, unlike dd, doesn’t abort on errors. I could program around that, but dd_rescue does such a good job, so why bother?
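Pulled together, the bad-block workflow looks something like this. It’s a dry-run sketch, not my exact session: the device name, block size, and backup path are assumptions, and the leading “echo” keeps each command inert until you remove it.

```shell
# Dry-run sketch of the bad-block recovery workflow.
# DEV, BS, and the backup path are assumptions; substitute your own,
# and delete the leading "echo" only when you are sure of the target.
DEV=/dev/hda2    # the damaged ReiserFS partition (hypothetical name)
BS=4096          # ReiserFS's default block size, in bytes

# 1. Write the list of unreadable blocks to a file:
echo /sbin/badblocks -b "$BS" -o bad.blocks "$DEV"

# 2. Image the partition with dd_rescue, which, unlike dd,
#    keeps going past read errors instead of aborting:
echo dd_rescue "$DEV" /mnt/backup/hda2.img
```

Imaging first means that if a later repair attempt makes things worse, you can always start over from the copy.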

Unfortunately, reiserfsck blew up at a new location… farther along in the rebuilding. OK, so it wasn’t a bad block. This required some thought.

I decided to boot the system back up and see what it looked like from my KDE 3.5 interface. From there, I planned on backing up my system, which I hadn’t done in a week, to a DVD-R with KDar, the KDE disk archiver. Unfortunately, I didn’t make it that far.

The system got about halfway to the desktop when the boot process failed. OK, now I was getting ticked. It was time to return to single user mode and the command line.

It was also time to get hard-core serious about this misbehaving drive. This time I ran:

    reiserfsck --rebuild-tree -S

This forces the b-tree to be rebuilt from any parts of the directory and file structure, or b-tree leaves, that may be lying anywhere on the partition. Unless you really — I mean really — know how file systems work, don’t try this at home. Go to a friend’s house. It will be much safer there.

No, don’t. That was meant to be funny. Don’t try this anywhere, unless you really know what you’re doing.

Believe it or not, I do know file system internals, so I ran it… and I got most of the way through before the process stopped with the message:

    The problem has occurred looks like a hardware problem (perhaps memory).

Oh no. Could the memory also be sour? The hard drive was fouled up, no question about that, but remember, I’d also seen strange problems with applications. That, I now remembered, is often a sign of bad memory.

I ran the command again. Yes, there was the same error message, but at a different point in the repair process. This was looking more and more like I actually had two problems.

So, I got up, turned off the system, and went to have lunch. When I came back, I turned the PC back on… and it wouldn’t boot at all.

This was turning into a really bad day.

So, now I pulled out my freshly burned copy of SystemRescueCd 0.35. SystemRescueCd, if you’ve never met it, is the best single-CD bootable system repair disc I know.

This special purpose Linux distribution is based on the 2.6.20.7 Linux kernel. It includes:

  • GParted, a top-notch partition manager
  • PartImage, a great drive/partition imager tool
  • NTFS-3G, an open-source driver that enables you to mount, read, and write Windows NTFS partitions

…and a host of file system repair tools and drivers. It also includes — and for me this is the cherry on top of the sundae — network file tools like Samba and NFS. With those, you can send files from a near-dead machine to a network server for safekeeping.

So, I popped in SystemRescueCd, which, thanks to its small footprint, needs only 128MB of RAM, and it appeared to load fine. This time I ran reiserfsck from SystemRescueCd and… it failed with a memory error, again. This time, at least, it almost completed the run.

OK, it was time to play with the hardware. When memory is going bad, you can sometimes keep it going for a while longer by slowing it down.

Normally, people only play with memory settings when they’re trying to turbo-charge a gaming system or the like. The same techniques, applied in reverse, can sometimes get some useful life from sick systems like mine.

Now, playing tricks with RAM is a subject unto itself. For more on that subject, visit sites like Extreme Tech and Tom’s Hardware and look for stories on overclocking.

I was going the other way; I was going to “underclock” my system’s memory. To do this, I went to my PC’s advanced BIOS section. For my purposes, I started by raising the CAS (column address strobe) latency. This setting determines how many clock cycles the system waits before issuing a CAS signal and reading data out of the memory chip. A higher value means more waiting, and therefore a slower computer, but also a bit more memory reliability.

After setting this up, I rebooted again with SystemRescueCD, ran reiserfsck with all the trimmings, and this time it worked. I once more had a viable file system.

Now, my problem was how to get the important files out of there before something else went wrong. Trying to repair the system was a task for another day. Today, I just wanted my files safe, snug and well away from that machine.

My new problem, though, was that my important files, in /home/sjvn, came to a whopping 22 gigabytes. Yes, I’m a file and email packrat.

22 gigabytes is way too much to burn to a DVD or copy to a USB stick. For the first time, I found myself wishing for a Blu-ray disc burner on a PC. And even over my 100Mbps Fast Ethernet connection, I really didn’t want to waste time sending all that data as-is.

The solution was clearly to compress my files down and put them into a more conveniently sized archive for shipping across the network. Linux is full of tools to do that, but tar, that old faithful, was the first program that came to mind.

So, I mounted the repaired partition, headed over to /home/sjvn, and zapped a lot of junk files with “rm.” Then I hopped back up to the /home directory and ran:

    tar cvzf sjvn/sjvnhomedir.tar.gz sjvn

This created the compressed archive “sjvnhomedir.tar.gz” in /home/sjvn. The tar options were the basics: “c” for create; “v” for verbose (I wanted to know what was going on); “z” for compress files with gzip; and “f” to give the archive its name.
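Before trusting an archive as your only copy of anything, it’s worth listing its contents back with tar’s “t” option. Here is the same create-then-verify pattern rehearsed on a scratch directory; the paths and file are made up for illustration:

```shell
# Rehearse the archive step on a throwaway directory (hypothetical paths).
mkdir -p /tmp/tardemo/home/sjvn
echo "important stuff" > /tmp/tardemo/home/sjvn/notes.txt
cd /tmp/tardemo/home

# "c" create, "z" gzip-compress, "f" name the archive
# (dropping "v" to keep the rehearsal quiet):
tar czf sjvnhomedir.tar.gz sjvn

# "t" lists the archive's contents so you can confirm the files made it in:
tar tzf sjvnhomedir.tar.gz
```

Note that, unlike my real run, the archive here lands in /home rather than inside the directory being archived, which sidesteps tar having to skip over its own output file.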

Now, I was left with only one final step: getting my important files, now zipped up in “sjvnhomedir.tar.gz,” to a healthy computer. I decided once more to go with easy, over other alternatives.

This time, that meant setting up an SSH (secure shell) server on the sick machine. To do this, I had to give the machine a root password; anything will do. Then, log in with it and run:

    /etc/init.d/sshd start

That starts up the SSH server. And that was the last thing I had to do on that system.
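For reference, the whole hand-off on the sick machine is just three commands. This is a dry-run sketch (the “echo” prefixes keep it inert): the init-script path is SLED’s and varies by distribution, and “MEPIS” is the healthy box the archive is headed for.

```shell
# Dry-run of the rescue hand-off (remove the "echo" prefixes to run for real).
echo passwd root                          # give the rescue system a root password first
echo /etc/init.d/sshd start               # start the SSH server (init-script path varies by distro)
echo scp sjvnhomedir.tar.gz sjvn@MEPIS:   # then push the archive to the healthy box
```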

I then moved to another Linux system. In my case, that just meant using my IOGEAR KVM (keyboard, video, and mouse) switch to click over to the MEPIS 6.5 system sitting right next to the sick SLED 10 box.

Once logged in on the MEPIS PC, I logged into the SLED system’s SSH server as root, and moved to the /home/sjvn directory. Once there, I used scp (secure copy) to copy sjvnhomedir.tar.gz to my MEPIS system, like so:

    scp sjvnhomedir.tar.gz sjvn@MEPIS:

At long last, I was done. I had my files safely stored away.

Today, the sick PC is back to working, albeit at a slower speed. I don’t trust it as a front-line system, so I replaced it with an HP Pavilion a6040n. That PC is now my main SLED system. On it, safe and sound, is every file I rescued from the sick computer.

My point in telling you of my misadventure is that, with a little knowledge and the Linux tools that SystemRescueCd brings together for you, you can save your files even from apparently hopeless situations.

Oh, and a final note: SystemRescueCD can also work the same magic on your Windows systems. I can’t recommend this mini-distribution enough for anyone who might face repairing any Unix, Linux, or Windows-based computer.

May 14, 2007
by sjvn01
0 comments

Microsoft reignites its war on Linux

Microsoft CEO Steve Ballmer on May 14 claimed that “Linux violates over 228 patents. Someday, for all countries that are entering WTO [the World Trade Organization], somebody will come and look for money to pay for the patent rights for that intellectual property.”

With that comment, Microsoft declared war against Linux and open source yesterday… Oh wait. My mistake, Ballmer made that attack in November 2004.

What Microsoft did yesterday, in an interview with Fortune, was to have Brad Smith, Microsoft’s general counsel, reiterate and elaborate on those tired old claims. This time around, Microsoft claims that the Linux kernel violates 42 of its patents, while the Linux graphical user interfaces break another 65. In addition, the OpenOffice.org suite of programs infringes 45 more, various email programs violate 15 others, and an assortment of other free and open-source programs allegedly transgress 68 more patents.

In a statement obtained by eWEEK, Microsoft’s vice president of intellectual property and licensing, Horacio Gutierrez, claims that “Even the founder of the Free Software Foundation, Richard Stallman, noted last year that Linux infringes well over 200 patents from multiple companies. The real question is not whether there exist substantial patent infringement issues, but what to do about them.”

How are these programs violating the patents? Heck, which patents are being violated? We don’t know. Microsoft isn’t saying.

Gosh, vague threatening IP (intellectual property) claims without any facts.

Where have we heard that before? Could it be from the early days of SCO’s long-discredited claims against Linux? Claims that have fallen from grandiose heights to 326 unimportant lines of code?

If we look closer at Microsoft’s claims, we see that we don’t need a court to dismiss them. Take a hard look, for example, at Stallman’s comment. He was referring to that same 2004 study I talked about at the start. The same study, where the author, Dan Ravicher, an attorney and executive director of PUBPAT (the Public Patent Foundation), told me at the time, “Microsoft is up to its usual FUD [fear, uncertainty and doubt].”

Ravicher’s point: Under today’s cockamamie patent law, “the number we found, to anyone familiar with this issue, is so average as to be boring; almost any piece of software potentially infringes at least that many patents.” He continued, “The point of the study was actually to eliminate the FUD about Linux’s alleged legal problems by attaching a quantifiable measure versus the speculation.”

So, while Microsoft’s latest claims may sound terrible to the layman, any attorney worth his or her salt will know that these are old and basically bogus statements. So, why is Microsoft trotting them back out again?

I believe it serves two purposes. One is to spread more FUD about Linux and open source. For the first time, a major computer vendor, Dell, has committed to a consumer desktop Linux. More states, like California, are considering laws that would require the use of ODF (the Open Document Format). From Microsoft’s point of view, it was time to get people worried about open source again.

The other purpose is to try to get leverage against the upcoming GNU GPLv3 (General Public License, version 3). The latest draft includes patent language that will make it much harder to make patent deals, such as the November 2006 Microsoft and Novell partnership.

Why should Microsoft care? Because Microsoft, by distributing SLES (SUSE Linux Enterprise Server) certificates to customers such as Dell as part of the Novell/Microsoft partnership, may have just placed any IP it might or might not have in Linux under the GPL.

No, that’s not just open-source fanboy talk. Prominent open-source lawyers, like Eben Moglen, the executive director of the Software Freedom Law Center, believe that by distributing the SLES certificates, Microsoft has become a Linux distributor, and therefore subject to the GPL.

For Microsoft, being subject to the GPL in any way, shape, or form would be a nightmare scenario. If they can get some leverage in their fight to get away from the GPL by getting people frightened of open source, they will.

A version of this story first appeared in Linux-Watch.

May 9, 2007
by sjvn01
0 comments

Red Hat shows its Global Desktop Cards

Red Hat Inc. on May 9 announced the availability of a new client product, Red Hat Global Desktop, at its annual Red Hat Summit tradeshow in San Diego. This desktop aims to deliver a modern user experience with an enterprise-class suite of productivity applications.

Red Hat CTO Brian Stevens stated, “Users, requirements and technologies have changed so dramatically over the past few years that the traditional one-size-fits-all desktop paradigm is simply exhausted.”

“Commercial customers are still begging for desktop security and manageability for their knowledge workers; consumers are rapidly adopting new online services and applications; and developing nations are looking for affordable information technologies that bypass traditional desktops entirely. Our strategy is to deliver technologies that are specifically appropriate to these varied constituents,” Stevens continued.


May 2, 2007
by sjvn01
0 comments

What’s what with openSUSE, ZENworks, YaST

In late April, SUSE Project Manager Andreas Jaeger announced on the openSUSE list that “Beginning with the next alpha release of openSUSE 10.3, alpha 4, ZENworks will be gone.” Instead, openSUSE “will use the native tools only — Zypper, openSUSE-updater, and YaST.”

Now, Bruce Lowry, Novell Inc.’s director of PR, explains in his blog what this means for Novell’s enterprise SUSE Linux operating systems.

Lowry opened by writing that, “openSUSE will now be focusing on native software management using YaST and ‘zypp,’ the package-management library. As a result, openSUSE 10.3 will not include the ZENworks Management Daemon.”

Starting with openSUSE 10.1, the openSUSE group, with Novell’s support, changed the default package management software from SUSE 9 and 10’s YOU (YaST Online Update) to zypp. This is a backend library that uses RPM (RPM Package Manager) packages for installing, removing, and querying program packages. The library is an attempt by Novell to marry the best features of SUSE’s yast2 package manager and Ximian’s libredcarpet. It’s also used by ZMD’s system-tray notification applet, zen-updater.

This change did not work well. There have since been several updates to the new package management programs, but despite all these improvements, the combined package system remained troublesome.

Since openSUSE is the community Linux that becomes, after it’s tested out and matured, the basis for SLED and SLES (SUSE Linux Enterprise Desktop/Server), the question arises: “What does this mean for Novell’s enterprise Linuxes?”

Lowry explained that, “First, and most important, patch, update and software deployment will remain compatible between SUSE Linux Enterprise 10 and future Novell solutions, so that customers can rest easy that their existing update systems will work for the entire supported life-cycle of their SUSE Linux Enterprise investment.”

He went on, “The openSUSE team decided to focus their efforts on YaST and ‘zypp.’ Why? The short answer is that ZENworks is not necessary for openSUSE. OpenSUSE is targeted at the technical enthusiasts who want a cutting-edge distribution to sample the latest and greatest Linux technology. Most openSUSE users deploy one or two servers in their environment. They don’t need the capabilities inside ZENworks to manage those one or two servers. In order to patch one or two servers in a non-mission-critical environment, YaST and the ‘zypp’ tools are sufficient.”

Thus, the work for integrating the ZENworks client — zmd — and SUSE Linux is being given to Novell’s engineers. They will “continue to work on automatic detection and integration of SUSE Linux Enterprise systems into a ZENworks infrastructure, while maintaining the high standards of interoperability, scalability, security and performance customers expect from Novell technologies,” wrote Lowry.

“The ZENworks component ‘zmd,’ as well as its associated command line and graphic interface tools, remain available and supported for SUSE Linux Enterprise 10. Going forward, ZENworks Linux Management will remain Novell’s solution for enterprise-class resource management for desktops and servers,” Lowry continued.

“We are currently designing SUSE Linux Enterprise 11, which is targeted to provide ‘interface-compatible’ utilities to rug, the command-line interface that complements the ZENworks software management environment. OpenSUSE delivers most of this interface compatibility in its ‘zypp’ environment.

SUSE Linux Enterprise 11 will also include the well-known graphical interfaces [YaST] for software management,” concluded Lowry.

The result will be that Novell’s engineers adapt the current ZENworks components to work better in the commercial versions of Linux. In the meantime, openSUSE will continue its work with Zypper, openSUSE-updater, and YaST. Zypp will remain the core package-update component of both.
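For readers who haven’t used the native tools Lowry mentions, here is roughly what day-to-day package management with Zypper looks like. This is a sketch using zypper’s standard subcommands; the package name is hypothetical, and the “echo” prefixes keep it a dry run outside an openSUSE system.

```shell
# Everyday Zypper operations on openSUSE (dry run; remove "echo" on a real system).
echo zypper refresh                # re-read repository metadata
echo zypper update                 # apply available package updates
echo zypper install some-package   # install a package by name (hypothetical package)
```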

A version of this story first appeared in Linux-Watch.