Practical Technology

for practical people.

October 16, 2005
by sjvn01
0 comments

Novell Investor Wants Company to Fire Employees, Sell Divisions

If Blum Partners, a minority Novell stockholder, has its way, Novell will cut $225 million in expenses and sell off Celerant, ZENWorks, GroupWise and Cambridge Technology Partners.

Blum Partners revealed this week that, disappointed by recent Novell results, it wants big changes at the NetWare and Linux vendor.

In an unusual move, the Blum Capital Partners LP investment firm publicly revealed it was very unhappy with Novell’s current direction.

The San Francisco-based company published, in its SEC (Securities and Exchange Commission) Schedule 13-D, several letters to Novell CEO Jack Messman detailing its complaints and suggesting changes.

The 13-D is a form that a company or individual must file within 10 days of acquiring more than 5 percent of a company’s stock.

Earlier this year, Blum owned only 1.33 percent of Novell stock.

Then, in late August, Blum acquired enough shares to put it over the 5 percent mark.

Blum had initially outlined a new path for Novell in May and June, but Messman disagreed with Blum’s prescriptions for Novell and did not implement them.

After Novell’s disappointing financial results for its third fiscal quarter, which ended on July 31, 2005, and were reported on Aug. 25, Blum apparently decided to take a larger position in Novell and go public with its plans.

Blum wants Novell to make changes in four broad areas.

In its Sept. 6 letter, Colin Lind, managing partner, and Greg Jackson, partner, wrote, “It is imperative that Novell must: (1) reduce costs to an appropriate level necessary to operate all of its businesses profitably; (2) divest non-core businesses; (3) become a leader in Linux and identity management through joint ventures and selective acquisitions; and (4) optimize its capital structure to maximize shareholder value.”

Specifically, Blum wants over $225 million cut from “funds for R&D, sales and marketing, and general and administrative functions” in 2006.

“In addition, our $225 million estimate may prove conservative as we have identified only the obvious targets for cost savings, including the company’s two corporate jets, an overstaffed R&D department, the redevelopment of legacy products such as ZENWorks and GroupWise, and the maintenance of over 400 NetWare engineers.”

Next, Blum wants Novell to sell non-core businesses to “reduce unnecessary costs, monetize value, and redeploy funds more productively. We estimate that Novell could generate approximately $500 million of pretax cash (or equity value in spin offs) as follows: $175 million for Celerant, $150 million for ZENWorks/Tally Systems, $100 million for GroupWise, and $75 million for Cambridge Technology Partners.”

As for Linux, Blum wants to acquire or partner with other Linux-related software firms as soon as possible to help create “The Open Stack.”

This last is in line with Novell’s existing plans. Blum, however, wants Novell to move more quickly in this area.

Finally, Blum finds itself confounded as to why “Novell has yet to implement a major share repurchase program of $500 million.”

According to sources, Messman has little or no interest in going along with Blum’s suggestions.

Novell employees have told eWEEK.com that in an internal memo sent out late last week, Messman said that while Novell’s financial performance must be improved, current cost-cutting efforts and positive news about Open Enterprise Server, ZENWorks, identity management programs and Novell’s improved traction in India and China point to a brighter future.

Analysts are mixed about how well Blum’s suggestions would work.

“Novell is making progress in its Linux and identity management software initiatives,” said Stacey Quandt, research director for the Aberdeen Group.

“However there is an imbalance between these new initiatives and the legacy NetWare business and associated projects in research and development that may not be as relevant today.”

In addition, “Novell should consider spinning off business units that do not contribute to the core business and it should make further acquisitions in the application software market to enhance its security portfolio,” said Quandt.

Dan Kusnetzky, IDC VP of system software, is puzzled by some of Blum’s suggestions about which divisions should be sold off.

“Since Novell’s focus is managing, integrating and securing IT environments, I was rather puzzled why this analyst would suggest that Novell stop developing this technology and sell off what they have.”

And, as for cutting back on research, Kusnetzky had this to say: “Novell has been working on a hypervisor project for at least five years. Although the base technology has changed a number of times during that period, it’s pretty clear that Novell now plans to host both NetWare and Linux personalities on a Linux kernel using Xen.”

This makes Kusnetzky “question why someone would suggest that the R&D that produced this type of capabilities would be expendable.”

A version of this story first appeared in eWEEK.

September 29, 2005
by sjvn01
0 comments

There are too darned many Linuxes

By my count, there are one million, two hundred and seventy thousand, and four hundred and seventeen Linux distributions.

Nah, I’m kidding. There are only, by my quick count, one hundred and forty-one Linux distributions. Currently shipping. For the Intel platform. In English.

Is it just me, or is something wrong here?

Now, I love operating systems. You name it — CP/M, TOPS-20, VAX/VMS, AmigaOS, VM/MVS, Windows from 1.0 up to Windows Embedded for Point of Service, and more shades of Unix and Linux than you can shake a stick at — chances are, I’ve run it.

For me, besides being an odd hobby, the theory and practice of operating systems is part of my stock in trade as a technology journalist.

What’s your excuse?

Seriously. Why are there so many people wasting their time inventing and re-inventing Linux?

Yes, I said “wasting” and I meant it.

I mean, come on guys! Over one hundred Linux distributions?!

Don’t you think your time could be better spent on making the Linux mainstream better? Linux is an equal-opportunity operating system. If you can write the code, Linus can use it.

Or, if that’s too grand for you, why not help with OpenSUSE? It’s a new project and it’s a nifty Linux distribution. What’s not to like?

Not your speed? OK, well, Fedora, Red Hat’s community distribution, can also stand some help. Or, instead of working on lots of little Debian distributions, you could work on the main Debian tree.

If you’re building a distribution to learn the ins-and-outs of Linux, I think you’ll learn more by working on one of the established projects.

By reading over the various distribution development mailing lists — a must for any would-be Linux developer — you’ll quickly learn to avoid the potholes that have swallowed up countless programmers before you.

More to the point, how much are you really contributing to Linux and open-source by spending hours on rebuilding Linux?

There are a ton of open-source projects out there that could use your help. For example, Linux still needs drivers. Until half or more of PCs are running Linux, it probably always will.

Want to build some specific functionality? Maybe, there’s already a project out there working towards that goal that doesn’t involve rebuilding Linux at the same time. Check out SourceForge. You might just be surprised at what is already being worked on.

Absolutely sure that you have a new, wonderful idea for a distribution that no one else has ever had?

Well. I doubt it.

At the very least, wander through some of the distributions already out there on LinuxISO and DistroWatch. Between the two of them, you’ll find most of the current English-language Linux distributions.

If you still can’t find the distribution of your dreams, maybe you should think about whether it’s really that good an idea.

Don’t get me wrong. I’m not saying there isn’t a place for new or small Linux distributions. I’m extremely fond of MEPIS and Xandros.

I also always carry a couple of live rescue Linux CDs with me, like INSERT (Inside Security Rescue Toolkit) and SystemRescueCd. They’ve saved my friends’ Windows and Linux machines more times than I can count.

On the other hand, you may not know it, but while Linux can go on forever, Linux distributions can, and do, die.

Immunix? Stampede Linux? Storm Linux? Those were noteworthy Linuxes in their day, and they’re all history now. Even the support of a major company, as was the case with Hewlett-Packard and HP Secure Linux, is no guarantee of success.

No, when you get right down to it there’s really very, very little need for yet another Linux distribution.

So, if you’re tempted, do the open-source world a favor — work on an existing project. In the end, you’ll be glad you did.

September 12, 2005
by sjvn01
0 comments

eBay Bids on Becoming VOIP Power

In a real shift, eBay, as evidenced by its purchase of Skype, doesn’t just want to help you auction off your old stuff; it wants to become your Internet phone company as well.

Good luck, guys.

When eBay CEO Meg Whitman said, “Communications is at the heart of e-commerce and community,” who can disagree with her? What I have trouble with is “combining the two leading e-commerce franchises, eBay and PayPal, with the leader in Internet voice communications … will create an extraordinarily powerful environment for business on the Net.”

I see no fewer than three problems in that last statement.

The first is that I don’t see any natural synergy between Skype’s VOIP (voice over IP) and eBay’s auctioning and online purchasing systems. Do you?

Yes, you could have voice bids for online auctions, but why bother? eBay has honed its online auction system to a fine edge. For eBay’s core businesses, Skype doesn’t really add anything.

Next, I thought eBay and PayPal made money from consumer-to-consumer or business-to-consumer plays, not business-to-business. When I think about business wheeling and dealing, I don’t think of eBay, PayPal or, for that matter, Skype.

Finally, while Skype’s 40 million customer base is bigger than that of Packet 8, VoicePulse and Vonage combined, there’s serious—much more serious—competition coming into the field: Google, Microsoft and Yahoo.

Google has already dipped its toe into VOIP by introducing it into its new IM client, Google Talk Beta. Unlike Skype, which uses a proprietary protocol, Google Talk uses the open SIP (Session Initiation Protocol). Google is already working on interoperability with other SIP-compliant VOIP services such as EarthLink’s Vling and the SIPphone team’s Gizmo Project.

Microsoft is already deploying VOIP in businesses with LCS (Live Communications Server) 2005 and MSN Messenger for the hoi polloi. The company also recently purchased Teleo, a VOIP provider.

Yahoo isn’t being left out of this charge either. Earlier this summer, Yahoo bought Dialpad Communications to add VOIP calls to the PSTN (Public Switched Telephone Network) to its existing Yahoo IM client’s PC-to-PC call capabilities.

So, why is eBay doing this, instead of sticking to its strong points by focusing on its recent push to make Internet micropayments viable?

Part of it may be that the other companies are moving in on eBay’s home turf. While no one has mounted a credible threat against eBay in online auctions in years, Google is getting ready to move against PayPal with its “Google Wallet” project.

And, as trite as it sounds, I don’t think you can underestimate the “everyone’s doing it” factor. In turn, what’s driving that is that broadband, necessary for practical VOIP use, is becoming increasingly common. Informa Telecoms & Media predicts that the global broadband Internet market will top 190 million users by the end of 2006.

That’s a lot of eyes looking at ads on a PC during a phone call.

eBay, even more so than Google and Microsoft, knows its users. The auction company doesn’t just know what sites its users look at; it knows what they actually buy. That, in turn, means that eBay can put ads in front of its users that it knows they will look at.

That’s not a small thing.

As Cisco CTO Charlie Giancarlo said recently, “The world in the future will be made up of the quick and the dead. In business terms, the quick are those who capture information about their market and suppliers and act on that information rapidly. The dead are people who don’t know and can’t act on information.”

eBay clearly wants to be quick. My questions boil down to: All things considered, will eBay be quick enough?

A version of this story first appeared in eWEEK.

September 9, 2005
by sjvn01
2 Comments

NUMA: Theory and Practice

Non-Uniform Memory Access (NUMA) used to be one of those concepts that only people who built massive multiprocessing machines cared about. Things have changed.

With today’s hottest processors easily passing the 3GHz mark and 64-bit chips like Intel’s Itanium 2 and AMD’s Opteron becoming readily available, the need for faster, more powerful memory management has grown. Indeed, as Symmetrical MultiProcessing (SMP) computing, clustering, and distributed computing become commonplace, the need for better memory management has become critical.

Why? The answer lies in an almost 40-year-old computer engineering rule of thumb called Amdahl’s balanced system law: “A system needs a bit of I/O per second per instruction per second.” That works out to 8 million instructions per second (MIPS) for every megabyte per second (MBps) of data throughput.

Now, MIPS haven’t been regarded as the be-all and end-all of CPU benchmarking for years, but they’re still a useful approximation of overall performance. And what’s important about using MIPS to justify NUMA is that they give interesting insights into the overall performance of memory and CPU together. So, for example, BiT-Technologies found that a 2.4GHz Pentium 4 (Northwood) runs marginally faster (168.73 MIPS) with Double Data Rate-Synchronous DRAM (DDR-SDRAM) memory than the same processor with Rambus memory (166.83 MIPS).
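
The arithmetic is simple enough to check in code. Here is a minimal sketch (in C, purely for illustration) that converts a MIPS rating into the I/O bandwidth Amdahl’s rule calls for; since one MBps is 8 million bits per second, a balanced system needs 1 MBps of I/O for every 8 MIPS:

    #include <stdio.h>

    int main(void) {
        /* Amdahl's balanced system law: one bit of I/O per instruction.
         * 1 MBps = 8 million bits/sec, so every 8 MIPS of CPU throughput
         * needs roughly 1 MBps of I/O to stay balanced. */
        double mips = 168.73;           /* the Pentium 4 figure above */
        double io_mbps = mips / 8.0;    /* required I/O, in MBps */
        printf("%.2f MIPS needs %.2f MBps of I/O\n", mips, io_mbps);
        return 0;
    }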

There are two morals to this performance story. The first is that even a single hot, but already commonplace, 32-bit processor is starting to push the limits of standard memory performance. The second is that even the differences among conventional memory types play a role in overall system performance. So it should come as no surprise that NUMA support is now in server operating systems like Microsoft’s Windows Server 2003 and in the Linux 2.6 kernel.

How NUMA Works

Not all memory is created equal. Generally speaking, the closer memory is to a processor in terms of access time, the faster the system’s overall I/O will go, thus improving all aspects of system performance.

The example we all probably know best is the use of cache to store frequently accessed data from a hard drive. Since even slow memory’s access speed is much faster than even the speediest hard drive’s, the end result is a much faster system.

The same trick works for processors and main memory, using small amounts of fast RAM either on the chip itself (L1, or primary, cache) or immediately next to the CPU (L2, or secondary, cache) to speed up main memory performance in the same way that hard drive cache speeds up disk performance.

NUMA takes cache’s basic concepts of memory locality and expands on them so that multiprocessor systems can make effective use not just of their local memory, but also of memory that sits on different buses or, for that matter, is connected to the processor only over a fast network.

Specifically, NUMA has historically been a hardware memory architecture in which every processor, or group of processors, can access other CPUs’ memory. That does not mean they don’t have access to their own local memory (they do), but by sharing memory they become much more capable of performing efficient parallel processing and of dividing massively data-intensive tasks into manageable sizes.

Does this sound familiar? It should: SMP and clustering both try to do the same thing for processing, whether over a system bus, a backplane or a fast network connection. What NUMA adds to both of those technologies is memory management, so they can gain better memory access for overall faster performance.

For instance, in an SMP system, memory is usually shared over an interconnect bus. As the number of processors increases, so does the bus traffic, and eventually throughput starts to decline.

NUMA machines, however, use multiple buses to handle memory, thus making it much harder to slow a system down by throwing too much data at it. At the same time, though, NUMA provides a linear memory address space that enables processors to directly address all memory. Distributed memory, a technique with similar aims, has to contend more with data-replication overhead.

Within most NUMA systems, as in an IBM NUMA-Q box, there is an additional area of fast memory called an L3 cache. All of the processors on a given bus first access memory from this cache and then look to the bus’s main memory resources for data.

A NUMA machine’s main memory is its Uniform Memory Access (UMA) region. Together, a set of local processors and its UMA region are called a node. In a node, the processors share a common memory space or “local” memory. For example, an SMP system’s shared memory would make up a node. This memory, of course, provides the fastest non-cache memory access for the node’s CPUs. Multiple nodes are then combined to form a NUMA machine. Memory transfers between nodes are handled by routers, and memory that can only be reached via a router is called ‘remote’ memory.
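
On Linux, this node layout is visible to applications through libnuma, the library behind the numactl tool. The following minimal sketch (an illustration, assuming libnuma is installed; link with -lnuma) queries the node count and allocates from both local and node-specific memory:

    #include <numa.h>
    #include <stdio.h>

    int main(void) {
        if (numa_available() < 0) {
            fprintf(stderr, "this system has no NUMA support\n");
            return 1;
        }
        printf("highest node number: %d\n", numa_max_node());

        size_t len = 1024 * 1024;
        void *local  = numa_alloc_local(len);      /* memory on this node's UMA region */
        void *onnode = numa_alloc_onnode(len, 0);  /* memory placed on node 0 */
        if (local == NULL || onnode == NULL)
            return 1;

        /* ... use the buffers ... */

        numa_free(local, len);
        numa_free(onnode, len);
        return 0;
    }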

Needless to say, the name of the game in NUMA is to make those memory transfers as fast as possible, to avoid a node’s processors working with out-of-date data. A whole set of problems is associated with this, and they’re addressed by a variety of techniques derived from cache coherency theory.

Programming for NUMA

The key to programming effectively on NUMA machines, and one of the simplest ways to avoid coherency problems, is to maximize references to local memory on the node while minimizing references to remote memory. After all, access to remote memory may take three to five times as long as access to local memory, even in a well-designed NUMA system. By programming this way, and letting the underlying hardware architecture and operating system deal with NUMA’s memory coherence issues, developers can create effective programs.

That’s easier said than done. To do it today, most developers use tools like Multipath’s Fast Matrix Solver. These typically work by binding I/O threads to specific processors and nodes and by enabling you to explicitly place elements into local node memory, as sketched below.
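
Whatever the tool, the underlying pattern is the same: bind a thread’s execution to a node, then allocate its working set from that node’s local memory. Here is a minimal sketch of that pattern using libnuma (an illustration of the idea, not the tool named above):

    #include <numa.h>
    #include <stdio.h>

    int main(void) {
        if (numa_available() < 0)
            return 1;

        int node = 0;             /* the node this worker should live on */
        numa_run_on_node(node);   /* bind this thread's execution to it */

        size_t len = 64 * 1024 * 1024;
        /* allocate from the node we now run on, so every reference this
         * thread makes to buf is a fast local access, not a remote one */
        double *buf = numa_alloc_local(len);
        if (buf == NULL)
            return 1;

        /* ... do the data-intensive work on buf here ... */

        numa_free(buf, len);
        return 0;
    }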

A NUMA architecture or operating system tries to allocate memory efficiently for you, but its memory management works best for general cases, not for your specific application. And as a NUMA system’s overall load increases, its memory management overhead takes up more time, resulting in slower overall performance.

Of course, NUMA doesn’t work magic. Just because your application ‘sees’ what appears to be an extraordinarily large memory space doesn’t mean that it knows how to use it properly. Your application must be written to be NUMA-aware. For example, if you’re old enough to recall writing applications using overlays to deal with the high likelihood of having to dip into virtual memory, those techniques might come in handy for dealing with remote memory.

To help you deal with remote memory, NUMA uses the concept of ‘distance’ between components such as CPUs and local and remote memory. In Linux circles, the metric usually used to describe ‘distance’ is hops; less accurately, the terms bandwidth and latency are also bandied about. Usage varies, but generally speaking, the lower the number of hops, the closer a component is to your local processor.
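
The Linux kernel reports these distances as numbers rather than raw hop counts (by convention, a node’s distance to itself is 10, with larger values meaning farther away), and libnuma can read them for you. A short sketch, again assuming libnuma:

    #include <numa.h>
    #include <stdio.h>

    int main(void) {
        if (numa_available() < 0)
            return 1;

        int nodes = numa_max_node() + 1;
        /* print the node-to-node distance matrix; a node's distance
         * to itself is 10, and remote nodes report larger values */
        for (int from = 0; from < nodes; from++)
            for (int to = 0; to < nodes; to++)
                printf("node %d -> node %d: distance %d\n",
                       from, to, numa_distance(from, to));
        return 0;
    }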

Sounds like a lot of trouble, doesn’t it? So why use NUMA? The first reason, for many of us, is that NUMA can make scaling SMP and tightly coupled cluster applications much easier. The second is that NUMA is ideal for the highly parallel computations that supercomputers thrive on, such as weather modeling.

It’s the first reason, combined with the enormous data hunger of today’s SMP boxes, that has brought NUMA out of a few specialized niches and into Windows Server 2003 and Linux 2.6. While widespread deployment of NUMA-aware applications is probably still a few years away, as development tools come out that make NUMA memory management easier, NUMA should emerge as the memory management technique of choice for high-end server application developers.

August 29, 2005
by sjvn01
0 comments

Five reasons NOT to use Linux

I love Linux. I use it on my servers, I use it on my desktops, and I use it on my entertainment center, where it powers my HDTV TiVo and my D-Link DSM-320 media player, which turns my network into a media library with terabytes of storage. Heck, I even run Linux on my Linksys WRT54G Wi-Fi access points, which hook the whole shebang together.

But, Linux isn’t for everyone. Seriously. Here are my top five reasons why you shouldn’t move to Linux . . .

Reason number one: Linux is too complicated

Even with the KDE and GNOME graphical windowing interfaces, it’s possible — not likely, but possible — that you’ll need to use a command line now and again, or edit a configuration file.

Compare that with Windows, where it’s possible — not likely, but possible — that you’ll need to use a command line now and again, or edit the Windows registry, where, as they like to tell you, one wrong move could destroy your system forever.

August 19, 2005
by sjvn01
0 comments

Mambo Executives, Developers Fight for Project Control

The executive leadership of Mambo, a popular open-source content management system, and the system’s developers find themselves at odds over the organizational future of the project.

Earlier in August, Miro International, an Australian firm that owns some of the copyrights and trademarks to the open-source Mambo CMS (content management system), announced the establishment of the Mambo Foundation.

This organizations express purpose is to “to manage the development of the Mambo project, to promote Mambo worldwide and to co-ordinate the efforts of the community.”

So far, so good. But where the Mambo Foundation differs from other recently formed open-source organizations, like the Debian Common Core Alliance, is that some of its developers are publicly objecting to the foundation’s right to govern Mambo and to the way in which the organization was set up.

These developers have set up a Web site, OpenSourceMatters, where they spell out their objections.

“We, the development team, have serious concerns about the Mambo Foundation and its relationship to the community. We believe the future of Mambo should be controlled by the demands of its users and the abilities of its developers.

“The Mambo Foundation is designed to grant that control to Miro, a design that makes cooperation between the Foundation and the community impossible,” the developers wrote.

Specifically, “The Mambo Foundation was formed without regard to the concerns of the core development teams. We, the community, have no voice in its government or the future direction of Mambo.”

“The Mambo Steering Committee, made up of development team and Miro representatives, authorized incorporation of the Foundation and should form the first Board.

“Miro CEO Peter Lamont has taken it upon himself to incorporate the Foundation and appoint the Board without consulting the two development team representatives, Andrew Eddie and Brian Teeman.”

In addition, the developers claim that “Mr. Lamont, through the MSC (Mambo Steering Committee), promised to transfer the Mambo copyright to the Foundation; Miro now refuses to do so.”

“We don’t see this issue as black and white, Mambo v. Miro, [but] as we’ve said on the site, we don’t believe the establishment of the Foundation was in the spirit of open-source software and its development,” said Andrew Eddie, Mambo’s project director, a member of the Mambo Board of Regents and one of OpenSourceMatters’ founders.

The group’s protest, according to Eddie, has been well received by the community.

“The forum on the site has already had 500 members join up in less than 24 hours. The people who have so vocally supported us since we’ve taken the step are invariably asking how they can help us,” said Eddie.

Lamont, who in addition to being Miro’s CEO leads the new Foundation, disagreed with the group’s description of the Foundation.

While Lamont, Mambo’s founder, is on record as wanting Miro to regain greater control over Mambo, he said, “The notion that the Foundation would do anything to hurt Mambo is just ludicrous. Unfortunately, the latter part of the OpenSourceMatters statement starts to slide into inaccuracies and innuendos.”

“The Mambo Foundation was established to create community involvement through organization, not control. If you refer to our Rules of Association, you can see that our position is all about achieving a vibrant, empowered community,” said Lamont.

“The idea for Miro to form the foundation actually came from the Dev team in April as a way to develop a better management structure for the project and an entity for accepting donations,” said Lamont.

“I’m sad to see some of them go, but the beauty of open-source software and the GPL [General Public License] is that everyone can share, and I wish the guys well in their new business. The only loser in this sort of disagreement is the community and new users of Mambo, so I hope that we can all just get back to working on a great piece of software very soon.”

For their part, the developers have said that they will continue to work on the project.

They do not see it as forking Mambo into two different versions.

Instead, they plan on conducting a “rebranding effort that will continue to run largely on the existing codebase. Work is continuing on the project by the same team that has developed Mambo as you know it today. Therefore we see it as continuing development rather than a fork.”

“One point I’d like to raise is that a lot of people are using the term fork. We hardly see the whole team unanimously choosing a better path as forking. Same project, same team, we’re just mulling over the right name. The project will continue to develop to meet the community’s expectations,” said Eddie.

Thus, the developer group will be creating a CMS under a new name—avoiding Miro’s copyrights and trademarks—which will use the GPLed Mambo code and support “a high level of backward compatibility for the [existing] 4.5.x series” Mambo program.

“The kids are waking up and realizing that Lamont has picked their pockets, too. Apparently, they’re a little cranky,” observed Brian Connolly, president of Furthermore Inc., an online publishing venture and a division of the Literati Group.

Connolly has sparred with Miro and Mambo before over copyright issues.

A version of Mambo Executives, Developers Fight for Project Control was first published in eWEEK.