Practical Technology

for practical people.

October 26, 2005
by sjvn01
0 comments

Google Opens Up About Open Source

The rumors never stop.

Google Inc. and Sun Microsystems Inc. will release a Web-based StarOffice desktop suite. Google will soon announce a new operating system.

The truth isn't anything as dramatic, but it does show a company that not only supports open source, but relies on it every day to keep the best-known search engine and allied businesses running.

In a Ziff Davis Internet interview, Chris DiBona, Google's open source program manager, said that while he can't “talk about any future products,” he also added that, to the best of his knowledge, “Google has no plans to release an operating system or an office suite.”

“I like the ideas of thin-client office programs, but I can't address products,” he added.

That said, though, DiBona added, “We do support and use open-source programs. For example, we hired people to help make OpenOffice.org better.”

For instance, Google employees, DiBona said, helped make OpenOffice.org 2.0 load faster.

Still, while Google has no plans to release end-user open-source programs, it actually already releases open-source code and programs that developers find useful.

DiBona cited Google's release of its AJAX (Asynchronous JavaScript and XML)-based AJAXSLT as an example.

“It may not be interesting to most people, but AJAX is mellow for developers. It lets them code more flexible user interfaces for Web browsers. We're trying to release more of this kind of code.”

Looking ahead, DiBona sees Google releasing “more development tools. We like showing people some of the cool things we do. We want to share more code, but programming tools like our Core Dumper or CPU profiler don't get the hype.”

All of Google's current open-source projects can be found at Google Code, the company's software site.

In the future, existing Google programs, like the Google Toolbar, Google Talk and Google Desktop may be made open source.

But, for now, “Google is focusing on releasing underlying technologies and concentrating on lower-level functionality programs,” said DiBona.

Still, he said that Google has “a long list of software to open source and we've got to start somewhere.”

When? “We have no timelines,” replied DiBona.

Within the company itself, “most Google developers use Linux desktops.” And it's not just the technical staff who are Linux and open-source users and supporters; the support comes from the top.

Sergey Brin and Larry Page, Google's founders, are both “passionate about open source,” according to DiBona.

Google's open-source staff itself is very small. There are only four or five employees tasked to the office, according to DiBona. But that doesn't tell the whole story.

“We leverage our engineers. With so many engineers at Google who are involved in open-source and Linux, many of them use part of their time to work on open-source projects.”

One example of Google hiring open-source-friendly developers is its recent hire of Sean Egan, lead developer of the popular open-source GAIM IM client, to work on Google Talk.

DiBona also expects the open-source staff to grow.

“We're tasked to make more open-source code,” said DiBona, who has been with Google since August 2004.

“We like to do this open-source stuff for both our industry and for our users. It makes a level playing field for everyone.”

That even includes proprietary programs. For instance, “once Firefox started really competing, Microsoft was forced to make Internet Explorer better for its users,” said DiBona.

“Before Firefox, IE was stagnant. Now that Microsoft has competition, they've started to improve it. This kind of competition is good for all of us.”

And this kind of improvement by competition doesn't happen just on the PC side, according to DiBona.

“If you look at the rise of the Apache web server to dominance, you can see how Microsoft has had to make IIS (Internet Information Server) better. You can see why other Web servers have disappeared.”

According to NetCraft's September 2005 statistics, Apache has almost 50 million Web servers operating on the Internet, while IIS, a distant number two, has about 14.5 million servers.

Google has also been supporting open source by encouraging students to develop it. The most prominent example of that was its $2 million “Summer of Code.”

“It was my thought that through a program like this we could infuse new blood into some long-established projects,” said DiBona.

So Google gave more than 400 young programmers a chance to work on open-source projects over the summer of 2005. More than 40 open-source groups received help from the new programmers.

The Apache Software Foundation led the way with 38 projects, followed by KDE with 24 and FreeBSD with 20.

The results were “remarkable and blew past anything I was expecting. We saw a 93 to 94 percent success rate. They did some amazing work.”


At this time, there are no hard plans for a Summer of Code 2006, but DiBona wants to do one. “We're still evaluating everything, but I want to do another one with new students.”

Indeed, “we had thought about doing a winter of code, but the students are busy with classes.”

In the meantime, though, Google has donated $350,000 to a joint open-source technology initiative at Oregon State University and Portland State University.

“Supporting the projects and institutions advancing open-source software and hardware helps ensure the continued success and advancement of open-source technologies.

“The teams at Oregon State and Portland State have done great open-source work in the past and we're excited to back their joint efforts,” said DiBona.

“This partnership between Google and important research universities is yet another indicator of the continued evolution and maturity of the Linux and open-source markets,” said Dan Kusnetzky, IDC's VP of system software research, in a statement.

In addition to fostering open-source developers' efforts and releasing code, Google's open-source office, which is under Google's engineering department, is “making sure people are using open-source software properly in their code. We also have a training mission to make sure they understand what you can, and can't, do legally with the code according to its license.”

So it is that Google, while not producing the kind of open-source headlines that some people wish it would, is nonetheless strongly supporting, producing and using open source.

A version of this story first appeared in eWEEK.

October 20, 2005
by sjvn01
0 comments

When is Debian not Debian?

There are times when I just want to crack some open-source heads together.

Take, please take, for example, the current fit in Debian circles over whether the DCC Alliance can use the Debian name or trademark.

On one side, you have the DCC Alliance members: Credativ GmbH, Knoppix, LinEx, Linspire, MEPIS LLC, Progeny Linux Systems Inc., Sun Wah Linux Ltd., UserLinux, and Xandros Inc.

What do they have in common? Ding! Ding! Ding!

That’s right, they all build Linux distributions around Debian. They all employ Debian developers. They’re all about — say it with me — Debian!

Who is the DCC Alliance’s fearless leader?

Why none other than Ian Murdock! You know? The guy who founded Debian.

The DCC Alliance’s goal? To create an LSB (Linux Standard Base) 3.0-compliant, Debian 3.1 (“Sarge”)-based core distribution. This, in turn, is designed to serve as the basis for DCCA members’ custom Linux distributions. The code is also being released back to the Debian community.

The problem?

Some Debian developers are very, very upset that people are getting confused about the difference between the DCC Alliance and Debian. They’re afraid that they’ll lose their trademark.

Now, keep in mind that “they” also includes both people who are for the DCC Alliance and those who are against it.

In short, what we have here is an internal fight over who controls the Debian trademark and logo.

So, the DCC Alliance — and now you know why I haven’t been spelling it out — decided to make peace and just drop “Debian” from their original name, the Debian Common Core Alliance.

As for the Debian logo, well the DCC Alliance (DCCA) is still using that because they can. The logo’s license reads: “This logo or a modified version may be used by anyone to refer to the Debian project, but does not indicate endorsement by the project.” That seems to cover it, to me.

So, that’s it, right?

I wish!

Now, some Debian figures have decided to take the mess public under the guise of reporting. David ‘cdlu’ Graham, a member and officer of the board of directors of Software in the Public Interest Inc., the Debian Project’s legal side, reported on this tempest in a tea cup in NewsForge.

Graham then followed up with a short interview with Murdock, in which Murdock explained why the DCC Alliance hadn’t made a big deal about the name change.

“We haven’t refused to issue a press release; we just felt it wasn’t the appropriate venue for such an announcement. I posted a message to my blog because I know members of the press who have appropriate context follow it, and if they thought their readers would consider the announcement news they would write about it,” said Murdock.

Now, as it happens, I’m one of those members of the press who follow Debian, the DCC Alliance, and many of its member companies, and I agree with Murdock. This isn’t news.

What may become news though, if these cranky anti-DCC Alliance people can’t get their act together, is a pointless Debian civil war.

To quote one anonymous NewsForge letter-writer, “If the Debian developers spent as much time worrying about the details of their release as they do about this nonsense with the DCCA, we’d have a better, more frequently released Debian.”

Exactly.

It’s more than that, though. A lot more.

It’s these kinds of petty fights that ensure that Bill Gates and Microsoft rule the software world.

Even now, some people are still refusing to let the issue go. One declared that Murdock’s blog announcement of the name change “is some kind of insulting joke.”

Let me spell it out for you. Anyone who knows enough about Debian to care about the trademark knows enough to know that the DCC Alliance does not equal Debian.

Kissing cousins, yes. Identical twins, no.

And as for everyone else, if they don’t know, they certainly don’t care!

What do the vast majority of Linux users, never mind computer users at large, see? They see a petty power struggle.

They see all the things that Microsoft sales people say about Linux and its developers — “they’re immature, and they don’t know how to run a business” — in action.

Thanks folks. We needed that the same way we needed the OSF (Open Software Foundation) versus UI (Unix International) fights.

Don’t remember them? Of course not, they’re what killed off any chance Unix had of beating Windows to the punch back in the late 80s and 90s.

If you’re involved in this mess and you really want to help Debian, first, handle your disputes privately and quietly. Second, realize that there is nothing about this conflict that really matters to 99.9999% of most people, and it really shouldn’t matter that much to you. Finally, as that same anonymous reader I quoted above put it so succinctly: “Shut up and code!”

A version of this story first appeared in Linux-Watch.

October 16, 2005
by sjvn01
0 comments

Novell Investor Wants Company to Fire Employees, Sell Divisions

If Blum Partners, a minority Novell stockholder, has its way, Novell will cut $225 million in expenses and sell off Celerant, ZENWorks, GroupWise and Cambridge Technology Partners.

Blum Partners revealed this week that, disappointed by recent Novell results, it wants big changes at the NetWare and Linux vendor.

In an unusual move, the Blum Capital Partners LP investment firm publicly revealed it was very unhappy with Novell's current direction.

The San Francisco-based company published several letters to Novell CEO Jack Messman detailing its complaints and suggesting changes in its SEC (Securities and Exchange Commission) Schedule 13-D.

The 13-D is a form that a company or individual must file within 10 days of obtaining more than 5 percent of a company's stock.

Earlier this year, Blum only owned 1.33 percent of Novell stock.

Then, in late August, Blum acquired enough shares to put it over the 5 percent mark.

Blum had initially outlined a new path for Novell in May and June, but Messman disagreed with Blum's prescription for Novell and did not implement it.

After Novell's disappointing financial results for its third fiscal quarter, which ended on July 31, 2005, and were reported on Aug. 25, Blum apparently decided to take a larger position in Novell and go public with its plans.

Blum wants Novell to make changes in four broad areas.

In its Sept. 6 letter, Colin Lind, managing partner, and Greg Jackson, partner, wrote, “It is imperative that Novell must: (1) reduce costs to an appropriate level necessary to operate all of its businesses profitably; (2) divest non-core businesses; (3) become a leader in Linux and identity management through joint ventures and selective acquisitions; and (4) optimize its capital structure to maximize shareholder value.”

Specifically, Blum wants “funds for R&D, sales and marketing, and general and administrative functions” of over $225 million to be cut in 2006.

“In addition, our $225 million estimate may prove conservative as we have identified only the obvious targets for cost savings, including the company's two corporate jets, an overstaffed R&D department, the redevelopment of legacy products such as ZENWorks and GroupWise, and the maintenance of over 400 NetWare engineers.”

Next, Blum wants to sell non-core businesses to “reduce unnecessary costs, monetize value, and redeploy funds more productively. We estimate that Novell could generate approximately $500 million of pretax cash (or equity value in spin offs) as follows: $175 million for Celerant, $150 million for ZENWorks/Tally Systems, $100 million for GroupWise, and $75 million for Cambridge Technology Partners.”

As for Linux, Blum wants to acquire or partner with other Linux-related software firms as soon as possible to help create “The Open Stack.”

This last is in line with Novell's existing plans. Blum, however, wants Novell to move more quickly in this area.

Finally, Blum finds itself confounded as to why “Novell has yet to implement a major share repurchase program of $500 million.”

According to sources, Messman has little, or no, interest in going along with Blum's suggestions.

Novell employees have told eWEEK.com that in an internal memo sent out late last week, Messman told employees that while Novell's financial performance must be improved, current cost-cutting efforts and positive news about Open Enterprise Server, ZENWorks, identity management programs and Novell's improved traction in India and China point to a brighter future.

Analysts are mixed about how well Blum's suggestions would work.

“Novell is making progress in its Linux and identity management software initiatives,” said Stacey Quandt, research director for the Aberdeen Group.

“However there is an imbalance between these new initiatives and the legacy NetWare business and associated projects in research and development that may not be as relevant today.”

In addition, “Novell should consider spinning off business units that do not contribute to the core business and it should make further acquisitions in the application software market to enhance its security portfolio,” said Quandt.

Dan Kusnetzky, IDC VP of system software, is puzzled by some of Blum's suggestions on divisions that should be sold off.

“Since Novell's focus is managing, integrating and securing IT environments, I was rather puzzled why this analyst would suggest that Novell stop developing this technology and to sell off what they have.”

And, as for cutting back on research, Kusnetzky had this to say: “Novell has been working on a hypervisor project for at least five years. Although the base technology has changed a number of times during that period, it's pretty clear that Novell now plans to host both NetWare and Linux personalities on a Linux kernel using Xen.”

This makes Kusnetzky “question why someone would suggest that the R&D that produced this type of capabilities would be expendable.”

A version of this story first appeared in eWEEK.

September 29, 2005
by sjvn01
0 comments

There are too darned many Linuxes

By my count, there are one million, two hundred and seventy thousand, and four hundred and seventeen Linux distributions.

Nah, I’m kidding. There are only, by my quick count, one hundred and forty-one Linux distributions. Currently shipping. For the Intel platform. In English.

Is it just me, or is something wrong here?

Now, I love operating systems. You name it — CP/M, TOPS-20, VAX/VMS, AmigaOS, VM/MVS, Windows from 1.0 up to Windows Embedded for Point of Service, and more shades of Unix and Linux than you can shake a stick at — chances are, I’ve run it.

For me, besides being an odd hobby, part of my stock in trade as a technology journalist is the theory and practice of operating systems.

What’s your excuse?

Seriously. Why are there so many people wasting their time inventing and re-inventing Linux?

Yes, I said “wasting” and I meant it.

I mean, come on guys! Over one hundred Linux distributions?!

Don’t you think your time could be better spent on making the Linux mainstream better? Linux is an equal-opportunity operating system. If you can write the code, Linus can use it.

Or, if that’s too grand for you, why not help with OpenSUSE? It’s a new project and it’s a nifty Linux distribution. What’s not to like?

Not your speed? OK, well, Fedora, Red Hat’s community distribution, can also stand some help. Or, instead of working on lots of little Debian distributions, you could work on the main Debian tree.

If you’re building a distribution to learn the ins-and-outs of Linux, I think you’ll learn more by working on one of the established projects.

By reading over the various distribution development mailing lists — a must for any would-be Linux developer — you’ll quickly learn to avoid the potholes that have swallowed up countless programmers before you.

More to the point, how much are you really contributing to Linux and open-source by spending hours on rebuilding Linux?

There are a ton of open-source projects out there that could use your help. For example, Linux still needs drivers. Until half or more of PCs are running Linux, it probably always will.

Want to build some specific functionality? Maybe, there’s already a project out there working towards that goal that doesn’t involve rebuilding Linux at the same time. Check out SourceForge. You might just be surprised at what is already being worked on.

Absolutely sure that you have a new, wonderful idea for a distribution that no one else has ever had?

Well. I doubt it.

At the very least, wander through some of the distributions already out there on LinuxISO and DistroWatch. Between the two of them, you’ll find most of the current English-language Linux distributions.

If you still can’t find the distribution of your dreams, maybe you want to think about whether it’s really that good an idea.

Don’t get me wrong. I’m not saying there isn’t a place for new or small Linux distributions. I’m extremely fond of MEPIS and Xandros.

I also always carry a couple of the live rescue Linux CDs with me like INSERT (Inside Security Rescue Toolkit) and SystemRescueCd. They’ve saved my friends’ Windows and Linux machines more times than I can tell.

On the other hand, you may not know it, but while Linux can go on forever, Linux distributions can, and do, die.

Immunix? Stampede Linux? Storm Linux? Those were noteworthy Linuxes in their day, and they’re all history now. Even the support of a major company, as was the case with Hewlett-Packard and HP Secure Linux, is no guarantee of success.

No, when you get right down to it there’s really very, very little need for yet another Linux distribution.

So, if you’re tempted, do the open-source world a favor — work on an existing project. In the end, you’ll be glad you did.

September 12, 2005
by sjvn01
0 comments

eBay Bids on Becoming VOIP Power

In a real shift, eBay, as is evidenced by its purchase of Skype, doesn't just want to help you auction off your old stuff; it wants to become your Internet phone company as well.

Good luck, guys.

When eBay CEO Meg Whitman said, “Communications is at the heart of e-commerce and community,” who can disagree with her? What I have trouble with is “combining the two leading e-commerce franchises, eBay and PayPal, with the leader in Internet voice communications … will create an extraordinarily powerful environment for business on the Net.”

I see no fewer than three problems in that last statement.

The first is that I don't see any natural synergy between Skype's VOIP (voice over IP) and eBay's auctioning and online purchasing systems. Do you?

Yes, you could have voice bids for online auctions, but why bother? eBay has honed its online auction system to a fine edge. For eBay's core businesses, Skype doesn't really add anything.

Next, I thought eBay and PayPal made money from consumer-to-consumer or business-to-consumer plays, not business-to-business. When I think about business wheeling and dealing, I don't think of eBay, PayPal or, for that matter, Skype.

Finally, while Skype's 40 million customer base is bigger than that of Packet 8, VoicePulse and Vonage combined, there's serious—much more serious—competition coming into the field: Google, Microsoft and Yahoo.

Google has already dipped its toe into VOIP by introducing it into its new IM client, Google Talk Beta. Unlike Skype, which uses a proprietary protocol, Google Talk uses the open SIP (Session Initiation Protocol). Google is already working on interoperability with other SIP-compliant VOIP services such as EarthLink's Vling and the SIPphone team's Gizmo Project.

Microsoft is already deploying VOIP in businesses with LCS (Live Communication Server) 2005 and MSN Messenger for the hoi polloi. The company also recently purchased Teleo, a VOIP provider.

Yahoo isn't being left out of this charge either. Earlier this summer, Yahoo bought Dialpad Communications to add VOIP calls to the PSTN (Public Switched Telephone Network) to its existing Yahoo IM client's PC-to-PC call capabilities.

So, why is eBay doing this, instead of sticking to its strong points by focusing on its recent push to make Internet micropayments viable?

Part of it may be that the other companies are moving in on eBay's home turf. While no one has mounted a credible threat against eBay in online auctions in years, Google is getting ready to move against PayPal with its “Google Wallet” project.

And, as trite as it sounds, I don't think you should underestimate the “everyone's doing it” factor. In turn, what's driving that is that broadband, necessary for practical VOIP use, is becoming increasingly common. Informa Telecoms & Media predicts that the global broadband Internet market will be over 190 million users by the end of 2006.

That's a lot of eyes looking at ads on a PC during a phone call.

eBay, even more so than Google and Microsoft, knows its users. The auction company doesn't just know what sites its users look at; it knows what they actually buy. That, in turn, means that eBay has the capability to place ads it knows its users will look at in front of them.

That's not a small thing.

As Cisco CTO Charlie Giancarlo said recently, “The world in the future will be made up of the quick and the dead. In business terms, the quick are those who capture information about their market and suppliers and act on that information rapidly. The dead are people who don't know and can't act on information.”

eBay clearly wants to be quick. My questions boil down to: All things considered, will eBay be quick enough?

A version of this story first appeared in eWEEK.

September 9, 2005
by sjvn01
2 Comments

NUMA: Theory and Practice

Non-Uniform Memory Access (NUMA) used to be one of those concepts that only people who built massive multiprocessing machines cared about. Things have changed.

With today’s hottest processors easily passing the 3GHz mark, and 64-bit chips like Intel’s Itanium 2 and AMD’s x86-based Opteron becoming readily available, the need for faster, more powerful memory management has grown. Indeed, as Symmetrical MultiProcessing (SMP) computing, clustering, and distributed computing become commonplace, the need for better memory management has become critical.

Why? The answer lies in an almost 40-year-old computer engineering rule of thumb called Amdahl’s balanced system law: “A system needs a bit of I/O per second per instruction per second.” That works out to 8 million instructions per second (MIPS) for every megabyte per second (MBps) of data throughput.

Now, MIPS haven’t been regarded as the be-all and end-all of CPU benchmarking for years, but they’re still a useful approximation of overall performance. And what’s important about using MIPS to justify NUMA is that MIPS give interesting insights into the overall performance of memory and CPU together. So, for example, BiT-Technologies found that a 2.4GHz Pentium 4 (Northwood) runs marginally faster (168.73 MIPS) with Double Data Rate-Synchronous DRAM (DDR-SDRAM) memory than the same processor with Rambus memory (166.83 MIPS).

There are two morals to this performance story. The first is that even a single hot, but already commonplace, 32-bit processor is starting to push the limits of standard memory performance. The second is that even differences between conventional memory types play a role in overall system performance. So it should come as no surprise that NUMA support is now in server operating systems like Microsoft’s Windows Server 2003 and in the Linux 2.6 kernel.

How NUMA Works

Not all memory is created equal. Generally speaking, the closer, in terms of access time, memory is to a processor, the faster the system’s overall I/O will go, thus improving all aspects of system performance.

The example we all probably know best is the use of cache to store frequently accessed data from a hard drive. Since even slow memory’s access speed is much faster than even the speediest hard drive’s, the end result is a much faster system.

The same trick works for processors and memory by using small amounts of fast RAM either on the chip itself (L1 or primary cache) or immediately next to the CPU (L2 or secondary cache) to speed up main memory performance in the same way that hard drive cache speeds up disk performance.

NUMA takes cache’s basic concepts of memory locality and expands on them so that multi-processor systems can make effective use of not just their local memory but also of memory that is on different buses or, for that matter, is only connected to the processor over a fast network.

Specifically, NUMA has in the past been a hardware memory architecture in which every processor, or group of processors, can access other CPUs’ memory. That does not mean they don’t have access to their own local memory; they do. But by sharing memory, they’re much more capable of performing efficient parallel processing and of dividing massively data-intensive tasks into manageable sizes.

Does this sound familiar? It should: SMP and clustering both try to do the same things for processing over either a system bus, backplane or a fast network connection. What NUMA does is add memory management to both of those technologies so they can gain better memory access for overall faster performance.

For instance, in SMP, memory is usually shared over an interconnect bus. As the number of processors increases, so does the bus traffic, and eventually throughput starts to decline.

NUMA machines, however, use multiple buses to handle memory, thus making it much harder to slow a system down by throwing too much data at it. At the same time, though, NUMA provides a linear memory address space that enables processors to directly address all memory. Distributed memory, a technique with similar aims, has to contend more with data replication overhead.

Within most NUMA systems, as in an IBM NUMA-Q box, there is an additional area of fast memory called an L3 cache. All of the processors on a given bus first access memory from this cache and then look to the bus’s main memory resources for data.

A NUMA machine’s main memory is its Uniform Memory Access (UMA) region. Together, a set of local processors and its UMA are called a node. In a node, the processors share a common memory space, or “local” memory. For example, an SMP system’s shared memory would make up a node. This memory, of course, provides the fastest non-cache memory access for the node’s CPUs. Multiple nodes are then combined to form a NUMA machine. Memory transfers between nodes are handled by routers. Memory that can only be accessed via a router is called “remote” memory.

Needless to say, the name of the game in NUMA is to make those memory transfers as fast as possible to avoid a node’s processors working with out-of-date data. There’s a whole set of problems associated with this, and they’re addressed by a variety of techniques deriving from cache coherency theory.

Programming for NUMA

One of the simplest ways to avoid coherency problems is also the key to programming NUMA machines effectively: maximize references to local memory on the node while minimizing references to remote memory. After all, access to remote memory may take three to five times as long as access to local memory, even in a well-designed NUMA system. By programming in this way, and letting the underlying hardware architecture and operating system deal with NUMA’s memory coherence issues, developers can create effective programs.

That’s easier said than done. To do it today, most developers use tools like Multipath’s Fast Matrix Solver. These typically work by binding I/O threads to specific processors and nodes and by enabling you to explicitly place elements into local node memory.

A NUMA architecture or operating system tries to allocate memory efficiently for you, but its memory management works best for general cases, not for your specific application. As a NUMA system’s overall load increases, its memory management overhead takes up more time, resulting in slower overall performance.

Of course, NUMA doesn’t work magic. Just because your application ‘sees’ what appears to be an extraordinarily large memory space doesn’t mean that it knows how to use it properly. Your application must be written to make it NUMA-aware. For example, if you’re old enough to recall writing applications using overlays to deal with the high likelihood of having to dip into virtual memory, those techniques might come in handy for dealing with remote memory.

To help you deal with remote memory, NUMA uses the concept of ‘distance’ between components such as CPUs and local and remote memory. In Linux circles, the metric usually used to describe ‘distance’ is hops. Less accurately, the terms bandwidth and latency are also bandied about. How the term is used varies, but generally speaking the lower the number of hops, the closer a component is to your local processor.

Sounds like a lot of trouble, doesn’t it? So why use NUMA? The first reason, for many of us, is that NUMA can make scaling SMP and tightly-coupled cluster applications much easier. The second is that NUMA is ideal for the highly parallel computations that supercomputers thrive on, such as weather modeling.

It’s the first reason, combined with the enormous data hunger of today’s SMP boxes, that has brought NUMA out of a few specialized niches into Server 2003 and Linux 2.6. While long-term deployment of NUMA-aware applications is probably still a few years away, as development tools come out that make NUMA memory management easier for application developers, NUMA should emerge as the memory management technique of choice for high-end server application developers.