Practical Technology

for practical people.

June 12, 2003
by sjvn01
0 comments

Cyber Cynic: SCO’s Hands in the Source Jar

I’ve known for about a week now (known, not assumed, not puzzled out, known) that SCO had mixed Linux code into Unix. I know it because a source I trust, who was in a position to know, told me that was the case. I haven’t written it up as news, though, because the person who told me this doesn’t want their name used, and I haven’t been able to get anyone else who was at SCO in those days to confirm or deny the story.

For that matter, I can’t get anyone working at SCO today to confirm or deny that SCO did some code mixing of their own.

Those outside of SCO are reluctant because, quite frankly, they don’t want to be sued by SCO. Those inside either don’t want to lose their jobs or, at the top, simply don’t have a good answer. And why would management want to answer it? SCO’s own public statements indicate that they were mixing Unix and Linux together. They can’t deny it, but they can’t confirm it either without shooting their lawsuit in the foot.

Now, however, Peter Galli at eWeek has reported that “parts of the Linux kernel code were copied into the Unix System V source tree by former or current SCO employees.”

While no one yet has put their name behind these accusations, this news can’t come as any surprise to anyone who works on operating system compatibility issues.

SCO created the Linux Kernel Personality (LKP) to enable users of SCO OpenUnix, now back to its old name of UnixWare, to run Linux binaries at the application binary interface (ABI) level. As SCO puts it, “The LKP for UnixWare 7.1.3 and Open UNIX® 8 (UnixWare 7.1.2) provides a complete Linux system hosted on the UnixWare kernel.”

In layman’s terms, that means you could take a compiled Linux program, drop it on LKP-powered UnixWare, and run it. No fuss, no muss, no recompiling from source code; you’d just run your Linux program on UnixWare.

Now, I’m not much of a coder, but if you want Linux binaries to run directly on Unix, and not in a virtual machine, the way VMware enables people to run Windows on Linux, you need to retrofit Linux code into Unix. Other programs, such as CodeWeavers’ CrossOver Office and its open source ancestor Wine, work by emulating Windows’ application programming interface (API). Both approaches are difficult tricks to pull off, but neither requires any Linux code to be placed in the Unix kernel. For LKP to do its job, though, you must merge some Unix and Linux code at the kernel level, and that’s exactly what I’m told SCO did.

Specifically, at a minimum, SCO programmers had to merge Linux code with the Unix kernel to deal with kernel threads, networking, and inter-process communication (IPC). In addition, vital system calls like clone, ipc, and socketcall had to be cut, slightly modified, and pasted for LKP to work. And, yes, LKP does work.
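To give you a feel for what that kernel-level merging involves, here’s a toy sketch of my own, purely illustrative and emphatically not SCO’s LKP code, which has never been published. The point is only the shape of the problem: a Linux binary traps with a Linux system-call number, and the host kernel has to answer with Linux-compatible behavior. The syscall numbers below are the classic Linux x86 ones; everything else is made up.

# Toy sketch of a "Linux personality" syscall dispatch layer. NOT SCO's code;
# the handler names are invented. Only the shape of the problem is real: a
# Linux binary traps with a Linux syscall number, and the host kernel must
# answer with Linux-compatible behavior.

LINUX_SYS_SOCKETCALL = 102   # classic Linux/x86 syscall numbers
LINUX_SYS_IPC = 117
LINUX_SYS_CLONE = 120

def handle_clone(flags, child_stack):
    # Map Linux clone() thread/process semantics onto native kernel threads.
    raise NotImplementedError("translate clone() flags to native primitives")

def handle_ipc(call, *args):
    # Linux funnels System V IPC (shmget, semop, msgrcv...) through one
    # syscall; the personality layer must demultiplex and translate it.
    raise NotImplementedError("demultiplex SysV IPC operations")

def handle_socketcall(call, *args):
    # Same story for sockets: one Linux entry point, many operations.
    raise NotImplementedError("demultiplex socket operations")

SYSCALL_TABLE = {
    LINUX_SYS_SOCKETCALL: handle_socketcall,
    LINUX_SYS_IPC: handle_ipc,
    LINUX_SYS_CLONE: handle_clone,
}

def linux_personality_trap(syscall_number, *args):
    """Entry point hit when a Linux binary makes a system call."""
    handler = SYSCALL_TABLE.get(syscall_number)
    if handler is None:
        return -38   # -ENOSYS: this Linux syscall isn't implemented
    return handler(*args)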

What all that means is that SCO’s intellectual property case against IBM, and its threats against Linux vendors, have an LKP-sized hole in them. I can’t make anyone talk to me; I can’t make anyone let me use their name. IBM’s attorneys can, though, and I’m sure they will if the case comes to court.

I still doubt that it will. SCO’s best chance, as it always has been, is to get bought out. I think they know they can’t win in court. But the longer they can drag matters out (and they can probably afford to do so for a long time, given their contingency arrangement with their law firm and the money they got from Microsoft for IP rights), the more likely it is that they can get bought out by IBM or another company.

Technically speaking, I know that SCO is in the wrong, but from a purely pragmatic business viewpoint, the longer SCO can drag this out, the more Linux, and the companies that support it, will be hurt in the marketplace. And, of course, SCO’s management is gambling that this will pay off in big bucks for SCO’s current administration and owners. Of course, SCO’s current customers and Linux users everywhere are getting the short end of the stick, but SCO clearly doesn’t care about them. SCO’s in the lawsuit business now, not the IT business.

May 23, 2003
by sjvn01
0 comments

Cyber Cynic: Self Destructive DVDs and New Business Models

Walt Disney’s Buena Vista home video division, using technology from Flexplay Technologies, is going to start selling DVDs in August that self-destruct after two days. It’s both an incredibly good and an incredibly stupid idea.

The technology, ez-D, is elegant and simple. Thanks to oxidation, the discs change from a DVD-readable red to an unreadable black. You open the package, letting the oxygen in, watch the movie, and in 48 hours you have a coaster instead of a DVD.

Buena Vista has two motives for these novel DVDs. The business motive is that since buyers don’t have to return the DVDs, they can be sold pretty much anywhere. While Buena Vista isn’t telling, it’s clear from their language that they’re going to price these disposable DVDs at close to current rental rates.

For idiots like me who waste money by being congenitally unable to get a DVD back to Blockbuster in time, ‘rental’ DVDs make perfect sense. Better still for Disney, there are enough people like me, or people who’d pick up a DVD as an impulse buy if it were three to five bucks at the local 7-11, that this technology will almost certainly give the financially struggling Mouse a boost. That’s the good idea.

The bad idea, the incredibly stupid idea, that some people at Disney (not Flexplay) have is that ez-D is somehow an anti-cracker technology. Oh, please!

The notion that the disc’s limited lifespan, the result of a chemical reaction rather than a software-based Digital Rights Management (DRM) scheme, will somehow stop crackers from getting at its contents is nonsense. With 48 hours to crack the DVD, and cracking and DVD-copying software commonplace, ez-D is no more an effective copy protection than the shrinkwrap the DVD comes in.

Besides, even though legal action against DVD encryption and copying software companies like Internet Enterprises Inc., RDestiny LLC, HowtocopyDVDs.com, DVDBackupbuddy.com and DVDSqueeze.com is heating up, with multiple lawsuits from Paramount and Twentieth Century Fox, the studios don’t seem to understand that breaking copy protection per se isn’t really the problem. The DVD-copying companies claim that they’re simply giving legal owners of a DVD the ability to make backup copies of their discs. The studios reply that breaking a DVD’s copy protection is illegal under the 1998 Digital Millennium Copyright Act (DMCA) regardless of the copy’s use.

Of course, the real problem is that technology has fundamentally broken the business model of high-priced, restricted access to copyrighted material. No copy protection scheme will stand against copy-cracking efforts. No lawsuits will stop the copy-protection-breaking software from spreading.

Technology has opened Pandora’s copyright protection box forever. Neither technology nor the law can close it.

There is another way, though: embrace the new models. Sound impossible? Think again. Apple seems to have done pretty well with its iTunes Music Store, hasn’t it?

May 23, 2003
by sjvn01
0 comments

Cache Coherency: Now More Than Ever

Caching used to be so simple. You used caching on uniprocessor, single-user systems to free yourself from the prison of slow I/O, whether that I/O was from the system bus to the processor or from the hard drive to the bus. As multiple-CPU systems, multi-user systems, and the Web became commonplace, simply placing fast RAM between system components with varying I/O throughputs was no longer enough. System designers had to make sure that the data available to the processor was the real data, and so the almost intractable problems of cache coherency emerged.

Cache Coherency Basics

Over the last few years, cache coherency theory and practice have become more critical than ever. With processors far outpacing the bus in speed, the rapid rise of Symmetric Multiprocessing (SMP) operating systems like Windows 2000 and 2003 Server, Solaris, and Linux, and the growing popularity of clustering systems such as Scyld Beowulf and the Multicomputer Operating System for UniX (MOSIX), memory and disk access has become the bottleneck in high-performance computing.

So it is that now, more than ever, system designers must deal with the fundamental problem of cache coherency: delivering and writing current data without error. Cache coherency theory and practice offer many options to handle these problems, but none of them are perfect.

The core problem with reading data is deciding when to kick some data out of the cache and what should be thrown out. The most common solution is to use a least-recently-used algorithm to decide which data to exile. However, this isn’t fool-proof, or even algorithm-proof.
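As a rough illustration, and nobody’s production code, here’s a least-recently-used cache boiled down to a few lines of Python: every hit moves an entry to the “most recent” end of the line, and when the cache is full, whatever sits at the “least recent” end gets thrown out.

from collections import OrderedDict

class LRUCache:
    # Minimal least-recently-used cache sketch (illustrative only).

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()           # key -> value, oldest first

    def get(self, key):
        if key not in self.entries:
            return None                        # cache miss
        self.entries.move_to_end(key)          # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # evict the least recently used entry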

More polished caching software and firmware use a least frequently used algorithm. At the price of memory and processing overhead, these schemes ensure that frequently accessed data will stay in the cache even if it hasn’t been called on recently.
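A least-frequently-used cache, again sketched as a Python toy rather than any real product’s logic, keeps a hit counter per entry; those counters are exactly the memory and processing overhead mentioned above, and the entry with the lowest count is the one that gets evicted.

class LFUCache:
    # Minimal least-frequently-used cache sketch (illustrative only).

    def __init__(self, capacity):
        self.capacity = capacity
        self.values = {}    # key -> value
        self.hits = {}      # key -> access count: the bookkeeping overhead

    def get(self, key):
        if key not in self.values:
            return None                                   # cache miss
        self.hits[key] += 1
        return self.values[key]

    def put(self, key, value):
        if key not in self.values and len(self.values) >= self.capacity:
            coldest = min(self.hits, key=self.hits.get)   # least frequently used
            del self.values[coldest], self.hits[coldest]
        self.values[key] = value
        self.hits[key] = self.hits.get(key, 0) + 1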

Cache Writing

Another, probably even more important, issue is how a cache contends with data writing. Most caches will check, using a variety of algorithms, to see if the write request will force the cache to waste time overwriting identical data that has already been written. At the same time, anything that optimizes the data-writing process, thus minimizing the mechanical, and therefore most time-consuming, element of writing data to disk, is clearly advantageous to system performance.

Write-through designs automatically write data to disk as soon as possible, which reduces the probability of data corruption. Unfortunately, write-through caches are not the most processor-efficient means of handling data writes. By preempting clock cycles and occupying bus bandwidth at times when they’re needed for ongoing processes, write-through caches slow down the system.

Because of this, write-through caching tends to be used only in the most mission-critical systems, like Online Transaction Processing (OLTP), with servers running extremely fast processors, drives, and data buses.

The alternative approach, deferred writes, delays disk writes until the system is free from other activities. This used to be, and still is, a favorite method in non-mission-critical mainframe environments. But its speed advantage has overcome users’ data integrity concerns, and so end-user operating systems, like Windows XP, delay disk writes by default.

With the rise of non-stop systems, though, delayed write is making a comeback even in the OLTP world. Oracle’s Oracle 9i DBMS, for example, utilizes delayed writes even in its clustering software, Real Application Clusters. In this case, the underlying system is presumed to be good enough at automatically preventing data-writing conflicts that delayed-write caching’s speed advantages outweigh data integrity concerns.
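Boiled down to a toy Python sketch, illustrative only, the trade-off looks like this: a write-through cache pushes every write straight through to the backing store, while a write-back (deferred) cache just marks the line dirty and flushes it later, when the system has time.

class WritePolicyCache:
    # Toy cache contrasting write-through and write-back (deferred) writes.

    def __init__(self, backing_store, write_through=True):
        self.backing = backing_store   # a dict standing in for the disk
        self.write_through = write_through
        self.lines = {}                # cached key -> value
        self.dirty = set()             # keys not yet flushed to the backing store

    def write(self, key, value):
        self.lines[key] = value
        if self.write_through:
            self.backing[key] = value  # hit the "disk" on every write: safe but slow
        else:
            self.dirty.add(key)        # defer: fast, but data is at risk until flushed

    def flush(self):
        # For write-back caches, called when the system is idle or at sync time.
        for key in self.dirty:
            self.backing[key] = self.lines[key]
        self.dirty.clear()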

Eventually, as data is changed, the cache copy will be updated. One of the troubles designers must avoid here is wasting time updating cache copies that won’t be needed again.

On the flip side, suppose you’re using a write-invalidate system, in which the most recent change invalidates any other existing copies of the data, and your co-worker Esther makes other changes. Ideally, the cache coherency protocol will validate and update her copy from your cached version of the data.

Sounds easy, but it’s not. For example, if you and Esther are simultaneously updating a database, your changes invalidate her copy, and her changes invalidate yours. Then both caches start registering misses because neither copy can be updated until it has been re-validated from the other. Missed reads and validations can quickly begin to clog up data communications. In the worst case, you end up with a virtual traffic jam on the bus or network, in what’s called the ping-pong effect.
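Stripped down to a toy Python model, far simpler than a real protocol such as MESI, write-invalidate and its ping-pong failure mode look like this: every write marks the other caches’ copies invalid, so two caches writing the same block take turns forcing misses on each other.

class InvalidatingCache:
    # Toy write-invalidate cache, used below to show the ping-pong effect.

    def __init__(self):
        self.peers = []       # the other caches holding copies of this block
        self.copy = None
        self.valid = False
        self.misses = 0

    def read(self, memory):
        if not self.valid:
            self.misses += 1              # an invalidated copy must be re-fetched
            self.copy = memory["value"]
            self.valid = True
        return self.copy

    def write(self, value, memory):
        self.copy, self.valid = value, True
        memory["value"] = value
        for peer in self.peers:           # invalidate every other cached copy
            peer.valid = False

memory = {"value": 0}
mine, esthers = InvalidatingCache(), InvalidatingCache()
mine.peers.append(esthers)
esthers.peers.append(mine)
for i in range(5):                        # alternating writes to the same block
    mine.write(i, memory)                 # invalidates Esther's copy...
    esthers.read(memory)                  # ...so she misses and re-validates
    esthers.write(i + 100, memory)        # which invalidates mine...
    mine.read(memory)                     # ...and I miss: the ping-pong effect
print(mine.misses, esthers.misses)        # 5 5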

Avoiding this usually requires data locking similar to that deployed in most DBMSs. Coordinating cache data locking and DBMS locking, as you might imagine, can be a real challenge.

Another fundamental problem is how to determine the optimal cache size for a system. No master key or algorithm exists to solve this one. You might think that bigger is always better, but you’d be wrong. As cache size increases, cache misses at first decrease, but not in a linear fashion. The ratio of misses to hits shrinks until it reaches the data-pollution point, which is determined by the size of the cache and its component memory blocks; past that point, misses begin to increase again. Using smaller memory blocks, or pages, delays data pollution in large memory caches, but it won’t stop it.

Multiprocessor Cache Concerns

Multiprocessor systems must also contend with three other issues: memory, communications, and bandwidth contention.

At first glance it might seem that simply enabling processors to share a common cache would be a grand idea. The advantage of this approach is that applications that can make good use of shared data and code will need less memory and will execute more efficiently. The fatal disadvantage is that it only works well when each processor has direct, immediate access to the cache, a situation that only exists in well-designed SMP systems.

Even then, memory contention, which arises when multiple processors call on a single memory location, can cause serious performance problems.

Another problem is bandwidth contention, which can occur whether the processors are located on a network or on a common bus. The larger a multiprocessor system or cluster becomes, the more time it requires to carry out data communications, and the more likely it is that there will be a data traffic jam on the bus.

So it is that the most popular approach is to furnish every system processor with its own cache. Private caches avoid most memory-contention problems. However, this approach increases communications and bandwidth contention because coherency must be maintained between the caches. And, needless to say, you then face the complexities of maintaining coherency across multiple caches.

Maintaining Coherency with Multiple Caches

So, how do you go about maintaining coherency with multiple caches? There are two main approaches: snoopy protocols, which constantly monitor bus transactions, and directory-based protocols, which keep a map of data memory locations.

In both cases, each cache controller looks for cache-consistency commands in the data stream. When it finds one, the controller processes it to see whether it applies to its own cache.
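In miniature, and purely as an illustration, a snooping controller is just a listener on the bus: every transaction is broadcast to every controller, and each controller checks whether the address matches one of its own lines, invalidating its copy if someone else wrote to it.

class SnoopingController:
    # Toy snoopy cache controller: it watches every bus transaction and
    # reacts only to addresses it is actually caching.

    def __init__(self, cache_lines):
        self.cache_lines = cache_lines            # address -> {"valid": bool}

    def snoop(self, transaction):
        line = self.cache_lines.get(transaction["addr"])
        if line is None:
            return                                # not our line; ignore it
        if transaction["op"] == "write":
            line["valid"] = False                 # someone else wrote it: invalidate

class Bus:
    # Every transaction is broadcast to every controller on the bus.

    def __init__(self, controllers):
        self.controllers = controllers

    def broadcast(self, transaction):
        for controller in self.controllers:       # this broadcast is the bandwidth cost
            controller.snoop(transaction)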

The disadvantage to these bus approaches is that our buses have limited bandwidth. In particular, with the snoopy approach, the more often a controller “snoops” on the bus, the less bandwidth is available for the bus’ main job of ferrying information back and forth. In addition, the broadcast nature of cache-consistency commands requires even more valuable bandwidth. Thus, you typically find snoopy protocols used in situations with very high bus speeds, such as Massively Parallel Processing (MPP) and Non-Uniform Memory Access (NUMA) systems.

Directory-based protocols can track what’s in the cache either in a centralized location or with a distributed directory. Typically, even though this requires additional hardware resources to act as a multi-system memory manager, the directory approach scales much better than snooping, and so it lends itself to most multiprocessor systems, including loosely coupled clusters and even Web caching.

More precisely, a cache directory maps every cached data block’s location. Each directory record holds a flag bit and pointers to the location of every copy of a data block. The flag bit, usually called the dirty bit, indicates whether a particular cache can write to a particular data block.

Many different directory plans exist. From a structural point of view, there are three kinds: full map, limited, and linked list. These almost always have one element in common: control structures that ensure that only a single cache can write to a data block at a given time. When that happens, the writing cache is said to have ownership of the data block. Each directory type also provides a way of transmitting a data-change notification through main memory and the caches.

Full-Map Directory

Full-map directories have the singular advantage of being able to store information on every block in memory. This style of directory has pointers equal in number to the number of processors on the system plus two status bits and the dirty bit. The bits that correspond to a processor indicate whether a particular block is present in that processor’s cache. The status bits, which are duplicated in the cache, show whether the block can be written to and whether it is a valid block.
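As a rough Python illustration of that record layout, simplified far beyond what real directory hardware does, each entry carries a presence bit per processor plus the dirty bit, and granting write ownership to one cache tells you exactly which other caches need invalidation messages.

class FullMapDirectoryEntry:
    # Toy full-map directory record: one presence bit per processor,
    # plus a dirty bit saying whether some cache holds the block writable.

    def __init__(self, num_processors):
        self.present = [False] * num_processors   # presence bit per processor's cache
        self.dirty = False

    def record_read(self, cpu):
        self.present[cpu] = True                  # that cache now holds a copy

    def record_write(self, cpu):
        # Give one cache ownership; every other copy must be invalidated.
        to_invalidate = [i for i, p in enumerate(self.present) if p and i != cpu]
        self.present = [i == cpu for i in range(len(self.present))]
        self.dirty = True
        return to_invalidate                      # caches that get invalidation messages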

As you might predict, full-map directories have problems. Because the directory is almost always centralized, processors tend to compete for it and this can lead to a performance bottleneck. Another concern is that searching and updating the directory can put an undue burden on both bus and caching performance.

Limited Directory

Limited directories avoid the memory troubles of full-map protocols. By requiring that only a limited number of pointers be maintained, they keep memory overhead within bounds. Otherwise, the structure resembles that of a full-map directory. When the number of copies of a particular memory block exceeds the number of pointers assigned to track them, the newest pointer replaces an earlier one in a process called eviction, using methods such as first-in/last-out to determine which pointer is discarded.
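The same idea with bounded pointers, again a simplified sketch rather than any real controller’s logic: once the sharer list is full, adding a new sharer evicts an old pointer, and the cache that loses its pointer has to have its copy invalidated or re-fetched.

from collections import deque

class LimitedDirectoryEntry:
    # Toy limited directory record: at most max_pointers sharers are tracked.

    def __init__(self, max_pointers):
        self.max_pointers = max_pointers
        self.sharers = deque()                 # processor IDs caching this block

    def record_read(self, cpu):
        if cpu in self.sharers:
            return None
        evicted = None
        if len(self.sharers) >= self.max_pointers:
            evicted = self.sharers.popleft()   # evict a pointer (one possible policy)
        self.sharers.append(cpu)
        return evicted                         # that cache's copy must now be invalidated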

The problem here is that while, statistically, a limited directory works well, additional processing overhead is needed to avoid dumping data blocks along with primary storage-directory information. Processor thrashing may also happen if the number of pointers is smaller than the number of processors referencing a limited set of data blocks.

Another problem with limited directories is that on scalable point-to-point interconnection networks, it’s very difficult to ensure that the point-to-point broadcast will reach every applicable cache. While thriftier with system resources than other directory or snoopy schemes, limited directories should be deployed cautiously.

Linked-List Directory

The most successful directory scheme is the IEEE Scalable Coherent Interface (SCI), which is based on a doubly-linked list approach. While singly-linked list approaches are doable, SCI has become the predominant directory approach, in part because double linking enables the system to send invalidation messages up and down the chain. The result is faster, more flexible adjustment to cache changes.

With SCI, when a processor calls for a data block, it doesn’t receive a copy from main memory. Instead, it receives a pointer to the most recently added cache copy. The requesting cache then asks for the data block from the cache owning the freshest version of the data. This cache replies by setting a pointer to the requesting cache and transmitting the data block. When multiple data-block requests are received, they are handled in first-in/first-out order.
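Boiled down to a toy sketch that leaves out nearly all of the real SCI protocol, the sharing chain is just a doubly-linked list: memory points at the newest sharer, each new requester becomes the head and fetches the block from the previous head, and invalidations can walk the chain from the head on down.

class SharingListNode:
    # One cache's entry in a (much simplified) doubly-linked sharing chain.

    def __init__(self, cpu):
        self.cpu = cpu
        self.newer = None     # toward the head: more recently added sharers
        self.older = None     # toward earlier sharers

class LinkedListDirectory:
    def __init__(self):
        self.head = None      # memory keeps only a pointer to the newest sharer

    def add_sharer(self, cpu):
        node = SharingListNode(cpu)
        node.older = self.head
        if self.head is not None:
            self.head.newer = node   # old head now points back to the newcomer
        self.head = node
        return node.older            # requester fetches the block from the previous head

    def invalidate_all(self):
        # Invalidation messages walk the chain starting from the head.
        node, invalidated = self.head, []
        while node is not None:
            invalidated.append(node.cpu)
            node = node.older
        self.head = None
        return invalidated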

Of course, hybrid cache coherency systems that use both snooping and directory methods are not only possible, they’re commonly implemented. In 2003, there is no golden rule to determine which approach, or which mix, is ideal for general computing cases.

Cashing in on Cache

As you can tell, cache coherency is difficult to do well. But without cache coherency methods ensuring that there is a consistent global view of cached data, data corruption and lousy system performance are not far away. Like it or lump it, successful high-performance computing, and, with today’s 3GHz-and-faster processors, even single-CPU computing, requires successful cache coherency implementations. It’s as simple, and as complex, as that.

May 21, 2003
by sjvn01
0 comments

Cyber Cynic: The Microsoft-SCO Connection

What is Microsoft really up to by licensing Unix from SCO for somewhere between $10 million and $30 million?

I think the answer’s quite simple: they want to hurt Linux. Anything that damages Linux’s reputation, which lending support to SCO’s Unix intellectual property claims does, is to Microsoft’s advantage. Mary Jo Foley, top reporter of Microsoft Watch, agrees with me. She tells me, “This is just Microsoft making sure the Linux waters get muddier. They are doing this to hurt Linux and keep customers off balance.” Eric Raymond, president of the Open Source Initiative, agrees and adds, “Any money they (Microsoft) give SCO helps SCO hurt Linux. I think it’s that simple.” Dan Kusnetzky, IDC vice president for system software research, also believes that Microsoft winning can be the only sure result of SCO’s legal maneuvering. But he also thinks that whether SCO wins, loses, or draws, Microsoft will get blamed for SCO’s actions.

He’s right. People are already accusing Microsoft of bankrolling SCO’s attacks on IBM and Linux.

But is there more to it? Is Microsoft actually in cahoots with SCO? I don’t think so. Before this deal, both SCO and Caldera had long, rancorous histories with Microsoft.

While Microsoft certainly benefits from any doubt thrown Linux’s way, despite rumors to the contrary, Microsoft no longer owns any share of SCO and hasn’t for years. In fact, Microsoft’s last official dealing with Caldera/SCO was in early January 2000, when Microsoft paid approximately $60 million to Caldera to settle Caldera’s claims that Microsoft had tried to destroy DR-DOS. While Microsoft never admitted to wrongdoing, the payoff speaks louder than words.

The deal didn’t make SCO/Caldera feel any kinder towards Microsoft. A typical example of SCO’s view of Microsoft until recently can be found in the title of such marketing white papers as “Caldera vs. Microsoft: Attacking the Soft Underbelly” from February 2002.

Historically, Microsoft licensed the Unix code from AT&T in 1980 to make its own version of Unix: Xenix. At the time, the plan was that Xenix would be Microsoft’s 16-bit operating system. Microsoft quickly found they couldn’t do it on their own, and so started work with what was then a small Unix porting company, SCO. By 1983, SCO XENIX System V had arrived for 8086 and 8088 chips and both companies were marketing it.

It didn’t take long, though, for Microsoft to decide that Xenix wasn’t for them. In 1984, the combination of AT&T licensing fees and the rise of MS-DOS made Microsoft decide to start moving out of the Unix business.

Microsoft and SCO were far from done with each other yet, though. By 1988, Microsoft and IBM were at loggerheads over the next generation of operating systems: OS/2 and Unix. IBM was supporting the Open Software Foundation (OSF), an attempt to come up with a common AIX-based Unix to battle the alliance of AT&T and Sun, which was to lead to Solaris.

Microsoft saw this as working against their plans for IBM and Microsoft’s joint operating system project, OS/2, and their own plans for Windows. Microsoft thought briefly about joining the OSF, but decided not to. Instead, Bill Gates and company hedged their operating system bets by buying about 16% of SCO, an OSF member, in March 1989.

In January 2000, Microsoft finally divested the last of their SCO stock. Even before Caldera bought out SCO in August 2000, though, Microsoft and SCO continued to fight with each other. The last such battle was in 1997, when they finally settled a squabble over European Xenix technology royalties that SCO had been paying Microsoft since the ’80s.

Despite their long, bad history, no one calling the shots in today’s SCO has anything to do with either the old SCO or Caldera. I also think, though, that there hasn’t been enough time for SCO and Microsoft to cuddle up close enough for joint efforts against IBM and Linux.

I also think that it’s doubtful that Microsoft would buy SCO with the hopes of launching licensing and legal battles against IBM, Sun and the Linux companies. They’re still too close to their own monopoly trials. Remember, even though they ended up only being slapped on the wrist, they did lose the trial. Buying the ability to attack their rivals’ operating systems could only give Microsoft a world of hurt.

Besides, as Eric Raymond’s Open Source Initiative position paper on SCO vs. IBM and Bruce Perens’ “The FUD War against Linux” point out, it’s not like SCO has a great case.

Indeed, as Perens told me the other day, in addition to all the points that have already been made about SCO’s weak case, SCO made most 16-bit Unix and 32V Unix source code freely available. To be precise, on January 23, 2002, Caldera wrote, “Caldera International, Inc. hereby grants a fee free license that includes the rights use, modify and distribute this named source code, including creating derived binary products created from the source code.” Although not mentioned by name, the letter seems to me to put these operating systems under the BSD license. While System III and System V code are specifically not included, it certainly makes SCO’s case even murkier.

SCO has since taken down its own ‘Ancient Unix’ source code site, but the code and the letter remain available at many mirror sites.

Given all this, I think Microsoft has done all they’re going to do with SCO. They’ve helped spread more FUD for a minimal investment. To try more could only entangle them in further legal problems. No, SCO alone is responsible for our current Unix/Linux situation and alone SCO will have to face its day in court.

May 18, 2003
by sjvn01
0 comments

Sue Me? Sue You? SCO, Linux & Unix

On May 12, 2003, SCO tried to break all ties with its Linux past and declared war on the entire Linux community. Specifically, underneath the legal language, SCO claims Linux includes Unix code stolen from, or directly developed from, SCO’s own Unix code. “Therefore legal liability that may arise from the Linux development process may also rest with the end user.” And, “Similar to analogous efforts underway in the music industry, we are prepared to take all actions necessary to stop the ongoing violation of our intellectual property or other rights.” This is a complete turnaround from March, when SCO CEO Darl McBride said SCO’s IBM lawsuit had nothing to do with Linux or open source. Right. The long and short of it now is that SCO is coming after Linux companies and users with hammer and tongs.

McBride immediately backed off, saying, according to an Information Week report, that SCO doesn’t really want to sue Linux users and it wants companies to voluntarily comply with SCO’s intellectual property requirements. As for the language in the May 12th letter? That’s only there because their in-house counsel and their law firm, Boies Schiller & Flexner, made them do it. “We’re the messenger in this case.”

Most people will see this as a difference that makes no difference. While McBride was trying to make nice with users, Chris Sontag, SCO’s VP and general manager of SCOsource, was busy on the interview circuit, serving notice to Linux companies, particularly SuSE and Red Hat, that they’re next. In particular, Sontag has made a point of saying that, despite the UnitedLinux agreements, after reviewing SCO’s SuSE agreements, “I would not characterize them in any form whatsoever as providing SuSE with any rights to our Unix intellectual property.”

SuSE is telling its customers not to sweat SCO’s threats. Joseph Eckert, SuSE’s VP of corporate communications, says “the UnitedLinux code base — jointly designed and developed by SuSE Linux, Turbolinux, Conectiva and SCO — will continue to be supported unconditionally by SuSE Linux. We will honor all UnitedLinux commitments to customers and partners, regardless of any actions that SCO may take or even allegations they may make.”

Eckert goes on, “SCO’s actions are again indeed curious. We have asked SCO for clarification of their public statements, SCO has declined. We are not aware, nor has SCO made any attempt to make us aware, of any specific unauthorized code in any SuSE Linux product. As a matter of policy, we have diligent processes for ensuring that appropriate licensing arrangements, open source or otherwise, are in place for all code used in our products.”

The SCO Splash

Few people expect SCO to win in the courts. Tom Carey, a partner at Bromberg & Sunstein, a Boston intellectual property boutique law firm, who doesn’t expect the case to even make it to court, says, “If you’ve ever played hearts, there’s a tactic called ‘shooting the moon.’ That’s what SCO is trying to do. This is a very low possibility, high reward strategy that has a ripple effect that goes far beyond IBM and impacts the Linux community as well.”

What kind of impact? Dan Kusnetzky, IDC vice president for system software research, observes, “Typically when you talk about FUD it’s about one vendor talking about another product to get them to buy their product. This is the first time that I know of where a vendor basically put out notice that they’re coming after everyone: Hardware, software, and customers.”

The result of this, he believes, will be to “slow down the adoption of Linux in the US. The main winner will be Microsoft, or possibly Sun with Solaris on Intel or the BSD community.” The big loser will be Linux.

Linux users see all of this too. And, they’re outraged. Indeed some of them are challenging SCO to sue them too.

Sue Me? Sue You!

They’re looking at this the wrong way. I expect that in a few days or weeks SCO will reap what they’ve sown and be sued over and over again by other companies.

I am not a lawyer, but SCO has just told their UnitedLinux business and technology partners that the distribution they made together, based primarily on SuSE Linux with considerable help from German Linux developers who had moved from SCO to SuSE, is illegal. And, clearly, SCO is threatening them with legal action. If you were in their shoes, wouldn’t you consider suing first?

But there’s more. SCO has also threatened Red Hat, Penguin Computing, HP, Dell, Sun, and dozens of other Linux-related companies’ customers. Given this, I won’t be a bit surprised if one or more of these Linux companies gets SCO into court first.

Who did what with the code?

But there may be more reasons that SCO will be looking at legal troubles. Even before Caldera bought out SCO’s Unix, SCO was adding Linux functionality to UnixWare.

Specifically, SCO added Linux compatibility to its Unix properties with operating system packages like UnixWare’s Linux Kernel Personality (LKP). The LKP enables UnixWare to run Linux binaries.

So SCO was adding Linux functionality to its own Unix products, and was also considering bringing Linux functionality to its older OpenServer Unix. Given SCO’s own reasoning, could all this Linux functionality be added to Unix without introducing Linux code into Unix?

Look at the history. When Caldera first bought SCO in August 2000, it suggested that it was going to open source a good deal of Unix. That never happened.

But what Caldera did do, as described in a Caldera white paper dated March 8, 2001, with the then new tag-phrase of “Linux and UNIX are coming Together” by Dean R. Zimmerman, a SCO writer, was to try to merge the best features of both operating systems. Early on there’s a line that fits perfectly with open source gospel. “For a programmer, access to source code is the greatest gift that can be bestowed.” And then, getting straight to the point, Caldera declares: “Caldera has begun the task of uniting the strengths of UNIX technology, which include stability, scalability, security, and performance with the strengths of Linux, which include Internet-readiness, networking, new application support, and new hardware support. Caldera’s solution is to unite in the UNIX kernel a Linux Kernel Personality (LKP), and then provide the additional APIs needed for high-end scalability. The result is an application ‘deploy on’ platform with the performance, scalability, and confidence of UNIX and the industry momentum of Linux.”

Isn’t this exactly what SCO is accusing IBM of doing? In SCO’s March 2003 filing, SCO states, “Prior to IBM’s involvement, Linux was the software equivalent of a bicycle. UNIX was the software equivalent of a luxury car. To make Linux of necessary quality for use by enterprise customers, it must be re-designed so that Linux also becomes the software equivalent of a luxury car. This re-design is not technologically feasible or even possible at the enterprise level without (1) a high degree of design coordination, (2) access to expensive and sophisticated design and testing equipment; (3) access to UNIX code, methods and concepts; (4) UNIX architectural experience; and (5) a very significant financial investment.”

Isn’t this what SCO had said they were doing? I don’t see any significant difference. Do you?

Who really owns the Code?

Clearly, before its latest management team arrived, SCO was striving to merge the best of Unix and Linux together. Before former CEO and co-founder Ransom Love left the firm, SCO made a point of telling the world that they were an active open source contributor to Linux; one of the founding members of the Linux Standard Base, a group dedicated to compatibility among Linux distributions and applications; and, of course, a major player in the creation of UnitedLinux.

So, given SCO’s cross-breeding of Linux and Unix, is it not possible that there is GPL code within Unix? This remains unproven, but if it turns out to be true, couldn’t that code have been put there by SCO programmers? After all, SCO still refuses to say exactly what Unix code was incorporated into Linux.

But what about putting the shoe on the other foot? How much did SCO borrow from GPL protected Linux for their Unix operating systems? UnixWare, at the least, was given the power to natively run Linux programs with the LKP.

Is that a violation of the GPL? I don’t know. No one has ever claimed it was. But then, until recently SCO wasn’t claiming that Linux was using Unix’s intellectual property. By opening up this can of worms, it’s possible that SCO may, in the end, find that instead of controlling Linux’s intellectual property rights, they’ll find themselves out of business and Unix’s intellectual property under the GPL.

April 25, 2003
by sjvn01
0 comments

Getting Closer to 99.9999% Network Uptime

Back in May 2002, Cisco Systems announced new software for its 12000 series routers, Globally Resilient IP (GRIP). GRIP is meant to eliminate data loss on the network even if there are circuit failures or human errors. Not a bad trick if they can do it.

Juniper Networks, of course, immediately replied that they have had “zero packet loss” on their routers since 1998. Without making that boastful claim, Alcatel had announced that its non-stop ACEIS router would be out later this year.

What is going on here? Has information loss become that crucial? Is there a real demand for highly available networks? Or are the network OEMs, fighting against a horrid economy, simply creating a high-tech version of the washing powder commercials of my early 1970s childhood, when Tide or Bold would roll out their “new-improved” products?

Some of it is smoke and mirrors. Sorry, Juniper, no one always gets a perfect score in packet exchanging. As for Cisco and Alcatel? They won’t have product out until the third quarter. Other companies, better known for their work with ATM, such as Nortel, or with terabit switching, such as Avici, are also moving into the extreme-reliability market.

Is there really a market for IP networking that strives to reach, and surpass, 99.9999% reliability and uptime? As a matter of fact, there is, but it’s not the traditional Internet, or even the extranet or intranet, markets. It’s the carrier market that’s driving the demand for previously unheard-of uptimes for IP.

In the past, IP has always been a best-effort networking solution. IP would work on anything, even carrier pigeons (RFC 1149), but while the message more often than not would get through, sometimes your connection would be slow, sometimes it would be fast, and sometimes you’d wonder if the backbone provider really was using carrier pigeons.

Because of its low cost and increasing reliability, though, IP has become more and more attractive to all carriers, not just ISPs and LAN administrators. So it is that, according to David Willis, Meta Group’s VP of Global Networking Strategy, there really is “clearly a need for a more reliable IP network. Carriers are migrating more and more of their traffic to IP. AT&T is carrying more and more voice on their IP backbone.”

In the past, even backbone providers and major ISPs have used the slogan, “every router needs a buddy.” What that means for those who don’t spend their time in American Network Operations Centers is that every router had a hot spare running beside it. That was fine if, when a failure finally happened, your customers could live with the minutes or hours it took as the stand-by router built up its route tables and took over the network load.

When your customer is Cable & Wireless, Equant, Infonet, or WorldCom, though, minutes don’t cut it and hours are completely unacceptable. An acceptable outage time in SONET for voice, according to Seamus Crehan, senior analyst at the Dell’Oro Group, is a sub-15-millisecond recovery. Why such a harsh standard? Because breaks in voice traffic directly impact voice carriers’ bottom lines.

Unfortunately, according to Tim Smith, an analyst for Gartner Dataquest, IP that can handle that kind of load isn’t here yet. Willis agrees: “This wave of carrier-grade IP services is a new movement. It’s not fully ready yet.”

So why are the carriers making this move? Cost. IP, even when running on the highest of high-end routers, looks to be cheaper than SONET and ATM in the long run. As Val Oliva, L2/3 product marketing manager for Foundry Networks and a member of the 10 GEA (10-Gigabit Ethernet Alliance) board of directors, says, “Carriers are looking for ways to increase profit; they will need a new set of systems that can lower their system cost and cost of operations to increase returns.”

And, sooner rather than later, IP will provide those new systems. Willis thinks, “Cisco, which always leads in anything related to core routing, will lead. Juniper, which had been taking market share from Cisco, but is now falling back, will also be in the hunt. Alcatel and Nortel, which already have strong carrier relations thanks to their ATM switching background, will also be in the race.” He also adds that any company that’s very active in MultiProtocol Label Switching (MPLS) will have a shot.

Another question in carrier-grade IP will be whether the future will be IP running on top of existing ATM and SONET networks or on Ethernet. “Ethernet,” you ask? Yes, Ethernet.

10-Gigabit Ethernet (GbE) will be emerging as a serious competitor for OC-192 SONET by the end of 2002. If anything, 10-GbE will be in place before high-reliability IP routers become commonplace. In any event, both technologies are likely to support each other in advancing IP’s case to carrier-grade customers.

Only time will tell if Oliva is right when she says that, “Ethernet is IP’s best friend, and in the end, IP wants Ethernet.” What is certain is that IP is coming to the carrier world.