Practical Technology

for practical people.

September 30, 2002
by sjvn01

The 802.11 Business

Once upon a time, there was talk of HomeRF or HiperLAN becoming the dominant wireless LAN (WLAN) technology. Not anymore: you can stick a fork in them, they’re done. 802.11 technology is well on its way to dominating the WLAN market the way Ethernet came to dominate wired LANs.

Why? It’s faster and has more range than HomeRF, it’s much farther along in the market than HiperLAN, and despite the fact that its built-in security protocol, Wired Equivalent Privacy (WEP), can be broken relatively easily, it’s enormously popular. Analysts universally predict that the future of WLANs belongs to 802.11. Ken Haase, general chairman of the HomeRF Working Group, even tacitly admits that HomeRF’s future is in telephony, not data networking.

Big Market Getting Bigger

WLANs aren’t just a niche market anymore. Allen Nogee, In-Stat/MDR’s Senior Analyst for Wireless Component Technology, says, “So far this year (first half, 2002) the numbers for all hardware (access point (AP) and Network Interface Card (NIC)) is 7.24 million for 802.11b, and 210 thousand for 802.11a.” For all of 2002, he predicts that, “it will probably be 16 – 17M.”

Sound impressive? Some think In-Stat’s numbers are on the low side. Tim Mahon, a CS First Boston analyst, sees 1.5 million 802.11a chips alone shipping in 2002, plus another 200,000 combo chips, and he predicts that 11 million combo chips will ship in 2003. Richard Redelfs, CEO and president of Atheros, the 802.11a chip OEM, thinks Mahon’s numbers are closer to the truth. Smiling, he says, “It’s higher than half-a-million, we would be out of business if it was that low.”

Today, the market belongs to 802.11b. Navin Sabharwal, Director of Residential & Networking Technologies for Allied Business Intelligence, observes, “The market is basically 95% 11b, that will change. 11a will continue to make modest progress. 2003 will see a radical shift with 11b (solo NICs and APs) going down to 55%. The bulk of this growth will be in dual band.”

Chris Neal, Research Director for Sage Research, agrees. In Sage’s focus group studies of medium and large businesses, Sage found that CIOs are “thinking about multi-standard cards. They’re cautious about 802.11a, because of limits on 2002 capital expenditures so they’re dragging feet until spring of next year. Eventually, though, the bandwidth needs are such that they’ll go there, but not immediately.”

Driving 802.11

Several factors are driving 802.11. Gemma Paulo, senior analyst for Enterprise and Residential Communications at In-Stat/MDR, says, “In the ‘business’ space, WLANs are popular in the verticals, namely education, healthcare, retail, manufacturing/warehousing, etc.” Sabharwal thinks, though, that enterprise players are also driving the business market. He says the “number one motivation for the enterprise is that wireless is designed for notebooks and PDAs. This means that workers are no longer tied to their desktops. As companies move to laptops this lets them compute on the road, campus, or conference room. It doesn’t make sense to tether that person with an Ethernet cable.”

Paulo goes on to say that, “In the ‘home/SOHO’ space, WLANs are very popular among broadband users worldwide, with Asia Pacific really driving growth, especially Japan and South Korea. 802.11b equipment is cheap & reliable, and until consumer electronics companies actively try to embed 802.11a chipsets into CE equipment, 802.11b or 802.11g will work well for the home, as 2.4 GHz WLAN technologies are a bit more robust going through walls. We expect that eventually the desire to distribute multimedia in the home will drive a mass embedding of WLAN into CE devices, but this will not happen until 3 or 4 years down the road.”

Redelfs doesn’t think it’s going to take anything like that long, and he believes 802.11a, with its 54Mbps, not ‘b,’ will move wireless networking into consumer electronics. “There will be a market for ‘a’ in video distribution in situations where b isn’t fast enough. Sony has already done demonstrations of an 802.11 empowered video server with TiVo style technology to transmit video and audio to flat screen televisions, thus avoiding co-axial costs and hassles.” He goes on to say that Sony is on the verge of announcing 802.11a empowered home theater products.

As a result, WLAN chipset sales have increased dramatically since 802.11b’s introduction in 1999, and the market shows no signs of slowing down. Businesses are the largest market for WLAN chipsets, but residential use is growing even faster. Indeed, Sabharwal says that the consumer/SOHO markets have “grown explosively,” with growth in the retail channel split between small business and residential.

The Players

Many companies want a part of this market. The leading chip OEMs are Texas Instruments (TI), Intersil, and Atheros, but many other companies, more than 40 altogether according to Sabharwal, want a piece of the WLAN pie. These include Resonext, LinCom Wireless, Agere Systems, Atmel, Cirrus Logic, Cisco, Intel, Philips, RF Micro Devices, Raytheon, and Symbol Technologies.

Not all of them will make it. Ed Sperling, editor in chief of Electronic News, sees “a lot of investment, but no one seems to be profiting that much.” Sabharwal comes right out and says that “the ones who are shipping today, Intersil, Philips, Atmel, Agere, TI, Maxim, Atheros, are in a better position” and that Intel will be a player if it keeps funding the effort. But it will be “hard for others to break out. Some won’t even be able to ship. We’re starting to see a shakeout. One thing I’ve seen is that there’s no more interest in new people entering this space since the middle of this year. In Q4/Q1 we’ll see a shakeout.”

The competition in the WLAN-chipset marketplace will continue to be fierce. With 802.11b profit margins already razor thin, grabbing market share in the ‘a,’ ‘g,’ and combo-chip markets will become all-important in 2003, and being first to market with combo chips will play an important role. Sperling thinks “most vendors will likely support chipsets that support all the standards since the cost of chips doesn’t add much to cost of devices and customers don’t care about standards, they just want devices that work.”

The Technologies

802.11b uses Complementary Code Keying (CCK) to achieve bit transfer rates of 5.5 and 11Mbps in the 2.4GHz band. CCK works by using Differential Quadrature Phase Shift Keying (DQPSK), which encodes data as four phase shifts, and by making each DQPSK ‘word’ carry additional data. Unfortunately, the 2.4GHz band already must contend with interference from microwave ovens, 2.4GHz telephones, and other 802.11b networks. And because 802.11b has only three non-overlapping channels for simultaneous data transfers, its throughput can plummet with multiple users.
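
To make the modulation concrete, here is a minimal Python sketch of the DQPSK idea: each two-bit symbol is sent as a phase shift relative to the previous symbol, rather than as an absolute phase. The bit-pair-to-phase mapping below is illustrative only; the actual 802.11b mapping, and the CCK code words layered on top of it, are defined in the standard.

```python
import cmath
import math

# Illustrative DQPSK table: each 2-bit symbol becomes a phase SHIFT
# relative to the previous symbol (mapping chosen for illustration;
# the real 802.11b table lives in the IEEE standard).
PHASE_SHIFT = {
    (0, 0): 0.0,
    (0, 1): math.pi / 2,
    (1, 1): math.pi,
    (1, 0): 3 * math.pi / 2,
}

def dqpsk_modulate(bits):
    """Encode a bit list as complex symbols via differential phase shifts."""
    phase = 0.0
    symbols = []
    for pair in zip(bits[0::2], bits[1::2]):
        phase = (phase + PHASE_SHIFT[pair]) % (2 * math.pi)
        symbols.append(cmath.exp(1j * phase))
    return symbols

print(dqpsk_modulate([0, 1, 1, 1, 0, 0, 1, 0]))
```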

On the other hand, 802.11b is shipping. 802.11g, another 2.4GHz standard, is another matter. After a long standards fight between Texas Instruments and Intersil over whose technology should be used to deliver service at the 22Mbps range, 802.11g still hasn’t achieved final IEEE approval. Approval is now expected in May 2003.

To finally get 802.11g out the door, Intersil and TI agreed to disagree. So it is that ‘g’ has two mandatory modes: CCK at 2.4GHz for ‘b’ compatibility, and 802.11a’s Orthogonal Frequency Division Multiplexing (OFDM) for a maximum of 54Mbps. OFDM achieves higher throughput by breaking a single wide frequency channel into several multiplexed sub-channels.
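
The principle behind OFDM fits in a few lines of numpy: one symbol is assigned to each narrow subcarrier, and an inverse FFT combines them into a single time-domain signal whose subchannels overlap without interfering. This is a bare-bones sketch of the math only, not the real 802.11a waveform, which uses 52 active subcarriers plus pilots and a cyclic prefix.

```python
import numpy as np

# Eight subcarriers keep the demo readable; 802.11a uses 52 active ones.
symbols = np.array([1+1j, 1-1j, -1+1j, -1-1j,
                    1+1j, -1-1j, 1-1j, -1+1j]) / np.sqrt(2)

# Transmitter: the inverse FFT merges all subcarriers into one
# time-domain block -- the "several multiplexed sub-channels."
time_signal = np.fft.ifft(symbols)

# Receiver: a forward FFT separates the subcarriers again.
recovered = np.fft.fft(time_signal)

# Orthogonality means no inter-carrier interference on a clean channel.
assert np.allclose(recovered, symbols)
print(np.round(recovered * np.sqrt(2), 3))
```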

In addition, ‘g’ comes with a pair of optional, and incompatible, modes to achieve throughput in the 22Mbps range. These are Intersil’s CCK-OFDM mode, with a maximum throughput of 33Mbps, and TI’s Packet Binary Convolutional Coding (PBCC-22), with a throughput range of 6 to 54Mbps. PBCC combines codeword scrambling with binary convolutional coding to achieve its higher theoretical throughput rates.

By sticking to the 2.4GHz band, ‘g’ has the same interference problems as ‘b.’ 802.11a, on the other hand, uses OFDM in the relatively interference-free 5GHz band. In addition, as Redelfs observes, the second generation of 802.11a supports 13 channels in North America and 19 in Europe, giving it far more effective bandwidth and making it much more scalable for IT uses.

In practice, no 802.11 variant actually hits its maximum possible throughput. Network overhead and distance attenuation combine to knock ‘b’s speed down to about 2 to 4Mbps in most office environments, and ‘a’s to about 20Mbps. The exact performance and maximum range depend on the distance between the AP and the NICs, the building’s materials, and interference. A site survey is a must for any large-scale WLAN deployment.
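
That gap between rated and delivered speed is worth quantifying. A quick back-of-the-envelope calculation using the figures above, taking 3Mbps as the midpoint of the 802.11b range:

```python
# Rated vs. typical delivered throughput, using the article's figures.
cases = [
    ("802.11b", 11.0, 3.0),   # Mbps: rated, typical office (2-4Mbps midpoint)
    ("802.11a", 54.0, 20.0),  # Mbps: rated, typical office
]
for name, rated, real in cases:
    print(f"{name}: {real / rated:.0%} of the rated speed survives "
          "overhead and attenuation")
# 802.11b keeps roughly a quarter of its rated speed; 802.11a over a third.
```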

Typically, an 802.11 chipset consists of a radio, a baseband processor, a Media Access Control (MAC) controller, internal logic to encode and decode its supported communications modes, RF noise filters, an amplifier, and support for both PCI and CardBus interfaces.

At this point, this is usually done with two or more chips, but most major vendors are working toward single-chip CMOS designs that will require a smaller die size and less power and board space. In addition, zero-conversion designs, like Intersil’s PRISM 3 chips, can perform direct down-conversion (DDC) of RF signals without requiring a step-down to intermediate-frequency stages before conversion to the actual baseband signal. This also reduces the chipset cost and helps make single-chip CMOS designs possible.
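
The signal-processing idea behind zero-conversion (zero-IF) receivers is easy to sketch: multiply the incoming RF by a local oscillator at the carrier frequency and low-pass filter the result, landing directly at baseband with no intermediate-frequency stage. The numpy toy below illustrates the general principle only; the frequencies are arbitrary, and it is not a model of Intersil’s PRISM 3 design.

```python
import numpy as np

fs = 1_000_000       # sample rate in Hz (toy value, not a real radio)
f_carrier = 100_000  # "RF" carrier frequency
f_data = 5_000       # baseband signal of interest
t = np.arange(2048) / fs

baseband = np.cos(2 * np.pi * f_data * t)          # what was transmitted
rf = baseband * np.cos(2 * np.pi * f_carrier * t)  # riding on the carrier

# Zero-IF receive: one mix with a local oscillator at the carrier...
mixed = (rf * 2 * np.exp(-2j * np.pi * f_carrier * t)).real

# ...then a crude low-pass filter (moving average) removes the
# double-frequency image, leaving the baseband signal directly.
kernel = np.ones(64) / 64
recovered = np.convolve(mixed, kernel, mode="same")

# Correlation close to 1.0: baseband recovered with no IF stage.
print(np.corrcoef(baseband, recovered)[0, 1])
```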

This will be especially helpful in combo chips, which most analysts think will displace pure ‘b’ chips as the best sellers in 2003 and 2004. At this time, though, only sample quantities of combo chipsets are available. Atheros claims to have the lead, while TI and Intersil, devoted to their ‘g’ chip families, are known to be lagging behind other vendors.

Nogee, though, thinks, “802.11g does have a place. It will eventually replace most 802.11b. It has better range than “a” and is backward compatible with “b.” If you could get higher speeds with very little price penalty, wouldn’t you go for it?”

Some vendors, though, want to deliver higher throughput today without the IEEE’s blessing. TI, for example, has recently shipped a PBCC-enabled 802.11b chipset, aka ‘b+,’ with claims that compatible ‘b+’ APs and NICs will reach speeds of up to 22Mbps. And since its beginning, Atheros has supported ‘turbo’ modes that enable its ‘a’ chipsets to hit speeds over 100Mbps. Of course, these are all proprietary and incompatible approaches.

WLAN Futures

One way or the other, it seems clear that some 802.11 variant will emerge on top. The question: which one? Some analysts think combo chips, perhaps even supporting b, g, and a on a single chip, will be the wave of the future. Others think that, delays and all, ‘g,’ with its built-in ‘b’ backward compatibility, will yet win out.

While WLAN chipset volume is high, and will only go higher, profits are another matter. One trouble spot is that although vendors shipped more WLAN chipsets in 2001, the revenue the chipsets generated fell because their per-unit prices dropped.

The reason, according to Nogee, is that “Once the Taiwanese manufacturers started creating chips in large numbers, prices dropped fast. The rate of the fall in prices has slowed, but prices continue to drop. Also, with many more players now, there is much competition, and generally it’s on price more than anything else.”

Another potential problem is users becoming frustrated with incompatibilities. Nogee notes that, “The hodge-podge mix of security standards is the biggest problem now. Five 802.11b devices might all use a different security standard, and it’s almost impossible to know if one will work with the other. I think most people know that 802.11a at 5GHz won’t work with 802.11b at 2.4GHz. But Wireless Ethernet Compatibility Alliance’s (WECA) stance with first calling 802.11a Wi-Fi5 and now using Wi-Fi for all 802.11 technologies, hasn’t helped with the standards confusion.”

Eventually UltraWideBand (UWB), a non-IEEE technology, may play a role in the WLAN space. Indeed, Redelfs says that “UWB is interesting technology,” and that “Atheros isn’t an 802 company, we view ourselves as a wireless company, we’ll build UWB if it makes sense.”

For the next few years, though, one way or the other, one standard or the other, 802.11 will rule both home and business WLAN.

The Reseller Take

For resellers, the bottom line is that you should look for NICs and APs with combination chipsets. I can’t see any one of these technologies dominating the marketplace anytime soon.

As the price-points drop, I expect the wireless market to pick up even if the rest of the economy continues to flop around like a fish out of water. Whether they like it or not, customers will need to replace their aging network infrastructures and move to new offices. When they do so, the new high-speed wireless standards will attract customers who don’t want to spend any more time dragging cable.

Specifically, I believe that combination a/g or a/b+ devices will sell well. I also expect though that the margins will shrink quickly. The key to making money at WLANs thus won’t be in selling equipment; it will be in network installation, integration and maintenance. With a complete installation and administration package in hand, I foresee a nicely profitable year ahead for network integrators.

September 21, 2002
by sjvn01

Instant Messaging Clients

Somehow, some way, people who are new to Linux have gotten the idea
that Linux has limited IM choices. Since the Unix family was the first
to have popular IM clients (with “talk” leading the way), that’s more
than a little silly. It is true that if you want the latest AOL
Instant Messenger (AIM) features or MSN Messenger you’re out of luck,
but there are many other clients to choose from, and some will let you
talk to your buddies whether they’re on AIM, MSN, or even Yahoo!.

That last part is very important. People sometimes think that IM
clients are chosen for their technical excellence or features. No,
they’re not. Forthcoming research from Ferris Research shows conclusively
that we choose our IM clients based on what services our friends and
coworkers are already using.

If it’s for business, many other factors come into play, such as
security, message archiving, logging, and interoperability. Even so,
I suspect that the services that the CIOs and CEOs use at home
probably get as serious a look as the corporate IM packages.

Linux IM clients do tend to have fewer features than their Windows
counterparts. On the other hand, some IM users aren’t crazy about IM
clients that include video conferencing, file transfers, games, N’Sync
wallpaper, and the kitchen sink. If all you really want is good, solid
IM service that will let you talk to the people you want to talk to,
then there’s sure to be a Linux IM client for you.

How I tested

While this isn’t a comprehensive survey of Linux IM clients (by my
count, there are at least a dozen ICQ clients alone), it is an
overview of some of the best and most notable IM clients available
today.

I connected, when supported, with users on AIM, Jabber, MSN, and
Yahoo! servers for chatting and any other basic functionality that
the client claimed. To all the developers’ credits, I rarely found a
feature claim that their clients didn’t back up to at least a usable
level.

That said, most of these programs are still in beta, and you will run
into glitches. None of these blemishes ever got in the way of using
the clients for their main purpose, but if you push their limits,
don’t be surprised if you run into some oddities. None of them,
however, were so bad that simply closing and reopening the client
didn’t repair the problem of the moment.

Another thing to keep in mind is that if you expect these text
messaging clients to also work as videophones (as their Windows
cousins do), you’re in for a disappointment. Yes, you can do text
messaging. Yes, you can do file transfers with most of the clients.
But if you want to see and hear grandma in Nebraska, none of these
clients are able to do that job as well as the Windows clients. For
my tastes, that’s not a problem. If I want to do video-conferencing, I
want a real video-conferencing program, not an overloaded IM
client. Your viewage may vary.

The programs

AIM

If you want it, you can have AIM
running on your Linux workstation today. You might not want to,
though. The latest Linux edition dates from August 2001 and is several
iterations behind its Windows big brother. Last spring and summer, AOL
was taking AIM on Linux seriously. Today is a different story. While
no one at AOL would come right out and say the project is dead, it’s
pushing up posies.

Still, once installed, with a modified GTK library, it does give you
all the AIM basics, and if that’s all you need, it’s all you need. It
should run on most Linux desktops of July 2001 or later vintage.

Gaim

You may not want to bother with AIM for Linux, though, because Gaim
outperforms it. This Open Source IM client works with AIM, ICQ, Jabber, MSN,
Yahoo! Messenger, and more IM systems than any of the others. If you
want one client to talk to everyone, Gaim is it.

The newest version, despite its beta number, is much more stable than
previous versions and is now as steady as any IM client I’ve
encountered. While Gaim is meant for GNOME, you can run it flawlessly
with KDE with the appropriate tweaks.

It’s not as feature-packed or as pretty as Trillian on Windows or Epicware’s Open Source Fire.app on
Mac OS X, but another point in Gaim’s favor is that you can add to its
utility with plugins like a spelling checker or an RC5 encryption
system. You can only encrypt sessions between Gaim clients, but more
and more business users want that kind of protection, and relatively
few clients deliver it today.

Everybuddy

Everybuddy is
another Open Source project that tries to talk to all major IM systems
and it does a pretty fair job of it. While not as full-featured as
Gaim and slightly rougher around the edges, it’s a fine program in its
own right.

Transferring files over it can be daunting, with some uploads and
downloads unsupported. For example, the site says you can download
files from your MSN friends, but, in the current RPM, you can’t. That
feature is still being tested.

On the other hand, Everybuddy does have a few interesting features
that, to the best of my knowledge, are all its own. For example, you
can use a filter with it so that you can try to talk to someone
in another language using AltaVista’s Babelfish as your translator.
Thanks to Babelfish, this is more amusing than useful, but it does
show that the developers have an eye to the future of IM.

Gabber

Gabber is a
GNOME-based IM client for the Jabber family of Open
Source, XML-based IM programs. Jabber is the most popular IM service
after the corporate threesome of AOL, Microsoft, and Yahoo!. You can
use gateway programs with it to talk to AIM, MSN, and Yahoo! users,
with the usual proviso that these may not always work.

Gabber, unlike Gaim, is much more picky about running in GNOME than
KDE. While I was finally successful in getting it to run in KDE, more
casual users should be running GNOME if they want to use Gabber. That
said, once up, Gabber ran fine, although I did run into some odd
crashes. Still, Gabber is a young program and shows promise,
especially with its very attractive user interface.

Kinkatta

Kinkatta is a
pure AOL IM client without the other IM service trimmings. It is a
solid, reliable client.

A labor of love by chief developer Benjamin Meyer, Kinkatta has a few
features that other clients don’t have, including the ability to print
directly from the chat window and some advanced message logging
features.

Licq

The ICQ for Linux site declares that Licq is the best ICQ
client around. Who am I to argue? I’ve tried the others, too, and if I
had to have one client and it had to be just for ICQ, Licq is the one
I’d pick.

Why? Well, not to put too fine a point on it, but the program just
shows that more elbow grease has been applied to its look,
functionality, and speed than most of the other ICQ
clients. Specifically, I like the skin support, the file transfer
mechanism, the user search capacity, and on and on. If you want an ICQ
feature and it’s not in Licq, it may not be worth the having.

Yahoo! Messenger

One of the major IM companies, Yahoo!, is taking Linux IM
seriously. Its neat Linux client, while not as fancy as its Windows
clients, includes such bells and whistles as user profiles, stock
reports, and email access.

Of course, using the Web and HTTP for this instead of packing it all
into an IM protocol helps a lot. The program is, however, optimized
for Netscape. If you use Konqueror or another browser, you may have to
do some tweaking to get Yahoo! Messenger and the browser to
work well together.

Yahoo!, unlike AOL, is updating its Linux client on a regular
basis. The latest build is from this summer, and I’m told newer
versions will be out shortly. Alas, it’s not Open Source. Despite
that, YM actually has the widest OS support of all these programs. In
addition to its Linux builds, YM also comes in versions for SunOS and
FreeBSD.

YM is also limited in that it only works with Yahoo!. The good news
about this, though, is that YM will still be working come the day
that the more universal IM clients are having fits with the major IM
services.

The one for you?

Again, it really depends on what services your friends are on. For ICQ
fans, Licq is the one to beat. For personal use, though, and as
someone with friends and coworkers on all the IM networks, Gaim is my
favorite.

If you want a business IM client that works with the outside world, YM
deserves your careful attention because, unlike AIM- and MSN-dependent
clients, it’s the most likely to work today and tomorrow.

You should also think about Gabber because Jabber seems poised to
become a major, open IM server force in its own right. Its other
advantage is that if you want an internal business IM system, you can
simply install a Jabber server, and, with the right firewall settings,
you can have your own internal Open Source IM system for a minimal
investment.
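
Because Jabber is just XML over a TCP stream, even the wire format is
easy to show. The Python sketch below illustrates the shape of a
Jabber session against a hypothetical in-house server; the host name
and buddy address are invented, and a real client would also have to
complete Jabber’s authentication handshake before the server accepted
the message.

```python
import socket

# Hypothetical in-house Jabber server; 5222 is the standard client port.
HOST, PORT = "jabber.example.lan", 5222

# A session begins by opening an XML stream to the server...
stream_open = (
    "<stream:stream to='jabber.example.lan' xmlns='jabber:client' "
    "xmlns:stream='http://etherx.jabber.org/streams'>"
)
# ...and a one-line chat is a single self-contained XML stanza.
message = (
    "<message to='alice@jabber.example.lan' type='chat'>"
    "<body>Meeting moved to 3pm.</body></message>"
)

sock = socket.create_connection((HOST, PORT))
sock.sendall(stream_open.encode("utf-8"))
# (authentication stanzas would go here in a real client)
sock.sendall(message.encode("utf-8"))
sock.close()
```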

Mission interoperability improbable

Gaim’s one big problem, along with all the universal clients, is that
the major IM server companies, AOL (which owns both AIM and ICQ),
Microsoft, and Yahoo!, have no reason to want unauthorized clients to
use their systems. Worse still, they have several good reasons to
block IM programs like Gaim, Everybuddy, and Kinkatta from their
servers: advertising revenue from their proprietary clients, traffic
on the last mile to their servers, perceived security holes, and
cross-licensing deals with other software vendors, such as AOL
enabling Lotus to use AIM servers with Lotus’s Sametime client.

Because of this, AOL and Microsoft have both blocked access to their
servers from non-authorized clients at times. While users want
interoperability, IM companies aren’t interested in supporting clients
that don’t contribute to their bottom line.

Currently, message protocol changes are used to block unwanted
clients. AOL, for example, uses two protocols for AIM. OSCAR is the
proprietary protocol, and there is no published specification for it.
IM clients that use OSCAR, like Gaim, can find themselves blocked from
the service if AOL fiddles with the protocol. So far, the free IM
client programmers have been successful in reverse engineering these
changes so that new versions of their clients will work with AIM
again. TOC, on the other hand, is a simpler, well-documented Java
protocol that AOL uses in its Java-based “Quickbuddy” AIM client, so
many free AIM clients, like Everybuddy, use TOC.

The bad news for TOC developers is that AOL hasn’t worked on TOC for
some time, and it’s not nearly as functional as OSCAR. For example, a
TOC-based client can’t support buddy icons or voice. On the other
hand, AOL hasn’t shown any signs of blocking access to its AIM servers
with TOC, while OSCAR is changed fairly often.

While open protocols like Jabber and Simple
(which is likely to be adopted by AOL and Microsoft) offer a way out
of this programming problem, it’s not likely to stop IM service
providers from eventually blocking non-authorized IM clients using
server-based authorization schemes, like Microsoft Passport and
Project Liberty, which is supported by AOL.

Editor’s addendum: Console Clients

This review was adapted from a NewsForge article which only
discusses GUI IM clients, so I want to put in a word for some of the
more popular console clients.

The Unix console, of course, is where realtime Internet
communication began, and you’ll find talk/ytalk/etc. in most *nix
distributions if you need it to talk to someone or if you and a friend
just want to feel retro.

The more mainstream console IM clients, like the GUI clients, are
divided between those that are made to connect to one IM system and
those that work with several.

mICQ

mICQ is the
grand-daddy of console ICQ clients. It still has the same minimalist
look and IRC-like interface that it had when I used it years ago, but
has developed a respectable feature set over the years. If you want
simple ICQ access, it may be for you.

naim

naim is a
wonderful AIM client with a unique and easy-to-use interface. Since
it uses TOC, it’s continued to work while other clients have scrambled
to catch up to protocol changes. If you want a simple, stable AIM
client, naim is it.

Ari’s Yahoo Client and gnuyahoo

The traditional choice for dedicated Yahoo! chatting on the console
is Ari’s Yahoo Client. Unfortunately, it’s recently become a victim of
Yahoo!’s ceasing to support clients based on older versions of
Messenger. Until that’s worked out, you may want to try your luck
with the newer gnuyahoo.

centericq

IMHO, the greatest of all console IM clients is the multi-protocol
centericq.

I’ve sometimes wondered why a good console Jabber client has never
appeared. The only one that’s even worked for me is IMCom, which is rather
primitive. I was using Everybuddy when I first heard about Jabber,
and I expected a console Jabber client to appear quickly; I thought it
would be a common geek itch waiting to be scratched. Unfortunately,
nothing surfaced, and when I cut back on my use of GUI applications, I
ended up with three screen windows devoted to IM clients.

centericq, despite what the name implies, came to my rescue at
last, not as a Jabber client, but as the child of someone who was
willing to add support for many protocols on his own. centericq now
boasts support for ICQ, Yahoo!, MSN, AIM, and IRC, and has an
excellent interface and a huge number of features and configuration
options. The centericq mailing list is lively, and development is
continuous. No matter which IM service(s) you use, give centericq a
try.

A version of Instant Messaging Clients was first published in FreshMeat.

August 2, 2002
by sjvn01

Caldera Buys SCO Unix & Professional Services

It’s the beginning of a new era and the end of an old one. Last night, Caldera, a leading Linux distributor, bought the Server (Unix) and Professional Services divisions of the Santa Cruz Operation (SCO), the long-standing leader in Intel Unix. With this single move, Linux and Unix are unified for the first time under one company. The enterprise operating system world will never be the same. As reported earlier by Smart Partner, the two companies have been close to a deal for weeks. In the end, Caldera gained not only the Unix-on-Intel Server Software Division–which consists of UnixWare and OpenServer along with SCO’s existing reseller partnerships and contracts in SCO’s traditional global markets of small and midsize businesses, retail, telecommunications and government–but also the Professional Services division as well.

This Professional Services division provides consulting, support and installation/integration support to both direct customers and SCO’s partners. This division, under Jim Wilt, current president of the SCO Professional Services division, will operate as a separate business unit of Caldera.

As for joining together time-tested Unix and low-cost, open-source Linux, Caldera says it will offer the industry’s first comprehensive Open Internet Platform (OIP), combining Linux and UNIX server solutions and services globally. Specifically, that will include Caldera’s Linux line and SCO’s OpenServer and UnixWare lines. The combined OIP product line will be unveiled at SCO’s premier partner show, Forum2000, on Aug. 23 and made available through SCO and Caldera’s existing 15,000 worldwide partners.

The Fine Print

Caldera Systems is forming a new holding company, Caldera Inc., to acquire the two divisions. The deal includes those divisions’ employees, products and channel resources.

In the deal, SCO will receive 28 percent of Caldera Inc. This is estimated to be an aggregate of approximately 17.54 million shares of Caldera stock (including approximately 2 million SCO options shares reserved for SCO employees joining Caldera), and $7 million in cash. Simultaneously, Ray Noorda’s venture capital firm, The Canopy Group, Caldera Systems’ major stockholder, has agreed to loan $18 million to SCO. Presuming that Caldera, Inc.’s stock is valued at $5.00 a share–the average price of Caldera Systems and SCO stock before the market opens on Aug. 2–the deal is worth approximately $84 million in stock for a total, not counting the loan, of $91 million.

SCO also will retain its highly valued Tarantella, a popular application service provider (ASP) division, and the SCO OpenServer revenue stream and intellectual properties. In the last quarter, SCO OpenServer revenue amounted to $11.1 million. After expenses, the net proceeds to SCO will be approximately 55 percent of future SCO OpenServer revenues. The investment banks of Chase H&Q for SCO and Broadview for Caldera Systems helped arrange the deal.

Both firms’ boards of directors have unanimously approved the acquisition. The final decision is subject to the approval of Caldera Systems and SCO’s stockholders. If all goes well, the deal should close in October 2000.

Caldera Inc. will be headquartered in Orem, Utah. Following the acquisition, Ransom Love, current president and CEO of Caldera Systems, will become CEO of Caldera Inc.; and David McCrabb, current president of the SCO Server Software Division, will become president and COO of Caldera Inc. Finally, Doug Michels, president and CEO of SCO will become a member of Caldera Inc.’s board of directors.

A View From The Top

Leadership at both companies is happy about the deal. Love described the deal as an “industry-changing event that puts Caldera front and center as the answer to the enterprise question.” For those who might worry about what this means for their existing SCO operating systems, he states that “Caldera is fully committed to supporting and servicing the SCO OpenServer and UnixWare communities.”

David McCrabb elaborates on Love’s statements saying that “Caldera Inc. will incorporate a worldwide network of sales and support offices, a strong commercial Unix system business and a rapidly growing open-source company. This combination will be a force to contend with in the worldwide market for Internet solutions on high volume platforms.”

As for what remains of SCO, Tarantella, Michels believes that, “this transaction enables us to invest in the exciting growth opportunities … by the continued attractiveness of thin-client computing and by the accelerating adoption of the ASP” model.

The Reaction

Some are welcoming the change with open arms. Many believe that uniting Unix with Linux can only strengthen Linux’s rising tide. According to IDC, Linux has 1.4 million server licenses, far more than all Unixes put together, but SCO’s servers are located in the core of many older businesses. While Linux’s strength is on the Internet, SCO still runs many small to midsize businesses, governmental departments and vertical markets.

From a Caldera supporter’s viewpoint, Caldera gets a strong foothold in traditional brick-and-mortar businesses while maintaining its place in Linux’s e-commerce and Internet strongholds. In addition, Caldera gets access to an outstanding, albeit somewhat ragged after the difficult times of the last year, reseller channel.

Some people, including one senior executive at a disenchanted SCO reseller, however, don’t think Caldera will be getting much of a deal. He says that, “Anyone who has attended the last few SCO Forums can tell you that the SCO channel has just about disappeared. The combination of competition from NT, the attempt to force Open Server customers to convert to UnixWare and direct selling by SCO has just about killed off every VAR and integrator.”

Others–such as David Gloria, SCO Premier Reseller with Computer Integrators and president of Ixorg, an organization of SCO resellers–are more optimistic. Gloria believes “that the real value that Caldera will get from the deal is not the UNIX name, not the customer base, not even the technologies. It is the Reseller Channel.”

At this early stage, Caldera certainly shows every sign of supporting the channel. Caldera Systems will join SCO in hosting Forum2000, SCO’s premier partnership event starting on Aug. 20 at the University of California, Santa Cruz. The company also plans to unveil its updated product offering at the show.

Chris Clabaugh, Allegrix’s CEO, agrees that the signs look good: “I view the [proposed] SCO/Caldera mix to be a good thing for customers like us.” He believes that the “consolidation of operating-system platform choices will mean simpler product choices for both customers and channel partners.” Still, he thinks the deal also needs to result in “the combination of SVR 3/4/5 enterprise features with Linux” to be a real success.

Red Hat CEO Matthew Szulik would agree with that, although in a harsher manner. “This validates what we and the IDC numbers have been saying all along about the death of the proprietary Unix market. As advocates of open source, we look forward to Caldera’s support of open sourcing SCO’s proprietary Unix technology to the entire open-source community.”

Still, Caldera’s path isn’t one that Red Hat would choose. Szulik goes on to say that, “Red Hat has made nine acquisitions in nine months–nine acquisitions that emphasize our focus on new and emerging markets. We believe that our focus on these markets, rather than the rehabilitation of old markets, is what will help Red Hat continue its role as a leading innovator in the open-source technology industry.”

The lines are drawn. Only time, and the marketplace, will tell whether the marriage of Unix and Linux will be a happy one.

Why SCO Made The Deal

Frankly, SCO had little choice but to make some kind of deal. According to IDC’s research manager for system software, Al Gillen, SCO is the leading Unix distributor, holding 37 percent of the Unix market with 313,000 software license shipments in 1999, compared to Sun’s 22 percent. Yet SCO and its Server division were still losing money hand over fist.

While the Server division, according to Doug Michels in his third-quarter report to stockholders, accounted for 92 percent of the company’s total revenue, SCO was in deep trouble. In its just-reported third quarter of 2000, revenues had dropped to an alarming $26,931,000, from $57,060,000 for the same quarter last year. Even more painful, the net loss for the quarter was $19,240,000, or $0.54 per share, compared with a net profit of $4,535,000, or $0.13 per share, a year earlier.

The picture didn’t get any brighter from a broader perspective. Overall revenues for the nine-month period ending June 30, 2000, were $116,126,000, compared with $165,504,000 for the same period in fiscal 1999. The net loss for this period was $36,174,000, or $1.02 per share, basic and diluted, compared with a profit of $11,483,000, or $0.33 per share basic and $0.32 per share diluted. Anyone could foresee what would happen next: by July 31, SCO stock had reached a near all-time low of 3-1/8.

SCO’s financials really had only one silver lining: Tarantella. While the Professional Services division showed a slight dip, Michels commented that “We were particularly pleased to see that the Tarantella division was able to increase revenues by 59 percent from the prior quarter.”

Outside observers thought SCO’s financial troubles were due to several factors. Michael Foster, VP of marketing for ASP infrastructure provider and SCO customer Allegrix Inc., and former SCO director of communications, comments that, “It could be that SCO’s caught in a perfect storm of their own. … The Linux front, combined with the Windows2000 front, combined with the Internet e-business front [IBM, Solaris, etc] all of which has SCO bobbing up and down.”

Dan Kusnetzky, VP for system software research for IDC, thinks that part of the problem was that, “Unix is a mature market. SCO is trying to transform their business and it was in their other two divisions, Professional Services and Tarantella, that they still have a good chance at growth.”

Another problem with the Unix division was that, while SCO Unix products were technically excellent, the company was, as Kusnetzky comments, “in the middle of a three-front marketing war with Linux moving into their bread-and-butter of low-end systems, Solaris moving down into IA-64 and Windows 2000.” He also noted that, although SCO is the leading Unix provider, it let the SCO brand disappear underneath customers’ names like Compaq and Oracle, so few people actually knew they were running SCO operating systems.

Michels, on the other hand, put the blame for SCO’s fall on the impact of last year’s Y2K problem. He said, “It had quite an impact on our business. And we’re slowly, much more slowly than expected, gaining business back.” But he also admitted that “The Internet, Web-centric, ASP models and Linux are starting to have an impact on the industry and will continue to do so in the future.”

The future turned out to be only a few days away.

First Published in Ziff-Davis’ Sm@rt Partner

July 26, 2002
by sjvn01

Can Linux do Web Services?

Is Linux ready to move beyond file and Web servers to application and Web services servers? The answer, if IBM has anything to do with it, is an unqualified yes.

IBM has been a major Linux supporter for years. And, with the arrival of UnitedLinux, both IBM’s internal programming efforts and its independent software vendor (ISV) partners will have much less trouble and spend less money porting and building Linux applications.

Given all this, it should come as no surprise that of the more than 300 IBM middleware products available, more than 50 are now available on Linux on IBM’s Intel-based xSeries servers and 20 are ready to go on the mainframe zSeries.

IBM’s WebSphere, its Java 2 Enterprise Edition (J2EE) application server and leading middleware product, has long been available on Linux for the X and Z series, and is in late beta on the iSeries (formerly the AS/400 line) and the pSeries (also known as the RS/6000 line). WebSphere on these platforms is expected to be available in the first quarter of 2003, with the iSeries version appearing first.

Other core IBM middleware products — its DB2 database offering, Domino, and the MQSeries — are also already available on the X and Z lines. Simultaneously, IBM ISV partners — such as AccPac, Computer Associates, Sage, SAP, and SAS — either have brought their business applications and middleware products to IBM’s Linux lines or are in the process of doing so.

Why are they bringing their middleware to all these different platforms? The simple answer is that different customers have different needs. Adam Jollans, IBM’s Linux strategy manager, says a small shop or a decentralized company might go with the xSeries, whereas a larger company that’s comfortable with centralized computing might go with the midrange iSeries or mainframe zSeries.

The one platform that isn’t getting much attention is the pSeries. While acknowledging that most RS/6000 administrators are happy with the more mature AIX 5L, Jollans says IBM plans to bring its middleware offerings to Linux on this platform as well. He foresees a day when Linux-only shops will want to move to the eight-way processing power of the pSeries, and they’ll want to take their Linux middleware-based applications with them.

Jollans says customers want enterprise middleware on Linux for workload consolidation, server consolidation, and Web applications. “Two to three years ago, it was technical people wanting Linux for file and Web servers,” he says. “Now it’s IT managers and CIOs looking for a good, stable operating system and middleware.” Cost is another consideration. “Time and time again our customers are looking at Linux as a way to save money,” Jollans says. “Rarely do other options come to play.”

Ed Lynch, IBM’s Linux systems manager, says another long-term driver is making IT departments look to Linux middleware. “What keeps CIOs up at night? ‘I’ve got too much work to do and not enough bodies to do it.’ So where are people with skills and what skills do they have? There’s a natural wave to ride on and that’s Linux,” he says.

Another reason, according to Dan Kusnetzky, IDC vice president for system software research, is that IBM sees Linux as an emerging market. With little additional effort, he says, IBM can move its AIX efforts to Linux while supporting Linux unification. “With this, IBM can ride Linux into new markets and into places where IBM hasn’t been able to get into for years.” Kusnetzky considers this Linux strategy a wise move because for IBM, “the more platforms it can play on, the more revenue it can attain.”

Exactly how much revenue IBM gets from Linux middleware (and Linux in general) is almost impossible to determine. IBM refuses to reveal exact revenues generated from its famous billion-dollar investment in Linux and AIX. Kusnetzky believes that AIX drives most of the revenue, but that in two to three years, Linux and its middleware will be IBM’s most profitable line.

Who else is backing Linux?

IBM isn’t the only big software vendor to throw its weight behind Linux.

Oracle is also making a big Linux middleware push. While IBM is essentially working with two major Linux vendors (UnitedLinux and Red Hat), Oracle and Red Hat are working hand-in-glove in a three-way play in which Dell provides hardware, Red Hat develops an Oracle-friendly Linux (Red Hat Advanced Server), and Oracle adds Real Application Clusters and support for Red Hat’s clustering file system to its Oracle 9i database. In this way, the three companies can give customers a complete package of hardware, operating system, and middleware — just as IBM and Sun already do.

Hewlett-Packard is also making a hard enterprise Linux push. But after being one of the first to embrace Linux middleware, HP appears to be moving out of the middleware business. Sun is also quickly moving into Linux middleware. In early February, Ed Zander, COO and president for Sun at the time, announced that Sun will port “the entire Sun Open Network Environment (ONE) implementation to Linux.”

Pure application server companies are also on the Linux bandwagon. For example, BEA Systems’ new high-end Java Virtual Machine, WebLogic Jrockit, runs on Linux as does WebLogic Server 7, the company’s flagship J2EE application server.

It’s not just about Linux

The move to Linux middleware represents a true change, says IDC’s Kusnetzky, and in the end helps to unify the Unix platform. “HP is talking about how HP-UX will be able to run Linux applications; so is Sun with Solaris,” he says. “ISVs are going to be asking themselves, ‘Why should I bother to develop for a specific Unix if I can develop for Linux and it will run on almost all Unix platforms?'” Kusnetzky says that IBM is smart in betting big that Linux will become the universal enterprise Unix platform of tomorrow.

Not only are IBM, Oracle, and other middleware vendors embracing Linux, they’re also embracing J2EE — almost all Linux middleware products are based on J2EE application servers. This isn’t an about-face, but it does form a strong bond between Linux and J2EE.

One could even argue, as Kusnetzky does, that CIOs decide what database and middleware they need before they decide which operating system to support, rather than vice versa. “You don’t want to lock yourself into hardware and operating systems, because that only makes it harder to migrate as technology improves,” he explains. This approach is especially safe considering that middleware development has largely divided into two camps: .Net supporters (Microsoft) and J2EE supporters (everyone else).

A version of this story was first published in ZDNet.

July 23, 2002
by sjvn01

Warp Speed! 10-Gigabit Ethernet is on its way

When Bob Metcalfe was working for Xerox at its Palo Alto Research Center (PARC), his problem was how to create a network that would enable PARC computers to use the world’s first laser printer. His answer, summarized in a May 22, 1973 memo, was to create Ethernet running at 2.94Mbps. No one knew that Ethernet would become the most popular LAN technology around.

Fast-forward to 2002, and 10GbE is a reality: researchers and engineers at the major Ethernet companies readied the final version of the 10-gigabit Ethernet (10GbE) standard, IEEE 802.3ae, which was finally approved in early June. Cisco, Extreme Networks, Foundry Networks and Nortel are already shipping pre-standard 10GbE equipment in trial runs, and the June Supercomm show featured two dozen companies showing it doing its stuff.

Even before it was approved, the technology was already being used in some products. This is because, as Val Oliva, L2/3 product marketing manager for Foundry and a member of the 10 Gigabit Ethernet Alliance (GEA) board of directors, explains, “It was designed this way because the development of 802.3ae is built upon an ‘alliance,’ which includes vendors from chip to system vendors. No other standard body (GEA) has ever performed a task that created a standard, ensured that there is a working form of the standard, and ensured that there is ‘inter-operations’ of the standard.” In short, 802.3ae was meant for pre-standard, early adoption.

10GbE isn’t your dad’s Ethernet, though. For starters, it runs only over optical fiber at this time. Besides simply running at 10Gbps, 10GbE retains traditional Ethernet’s Media Access Control (MAC) protocol, its frame format, and its minimum and maximum frame sizes. However, since 10GbE is full-duplex only-there is no half-duplex option-Carrier Sense Multiple Access/Collision Detection (CSMA/CD) isn’t needed or implemented. Besides being faster, 10GbE also has greater range than its ancestors: the 10GBASE-EW version can reach up to 40 kilometers.

Don’t look to deploy 10GbE from your clients’ office wiring closets to their desktops anytime soon; the office still belongs to Ethernet and Fast Ethernet. Instead, according to the GEA’s 10GbE Overview White Paper, 10GbE will serve as a backbone, switch-to-switch, and server-to-switch technology that will enable administrators to extend gigabit to the desktop. This will greatly enhance applications such as videoconferencing, streaming video, high-end graphics and medical imaging.

Seamus Crehan, senior analyst for the Dell’Oro Group, also thinks that “Storage Area Networks (SANs) could be a really big application market for 10GbE.” Indeed, Adaptec, Intel, HP, and QLogic engineers think that the Internet small computer system interface (iSCSI) could be used to transport data I/O over 10GbE-borne IP.

Where many proponents really see 10GbE coming into its own, though, is outside Ethernet’s traditional stronghold of LANs, in the wider world of metropolitan area networks (MANs) and wide area networks (WANs). In this arena, a unified, Ethernet-based network topology could ease network management and reduce network-related costs. According to Bruce Tolley, VP of the GEA and manager of emerging technologies for Cisco, 10GbE reduces the need for non-Ethernet technologies like ATM and “leverages the installed base of 250 million Ethernet ports,” making Ethernet cheaper than other transport technologies. “In the end, economics always matter.”

Will this new Ethernet make the grade? Crehan thinks so: “It will be very successful in LANs and very likely to be highly successful in MANs and WANs.”

The Why of 10GbE

According to Crehan, what’s driving 10GbE’s development is that “Ethernet has become an extremely prevalent technology, because it’s inexpensive and easy to use”; 10GbE was just the next evolutionary step. “Although there isn’t a huge amount of bandwidth drivers now, as Gigabit switches become more prevalent, 10GbE will come up.” He goes on, “People can’t wait until the bottleneck exists, they need to work on standards today.” Thus, 10GbE was a “forward thinking move.”

On the WAN and beyond, service providers will be able to run entire networks on Ethernet, thereby creating a unified network topology. Oliva says point-blank that 10GbE is meant “to push Ethernet in the MAN and WAN, everywhere. In the end, Ethernet will dominate L2, just like IP dominated L3.”

Frits Reip, senior product marketing manager for Nortel, has also said that 10GbE will offer an 84% interface cost saving over SONET and savings of 30 to 60% per user over other managed network services. In the end, 10GbE adoption will be driven as much by market performance as by throughput performance.

10GbE Technology

Besides the technical differences mentioned earlier, 10GbE also differs from its forefathers in two significant ways. The first is an optical transceiver, or physical medium dependent (PMD) interface, for single-mode fiber that works with either the LAN physical layer (PHY) or the WAN PHY.

This has caused some people to think that there are two different kinds of 10GbE. That’s not true. Oliva says, “There’s one version and it’s 802.3ae. There are several 802.3ae PMDs (Physical Media Dependent) and they are classified into two categories: WAN PMDs (‘W’) and LAN PMDs (‘R’).”

Another point of difference is that the WAN PHY option lets you transparently transport 10GbE over existing SONET OC-192c fibre. This optional PHY incorporates a simple, inexpensive SONET framer and operates at a data rate compatible with OC-192’s 9.953Gbps speed. The long and short of this OC-192-compatible asynchronous Ethernet interface is that it will enable service providers to seamlessly run 10GbE over existing OC-192-compatible fibre and transponders.
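
The rate matching behind the WAN PHY is simple arithmetic: SONET OC-192 carries 9.95328Gbps on the line, but its framing overhead leaves roughly 9.58Gbps for payload, so the WAN PHY paces the 10GbE MAC down to fit. The figures below are the commonly cited ones and should be read as approximate.

```python
# SONET OC-192 rate budget (commonly cited figures, in bits per second).
oc192_line_rate = 9.95328e9   # raw rate on the fiber
sonet_payload = 9.58464e9     # left over after SONET framing overhead

overhead = 1 - sonet_payload / oc192_line_rate
print(f"SONET framing consumes about {overhead:.1%} of the line rate")
print(f"so the WAN PHY paces the MAC to ~{sonet_payload / 1e9:.2f}Gbps")
```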

Or, as Richard Cunningham, a Cahners In-Stat analyst, points out, you could even “run SONET on some wavelengths on a single fiber and run 10-GE on other wavelengths on the same fiber, as long as you don’t have optical amplifiers in the chain.”

Most 10GbE PMDs run a single wavelength over fibre optics, but one type, the LX4 PMD, is based on Wide Wavelength Division Multiplexing (WWDM). LX4 uses four wavelengths of light over a single pair of fiber optic cables. With this, the four wavelengths are “bonded” to create 10 Gigabit speeds.
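
The lane arithmetic behind that bonding is straightforward. Assuming the commonly cited LX4 figures of four lanes at 3.125 Gbaud each with 8b/10b line coding (10 line bits carry every 8 data bits), the math works out as in this sketch:

    # LX4 sketch: four WWDM wavelengths bonded into one 10Gbps pipe.
    lanes = 4                    # wavelengths on one fiber pair
    baud_per_lane = 3.125e9      # symbol rate per lane
    coding_efficiency = 8 / 10   # 8b/10b coding: 8 data bits per 10 line bits
    data_rate = lanes * baud_per_lane * coding_efficiency
    print(data_rate / 1e9)       # -> 10.0 Gbps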

One advantage that 10GbE has over SONET and other optical technologies is that it can be run on dark fibre: installed fibre, or unused wavelengths on dense wave division multiplexing (DWDM) equipment, that isn’t yet carrying traffic. By making use of this untapped resource, 10GbE will enable service providers to get more use out of their existing fibre.

New Uses For a New Ethernet

Oliva sees the MAN and WAN as 10GbE’s destination. “The challenges for Ethernet’s acceptance into the MAN and WAN are several, and 802.3ae (10 Gigabit Ethernet) resolves two of those challenges – bandwidth and distance. To move Ethernet into the ‘final frontier,’ which is the WAN, it needs to make sure that those challenges are knocked off. There are more challenges, but 802.3ae is a move in the right direction.”

Tolley agrees, writing, “with 10 Gigabit Ethernet backbone networks, service providers will be able to offer native 10/100/1000 Mbps Ethernet as a public service to customers, namely offering the customer twice the bandwidth of the fastest public MAN services OC-3 (155 Mbps) or OC-12 (622 Mbps) with no need for the added complexity of SONET or ATM and no need for protocol conversion.” But the LAN is also important. He notes that 10GbE switches will be used for server interconnect, campus backbones, and aggregating Gigabit switches.

From the outside looking in, Crehan agrees for the most part, but he also cautions that “there’s already a large, recent installed base of SONET equipment and service providers aren’t looking to junk that anytime soon.”

Crehan also expects that “many switches will have multiple interfaces, ATM to OC-48 to 10GbE.” In any case, “you won’t see companies fork-lifting new equipment in.” Instead, “service providers, who are already trying to squeeze as much revenue as they can out of the equipment they already have,” will gradually migrate to 10GbE.

He sees the first “significant volume deployment” hinging on “the enterprise as LAN backbone and maybe dark fibre in campus situations.” But the “price needs to be more attractive than it is right now.”

At first, the prices won’t be that alluring. Crehan says that the first prices you’ll see will run “20K to 90K list price per port” on 10GbE, with the shorter-reach versions much cheaper than the long-reach ones. At 10-kilometer distances, for example, he says you’ll see “70 to 80 grand today.” But he expects “aggressive price drops of from 30 to 40% will be seen quickly. We’re very early now, but as more vendors come in the prices will continue to drop aggressively. On a dollars per gigabit basis, at least as cheap as gigabit Ethernet.” He thinks that soon you’ll see a single 10-Gigabit Ethernet port priced comparably to 10 1-Gigabit ports. By 2005, he expects 10GbE to have become a billion-dollar market.
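
To put Crehan’s dollars-per-gigabit yardstick in concrete terms, here’s a hedged bit of arithmetic. The $70K port price is from his quote above; the per-gigabit figure and the compounding price-drop trajectory are derived numbers for illustration only, not anything he stated:

    # Dollars-per-gigabit sketch using Crehan's quoted figures.
    price_10gbe_port = 70000.0      # quoted early list price, USD
    print(price_10gbe_port / 10)    # -> 7000.0 USD per gigabit, which is
                                    # also the 1GbE port price at which a
                                    # 10GbE port matches ten 1GbE ports
    # His "30 to 40%" drops compound quickly; assume a hypothetical
    # 35% cut per year:
    price = price_10gbe_port
    for year in (2003, 2004, 2005):
        price *= 1 - 0.35
        print(year, round(price))   # -> 45500, 29575, 19224 (illustrative)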

Quality of service (QoS) is also a concern. Cunningham says, “SONET offers a good deal of redundancy, up to and including a full restoration path, so that the system can sense a path failure, and then reroute the signal to the destination via another path within 50 milliseconds. The 50-ms figure, also known as latency, is a carryover from voice traffic; if someone’s speech is broken up in chunks and shipped from here to there with more than a 50-ms differential delay, the result is garbled to the human ear.” Ethernet on its own, whether plain old Ethernet or 10GbE, doesn’t have that kind of redundancy. Some question remains, though, as to whether 10GbE needs these kinds of voice-carrier QoS features.

Some observers also wonder whether, even at 10Gbps, 10GbE is fast enough for the long run. A few of them cite OC-3072 SONET, which would run at 160Gbps, as an example of a technology that would blow 10GbE’s doors off. Cunningham replies, “Don’t hold your breath for OC-3072. That’s a long way in the future. Granted, Lucent and the Heinrich-Hertz-Institut in Berlin will be doing some experimental work on in-ground fiber near Darmstadt this summer, but it’s hard to imagine any 160-G stuff deployed before 2010, and I’m being optimistic there. There are a huge number of physics problems to be overcome before that can get out of the laboratory.”
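
For reference, OC-n line rates are simple multiples of OC-1’s 51.84 Mbps, which is where the roughly 160Gbps figure for OC-3072 comes from. A quick sketch:

    # OC-n line rates are n multiples of OC-1's 51.84 Mbps.
    for n in (3, 12, 48, 192, 3072):
        print("OC-%d: %.2f Mbps" % (n, n * 51.84))
    # OC-192 -> 9953.28 Mbps (the 10GbE WAN PHY's target line rate);
    # OC-3072 -> 159252.48 Mbps, i.e. roughly 160 Gbps.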

But as for 10GbE, its only real problem, according to our experts, is price. Crehan expects 10GbE pricing to drop to affordable levels even for a tight economy. If it doesn’t, he foresees a far less rosy future for 10GbE.

Oliva, though, is sure that 10GbE will prosper. As she says, “Ethernet is IP’s best friend, and in the end, IP wants Ethernet.”

SIDEBAR: Gigabit to the Desktop?

While there may be few deployments of 10GbE in the next few months, where it is deployed, you might be able to turn it into additional work by suggesting the deployment of Gigabit Ethernet to the desktop. With sufficient bandwidth now available over campus backbone networks, gigabit to the desk is much more attractive, since a few desktops can no longer eat an entire network’s bandwidth with one gigantic, ill-timed file download.

But, for that to happen, you’ll need to check your gigabit network cards and drivers very carefully. Many network integrators have complained to me that gigabit cards have an abnormally high failure rate and that the drivers, especially on Windows, aren’t all they’re cracked up to be.

All that said, with proper testing, you may be able to persuade customers that, so long as they’re ramping up their WAN backbone, they might want to consider giving their most heavily used LANs a Gigabit steroid injection.

June 10, 2002
by sjvn01
0 comments

Can’t give up your Windows applications? Win4Lin 4.0 is for you

You say you love Linux, but you absolutely must have your Microsoft Office and Quicken, too? Well, you’re in luck. NeTraverse’s latest Win4Lin 4.0 Workstation lets you run Office XP, Quicken, Lotus Notes, PhotoShop and a host of other common (and not so common) office programs.

Of course, there are other ways to bring Microsoft Windows applications to Linux. Recently, CodeWeavers’ CrossOver Office began providing a way for Microsoft Office lovers to use Office, and a few other programs including the Lotus Notes 5.0x client, on their Linux machines. That said, for stability, speed, and the sheer range of Windows applications supported, it’s hard to beat Win4Lin 4.0.

Win4Lin enables you to run Windows 95, 98, 98 Second Edition (SE) or Windows ME on any of its supported Linux distributions. To do so, of course, you actually need to have an installable copy of Windows in hand. An upgrade-only Windows disc won’t do the job any more than it would on a PC without an operating system. You can also forget about installing NT, 2000 or XP.

For my money, your best choice is Windows 98SE. ME will run just as well as ME ever does, which is just another way of saying you’re better off with 98SE.

On the Linux side, you will need to use one of NeTraverse’s modified kernels or modify the kernel yourself. Some people have asked that NeTraverse issue a modified kernel for every Linux implementation that comes down the pike. Because that’s expensive, and because NeTraverse just announced its support for UnitedLinux, that’s not going to happen. In any case, with support for most popular desktop Linuxes, the vast majority of Linux users will never even notice.

If you already have Win4Lin 3, it’s an easy upgrade path to version 4. You will, however, need to purchase a new installation license. This costs $49.99. If you’re new to Win4Lin, you can download and run it for $89.99. A boxed version is available for $99.99.

Whichever way you get a copy, installation is a breeze. I installed Win4Lin on both an HP Pavilion running Red Hat 7.1 with a 1.4GHz Athlon XP and an HP Pavilion with SuSE 8.0 and a 1GHz Pentium III. On the Red Hat machine, I set up first Windows ME and then 98SE. On the SuSE system, I stuck with 98SE.

Once Windows was on these machines, though, I did run into some customization problems. For the most part, these were commonplace Windows ME/98SE problems that I’ve seen before in ordinary Windows installations.

The one exception was in setting the machine to use a Virtual Network (VNET) so that I could more easily use the Windows Network Neighborhood to hook up to my network drives and printers. My problem was that I couldn’t connect with my network’s Dynamic Host Configuration Protocol (DHCP) server. A quick look through the release notes set me right. In any case, had I chosen to give the VNET adapter a static IP address, I wouldn’t have run into any trouble at all.

To see what Win4Lin could really do, I decided to take the logical step of installing my full application workload on the SuSE system and working with it for a week.

So I installed all of Office 2000, except Outlook; FrontPage 2002 from Office XP; Pegasus Mail 4.01; Adobe Acrobat 5; RealPlayer 6; Adobe PhotoShop 5.5; Lotus Organizer 6.0; Internet Explorer 5.5 SP2; Macromedia DreamWeaver 4.0 and that blast from word processing’s past, WordStar 7.0.

But, especially with the Microsoft products, that was only the start. I also had to update and patch many of these programs to plug security holes. This can often go wrong when you’re running one operating system on top of another, but Win4Lin downloaded and installed the updates like a champ.

In fact, this is when I noticed one of the advantages of running 98SE under Win4Lin instead of directly on the hardware: shutting down and booting Windows went much faster on Linux than Win98SE did on the same system running by itself.

The real proof of any system, though, is how well it handles your day-to-day work, not just how good it looks in a benchmark suite. Once more, Win4Lin proved itself a winner. Over a week, the system never froze — although Windows had locked up on that very same hardware running the same workload. I can’t claim that everyone will find Win98SE more stable on Linux, but that was my experience.

I also found the system to be very fast. The total system RAM was 256MB, and with this edition of Win4Lin, Windows can access up to 128MB of it. This made a real difference when running such memory-hungry applications as PhotoShop and DreamWeaver at the same time, which is something I do often. While not as fast as they would have been running natively on the hardware, they were perfectly usable.

And, for my day-to-day work combination of Word, Excel, Lotus Organizer, Pegasus Mail, and Internet Explorer, I wouldn’t have noticed that Linux was also running were it not for the fact that I also had my typical Linux application lineup of KDE 3.0, Netscape 6.2, Konqueror, elm, and vi going.

Now, to make Win4Lin perform like this, you do need to pay attention to the release notes. In particular, you must enable backing store in XFree86 4.0.x. In most distributions the default is to turn this off, but turning it on gives Win4Lin a real graphics kick in the pants.
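
On most XFree86 4.x setups of this era, backing store is switched on with an Option line in the Device section of the XF86Config-4 file. The exact file location, identifier, and driver vary by distribution, so treat the stanza below as an illustration rather than a drop-in config, and check the Win4Lin release notes for the spelling they expect:

    Section "Device"
        Identifier "My Video Card"   # whatever identifier your file already uses
        Driver     "vesa"            # substitute your actual driver
        Option     "backingstore"    # the setting Win4Lin wants enabled
    EndSection

Restart the X server after making the change so it takes effect.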

You may also want to give Windows access to more memory using the winsetup configuration utility located in /usr/bin. By default, for example, Win98SE only has access to 24MB of RAM. Simply giving your Windows session more RAM may not actually help, though. As is always the case with system performance tuning, you should read all the instructions and not simply assume that the maximum values are the best ones.

For my purposes, daily office work, Win4Lin is a keeper.

It may not, however, be for you. While it now has support for wheeled mice, it still lacks support for USB, FireWire, DirectX, CD-writing and many other useful, but not necessarily essential, hardware and software additions.

But, if what you want is a solid way to bring most Windows office and home applications to either your desk or your workplace, Win4Lin is for you.

A version of this story first appeared in NewsForge.