Practical Technology

for practical people.

May 19, 2014
by sjvn01
0 comments

Windows 8.1 Update 1, now with less annoyance

Some disasters are easy to see coming. All you had to do was look at Windows 8 and its Metro — excuse me, Modern — interface back in its beta days and you knew it was going to fly like a pigeon with concrete overshoes. Years went by and Windows 8.1 was — better, but still basically awful. Now Windows 8.1 Update 1 is here, and I’ve been trying it for the past few weeks and, ah, the best I can say is it sucks less.

Windows 8.1 Update 1, now with less annoyance. More>

April 24, 2014
by sjvn01
0 comments

Here comes the black market for XP patches

“Hey buddy.”

“Yeah?”

“What do you think I’m looking for?”

“XP SP 4.”

“Jeeze. Really?”

“Yeah, but it will cost you…”

I expect conversations along those lines to happen in real life on, say, May 13, 2014. Why that date? It will be the first Patch Tuesday when Microsoft will no longer be releasing patches for Windows XP — unless you’re one of those big XP customers, such as the IRS, that didn’t leave themselves enough time to get off of XP. If you’re one of those, then Microsoft will allow you to buy XP support for at least another year via its Custom Support plan.

Here comes the black market for XP patches. More>

April 21, 2014
by sjvn01
0 comments

What’s the best smartphone? That’s the wrong question.

As my town’s resident technology expert, I am sometimes approached by complete strangers with tech questions. Lately, 9 out of 10 ask, “What’s the best smartphone?”

I can see in their faces that they would like a straightforward answer, something along the lines of, “Buy Samsung’s Galaxy S5; you’ll love it.” But I can’t say that. That’s not because the Galaxy S5 isn’t an excellent phone. Everything I’ve heard about it tells me that it is.

The only answer I can ever give is, “It depends.” And I follow that up with, “What are you going to be using the phone for? What do you really need from the phone? What can’t you do without in a smartphone?”

What’s the best smartphone? That’s the wrong question. More>

April 16, 2014
by sjvn01
0 comments

Show Me The Money: The New Open Source Motivation

If you think open source programming is still about developers working on projects for love or to scratch an itch, think again. A recent Linux Foundation survey found that today’s free software developers are in it for the money.

One stereotype of an open source programmer is the guy in his parents’ basement using EMACS to write obscure code for a GitHub game project. Another common image is that of a passionate hacker pounding away on Linux kernel code, motivated by nothing but her sheer love of programming. People like that do exist, but they’re not representative of 2014’s open-source developers.

No one was more surprised by this discovery than The Linux Foundation itself, which conducted a survey of 686 software developers and business managers at some of the world’s largest businesses. These included such companies as Cisco, Fujitsu, HP, IBM, Intel, Google, NEC, Oracle, Qualcomm, and Samsung. Most of the respondents work at organizations with $500 million or more in annual revenue (69%) and more than 500 employees (76%). In short, the people who responded to the market research survey were programmers and team leads from enterprise companies.

The report, Collaborative Development Trends Report 2014, looked at open source-based collaborative development projects such as the software-defined networking (SDN) OpenDaylight project, virtualization’s Xen Project, and the OpenStack cloud project.

Given that these collaborative projects sprang from corporations’ needs instead of individuals’, some of the survey’s findings weren’t surprising. Today, enterprise business managers recognize open source software as a business imperative and are taking the lead in initiating open source participation, says the report.

For example, it’s no shock that 91% of business managers and executives surveyed reported that open source, collaborative software development was somewhat important or very important to their business. As Linux Foundation director Jim Zemlin said at the Linux Collaboration Summit in March 2014, “A new business model has emerged in which companies are joining together across industries to share development resources and build common open source code bases on which they can differentiate their own products and services. … In the past, collaboration was done by standards committees; now it’s being done by open source foundations.”

Ten years ago, as the Linux Foundation reported, open source software was largely a grassroots movement. “Developers undertook and contributed to open source projects and brought them into the workplace. Business managers oftentimes weren’t even aware that their products were being built with open source tools and components.”

That isn’t the way it is now. Sure, developers do still get started by contributing to an open source project on their own time; almost 35% said they started contributing in their free time. But even more developers are introduced to open source projects on their jobs, with 44% of the respondents reporting job requirements were the primary reason they started contributing.

The situation definitely changed a decade back. Enterprise software developers with 10 or more years of experience were more likely to have started in their free time. Developers with fewer than 10 years of experience largely started due to job requirements.

That’s right. Today, people get into open source because the job requires it. You can’t get much more mainstream than that.

This has happened, the survey reveals, because of the rise of Linux and open source software within the enterprise. Sixty-one percent of the software developers agreed that open source software and/or collaborative development are “on the incline” to become the de-facto way to build software; almost 33% said Linux and open source “dominate” software practices today.

First people use open source. Then, some are motivated to contribute or even to create their own projects. As the report found, “For companies that have embraced Linux and open source software, collaborative development is the natural next step along the spectrum of open source participation that begins with consumption – using open source tools and components – and progresses to contributions with increasing levels of commitment through fixing bugs, writing code and finally starting and leading new projects.”

It’s not just open source projects in and of themselves that are driving this trend. The survey also showed that the use of open source software development tools is pervasive. Almost 96% use common open source software, such as Git and Subversion, for things like version control. And both developers (93%) and managers (91%) use open source development tools to participate in the open source community.

No wonder open source jobs are hotter than hot. Nowadays, if you want to work in enterprise computing, it’s a necessary skill.

Show Me The Money: The New Open Source Motivation. A version of this story first appeared on SmartBear.

April 8, 2014
by sjvn01
0 comments

Why Containers Instead of Hypervisors?

Our cloud-based IT world is founded on hypervisors. It doesn’t have to be that way – and, some say, it shouldn’t be. Containers can deliver more services using the same hardware you’re now using for virtual machines, said one speaker at the Linux Collaboration Summit, and that spells more profits for both data centers and cloud services.

I confess that I’ve long been a little confused about the differences between virtual machine (VM) hypervisors and containers. But at the Linux Collaboration Summit in March 2014, James Bottomley, Parallels‘ CTO of server virtualization and a leading Linux kernel developer, finally set me straight.

Before I go further, I should dispel a misconception you might have. Yes, Parallels is best known for Parallels Desktop for Mac; it enables you to run Windows VMs on Macs, and yes, that is a hypervisor-based system. But where Parallels makes its real money is with its Linux server-oriented container business. Windows on Macs is sexier, so it gets the headlines.

So why should you care about hypervisors vs. containers? Bottomley explains that hypervisors, such as Hyper-V, KVM, and Xen, all have one thing in common: “They’re based on emulating virtual hardware.” That means they’re fat in terms of system requirements.

Bottomley also sees hypervisors as ungainly and not terribly efficient. He compares them to a Dalek from Dr. Who. Yes, they’re good at “EXTERMINATE,” but earlier models could be flummoxed by a simple set of stairs and include way too much extra gear.

Containers, on the other hand, are based on shared operating systems. They are much skinnier and more efficient than hypervisors. Instead of virtualizing hardware, containers rest on top of a single Linux instance. This means you can “leave behind the useless 99.9% VM junk, leaving you with a small, neat capsule containing your application,” says Bottomley.

That has implications for application density. According to Bottomley, using a totally tuned-up container system, you should expect to see four-to-six times as many server instances as you can using Xen or KVM VMs. Even without making extra effort, he asserts, you can run approximately twice as many instances on the same hardware. Impressive!

Lest you think this sounds like science fiction compared to the hypervisors you’ve been using for years, Bottomley reminds us that “Google invested in containers early on. Anything you do on Google today is done in a container—whether it’s Search, Gmail, Google Docs—you get a container of your own for each service.”

To use containers in Linux, you use the LXC userspace tools. With these, applications can run in their own containers. As far as the program is concerned, it has its own file system, storage, CPU, RAM, and so on.

So far that sounds remarkably how a VM looks to an application. The key difference is that while the hypervisor abstracts an entire device, containers just abstract the operating system kernel.

LXC’s entire point is to “create an environment as close as possible to a standard Linux installation but without the need for a separate kernel,” says Bottomley. To do this it uses these Linux kernel features:

  • Kernel namespaces (ipc, uts, mount, pid, network, and user)
  • AppArmor and SELinux profiles
  • Seccomp policies
  • Chroots (using pivot_root)
  • Kernel capabilities
  • Control groups (cgroups)

The one thing that hypervisors can do that containers can’t, according to Bottomley, is to use different operating systems or kernels. For example, you can use VMware vSphere to run instances of Linux and Windows at the same time. With LXC, all containers must use the same operating system and kernel. In short, you can’t mix and match containers the way you can VMs.

That said, except for testing purposes, how often in a production environment do you really want to run multiple operating system VMs on a server? I’d say “Not very damn often.”

You might think that this all sounds nice, but some developers and devops believe that there are way too many different kinds of containers to mess with. Bottomley insists that this is not the case. “All containers have the same code at bottom. It only looks like there are lots of containers.” He adds that Google (which used cgroups for its containers) and Parallels (which uses “bean-counters” in OpenVZ) have merged their codebases, so there are no practical differences between them.

Programs such as Docker are built on top of LXC. In Docker’s case, its advantage is that its open-source engine can be used to pack, ship, and run any application as a lightweight, portable, self-sufficient LXC container that runs virtually anywhere. It’s a packaging system for applications.

The big win here for application developers, Bottomley notes, is that programs such as Docker enable you to create a containerized app on your laptop and deploy it to the cloud. “Containers give you instant application portability,” he says. “In theory, you can do this with hypervisors, but in reality there’s a lot of time spent getting VMs right. If you’re an application developer and use containers you can leave worrying about all the crap to others.”

Bottomley thinks “We’re only beginning to touch what this new virtualization and packaging paradigm can mean to us. Eventually, it will make it easier to create true cloud-only applications and server programs that can fit on almost any device.” Indeed, he believes containers will let us move our programs from any platform to any other platform in time and space… sort of like Dr. Who’s TARDIS.

A version of this story was first published in SmartBear.

April 2, 2014
by sjvn01
0 comments

GOTO Still Has a Place in Modern Programming. No! Really!

Mea culpa! Sometimes, the experts agree, GOTO can be very useful.

When I wrote a few weeks ago about Apple’s SSL GOTO security fiasco, I put the blame on GOTO. I quoted no less a seer than programming guru Edsger W. Dijkstra, who wrote way back in 1968 that the goto statement should be abolished from all “higher level” programming languages.

Since then, I’ve had it gently drilled into my head by programming experts that sometimes — okay, many times — using GOTO not only makes perfect sense, it’s the best choice.

For example, SmartBear reader ohengel wrote, “I generally agree that many uses of the goto command are undesirable, but not all are. A good counter-example is the state machine, sometimes also used and known as a syntax machine. It works very well and with unmatched performance when you use goto commands.” State machines transition from one state to another when certain Boolean conditions are met.

John Stracke, a software architect for ITA Software agrees with ohengel. Stracke writes, “I hate GOTO, you hate GOTO, all Right-Thinking Modern Programmers hate GOTO. It’s a primitive, archaic construct, left over from machine languages, where primitive constructs make sense. When used, it enables bad coding practices, leads to bizarre bugs, and generally contributes to the decline of civilization. But, there is one case where GOTO is The Right Thing: finite state machines.”

Stracke explains, “Think about it. The states of a finite state machine basically correspond to the program counter of a CPU. In this view, state transitions are GOTOs, right? So you can either build your code to store your FSM’s state in some sort of variable (in which case assignments to that variable are GOTOs in disguise), or you can be honest with yourself and write GOTO-based code.” Okay, so that’s one example, but how often do you use state machines?

Well, it turns out GOTOs are still alive, well, and doing useful work in more than just this one example.

Indeed, GOTOs perform essential work in the Linux kernel. No less a figure than Dirk Hohndel, Intel’s Chief Linux and Open Source Technologist, set me straight. He writes, “The Linux kernel is full of GOTOs. So are most other well-maintained C programs. GOTO used in the right circumstances makes for easier-to-read, easier-to-maintain code.”

“For example?” I ask.

Hohndel replies, “It’s quite simple. If you write a function that tests lots of conditions but needs to clean up when exiting, you can either do this with lots and lots of identical code repeated all over the code, or with nested IF statements that get way too many levels of indentation – or you can have a clearly marked exit point and jump to it.”

Of course, you can’t use GOTO indiscriminately. Hohndel gave a few simple rules for the correct use of GOTO:

  • Only jump forward, never backward.
  • Only jump to the end of a block. (It doesn’t have to be the end of the function, but usually is.)
  • Use good names for the label (e.g. goto error_cleanup;).
  • Use it consistently.

For a sample of when it’s done right, Hohndel points to When To Use Goto When Programming in C.

As for Apple’s fiasco, Hohndel opines, “The problem with Apple’s code is a lack of review, not the use of GOTO.”

Hohndel’s not the only top programmer who sees positive good coming from correct GOTO use. Jeff Law and Jason Merrill, Red Hat engineers and, oh by the way, both members of the GCC (GNU Compiler Collection) steering committee, believe that GOTO can sometimes help with code clarity. In the case of the Apple SSL error, Merrill told Dr. Dobbs, it appears GOTO was used to make the code easier to read. Using GOTO to micro-optimize your code, however, is still a bad idea. “Clarity trumps micro-optimization most of the time,” says Merrill.

Last, but by no means least, no less a figure than Linux’s creator, Linus Torvalds defended GOTO in the LKML (Linux Kernel Mailing List) back in 2003. Torvalds wrote:

I think GOTOs are fine, and they are often more readable than large amounts of indentation. That’s especially true if the code flow isn’t actually naturally indented (in this case it is, so I don’t think using GOTO is in any way clearer than not, but in general GOTOs can be quite good for readability).

. . .

[S]ometimes structure is bad, and gets into the way, and using a GOTO is just much clearer. For example, it is quite common to have conditionals THAT DO NOT NEST.

In which case you have two possibilities

– use GOTO, and be happy, since it doesn’t enforce nesting

This makes the code more readable, since the code just does what the algorithm says it should do.

– duplicate the code, and rewrite it in a nesting form so that you can use the structured jumps.

This often makes the code much less readable, harder to maintain, and bigger.

Okay! I surrender. I’m at best a mediocre programmer, but if Torvalds, other programmers, and three top Linux developers, who work with C every day of the year, agree that GOTO can be extremely useful – who am I to argue?

So, go forth! Use GOTO! Just make darn sure you’re using it correctly is all I ask of you.

A version of this story was first published in SmartBear.