Practical Technology

for practical people.

January 5, 2022
by sjvn01
0 comments

Lenovo IdeaPad Duet 5: Great Chromebook, great tablet

I’m usually not keen on either ARM-powered Chromebooks or dual Chromebooks/tablets. But, the latest Lenovo IdeaPad Duet 5 has me reconsidering.

This hybrid laptop is a 2-in-1 Chromebook and tablet. The lightweight 13.3-inch tablet/display is completely detachable.

Its screen is one of the things I love about this combo. It’s an organic light-emitting diode (OLED) Samsung Display panel, which provides incredible colors, darks, and contrast. I am not a fan of watching videos on small screens. That’s why I paid serious money for an LG OLED77C1 4K TV. I mean, why would I watch something on a small display when I have a 77-inch TV? But, for the first time, I have a portable screen I’ll be happy to watch the next episode of Star Trek: Discovery on.

Lenovo IdeaPad Duet 5: Great Chromebook, great tablet. More>

January 4, 2022
by sjvn01
0 comments

The open office floor plan: rethinking an awful idea

Some friends and I were talking about the new Google building at 2000 North Shoreline Blvd. in Mountain View, Calif. Truth be told, we’re not impressed. As one person said, it looks like a sagging tent city covered in dragon scales (and not in a good way). But the real kicker? Someone else dared to hope that at least it would have real offices instead of an open office floor plan.

Alas, it won’t. And “that’s one reason why I’m never going back into the office again,” one of my friends declared. “At least at home, I have a private office where I can close the door. Those days are so long gone at ‘work.’”

Personally, I believe open offices are one of the many reasons behind the Great Resignation. Indeed, the hate generated by them is one reason so many people love the idea of working from home now.

The open office floor plan: rethinking an awful idea. More>

January 3, 2022
by sjvn01
0 comments

Cleaning up the Linux kernel’s ‘Dependency Hell’: This developer is proposing 2,200 commit changes

Last year, Linux’s source code came to a whopping 27.8 million lines of code. It’s only gotten bigger since then. Like any 30-year-old software project, Linux has picked up its fair share of cruft over the years. Now, after months of work, senior Linux kernel developer Ingo Molnar is releasing his first stab at cleaning it up at a fundamental level with his “Fast Kernel Headers” project.

The object? No less than a comprehensive clean-up and rework of the Linux kernel’s header hierarchy and header dependencies. Linux contains many header (.h) files. To be exact, there are about 10,000 main .h headers in the Linux kernel within the include/ and arch/*/include/ hierarchies. As Molnar explained, “Over the last 30+ years they have grown into a complicated & painful set of cross-dependencies we are affectionately calling ‘Dependency Hell’.”

Cleaning up the Linux kernel's 'Dependency Hell': This developer is proposing 2,200 commit changes. More>

December 9, 2021
by sjvn01
0 comments

It’s time to move off CentOS 8: here are your best choices.

The end of CentOS 8 Linux has been coming for a while now, and the day is finally here. On December 31, 2021, Red Hat‘s CentOS Linux 8 will reach End Of Life (EOL). Since that falls right in the heart of the holiday season, Red Hat will extend CentOS Linux 8 zero-day support until January 31, 2022. Indeed, there will be one last CentOS Linux 8 release — perhaps even after CentOS 8’s official EOL. After that, it’s all over for CentOS Linux.

What can you do now?

CentOS Linux 8 is about to die. What do you do next? More>

November 8, 2021
by sjvn01
0 comments

Last of original SCO v IBM Linux lawsuit settled

Well, that took long enough! But, what’s this? The lawsuit still lingers on in one last case from the company that bought SCO’s Unix operating systems.

While at the Linux Foundation Members Summit in Napa, California, I was bemused to find that an open-source savvy intellectual property attorney had never heard of SCO vs. IBM. You know, the lawsuit that at one time threatened to end Linux in the cradle? Well, at least some people thought so anyway. More fool they. But now, after SCO went bankrupt; court after court dismissing SCO’s crazy copyright claims; and closing in on 20 years into the saga, the U.S. District Court of Utah has finally put a period to the SCO vs. IBM lawsuit.

According to the Court, since:

All claims and counterclaims in this matter, whether alleged or not alleged, pleaded or not pleaded, have been settled, compromised, and resolved in full, and for good cause appearing,

IT IS HEREBY ORDERED that the parties’ Motion is GRANTED. All claims and counterclaims in this action, whether alleged or not alleged, pleaded or not pleaded, have been settled, compromised, and resolved in full, and are DISMISSED with prejudice and on the merits. The parties shall bear their own respective costs and expenses, including attorneys’ fees. The Clerk is directed to close the action.

Finally!

Earlier, the US Bankruptcy Court for the District of Delaware, which has been overseeing SCO’s bankruptcy, announced that the TSG Group, which represents SCO’s debtors, had settled with IBM and resolved all the remaining claims between TSG and IBM: “Under the Settlement Agreement, the Parties have agreed to resolve all disputes between them for a payment to the Trustee [TLD], on behalf of the Estates [IBM], of $14,250,000.”

In return, TLD gives up all rights and interests in all litigation claims pending or that may be asserted in the future against IBM and Red Hat, and any allegations that Linux violates SCO’s Unix intellectual property.

So is this it? Is it finally all over and only people who lived through the battle will remember it? I wish.

Xinuos, which bought SCO’s Unix products and intellectual property (IP) in 2011, sued IBM and Red Hat for “illegally copying Xinuos’ software code for its server operating systems” on March 31, 2021.

How? The Unix operating systems Xinuos acquired, OpenServer and UnixWare, still have a few customers. When Xinuos made the deal, its CEO, Richard A. Bolandz, promised that the company “has no intention to pursue any litigation related to the SCO Group assets acquired by the company. We are all about world leadership in technology, not litigation.”

That didn’t last. In the US District Court of the Virgin Islands, the company claimed:

First, IBM stole Xinuos’ intellectual property and used that stolen property to build and sell a product to compete with Xinuos itself. Second, stolen property in IBM’s hand, IBM and Red Hat illegally agreed to divide the relevant market and use their growing market powers to victimize consumers, innovative competitors, and innovation itself. Third, after IBM and Red Hat launched their conspiracy, IBM then acquired Red Hat to solidify and make their scheme permanent.

Xinuos also claims that IBM is out to destroy FreeBSD. While there’s no love lost between Linux and BSD Unix supporters, this claim is a stretch. Xinuos is throwing this claim in on the grounds that its “most recent innovations have been based” on FreeBSD.

So, Xinuos is not asking merely for damages, but for the courts to unwind IBM’s $34 billion acquisition of Red Hat! Yeah, that’s not going to happen.

While we’re one step closer, the SCO lawsuits still live on just like one of those Halloween monsters that just won’t die. But, in this go-around, there aren’t many people in the audience.

This story first appeared in ZDNet.

June 2, 2021
by sjvn01
0 comments

GitOps: The next step in cloud-native development

What do you get when you combine DevOps management and the Git distributed version control system? Say hello to GitOps.

Automation makes software better and more reliable. GitOps takes automation a step further and merges it with deployment.

DevOps, in which system administrators work hand-in-hand with developers, can speed up software development and operational deployments from months to days. At the same time, we use Kubernetes to orchestrate containers across clouds, accelerating those same development and deployment cycles.

Ultimately, both approaches lend themselves to continuous integration/continuous delivery (CI/CD). Wouldn’t it be great, thought Weaveworks CEO Alexis Richardson, if we could combine these approaches and use the Git distributed version control system as the ultimate source of truth? Then, when there’s a dispute over the correct state of the site, people would know where to go for the correct version.

It turns out Richardson was on to something. Cornelia Davis, Weaveworks’ CTO, recently said, “I believe that GitOps is the model that will dominate operations. … I think, five years from now, everybody will be doing some level of GitOps.”

Why? Because GitOps is a set of practices that enables you to manage and deploy highly distributed software that’s constantly changing without breaking a sweat.

Now if just a single vendor was promoting its approach, you might be wise to be skeptical. But it’s not just Weaveworks. Priyanka Sharma, general manager at the Cloud Native Computing Foundation (CNCF), believes GitOps is becoming to Kubernetes what Git already is to Linux: the fundamental building tool for the next generation of cloud-native computing.

GitOps is designed for orchestrated cloud-native applications and, as it now stands, is really applicable only to them.

Officially, Weaveworks defines GitOps as “a way to do Kubernetes cluster management and application delivery. GitOps works by using Git as a single source of truth for declarative infrastructure and applications. With GitOps, the use of software agents can alert on any divergence between Git with what’s running in a cluster, and if there’s a difference, Kubernetes reconcilers automatically update or roll back the cluster depending on the case. With Git at the center of your delivery pipelines, developers use familiar tools to make pull requests, to accelerate and simplify both application deployments and operations tasks to Kubernetes.”

Superficially, GitOps is quite simple. GitOps uses a version control system, Git, to house all information, documentation, and code for a Kubernetes deployment. Kubernetes then automatically deploys changes to the cluster.

Of course, simple concepts are always more complex in reality. Let’s look at the fundamentals.

1) Everything that can be described must be stored in Git.

By using Git as the source of truth, it is possible to observe your cluster and compare it with the desired state. The goal is to describe everything: policies, code, configuration, and even monitored events and version control. Keeping everything under version control enforces convergence where changes can be reapplied if at first they didn’t succeed.

The entire system is described declaratively in YAML, a human-readable data serialization language commonly used for configuration files and for data storage and transmission.

If this sounds a lot like DevOps, you’re right: it does. For example, Ansible, Azure Pipelines, Salt, and Puppet all use YAML. No matter the program, the idea is the same: Use declarations written in YAML to control operations. This approach is also known as infrastructure as code (IAC).

Within GitOps YAML files, you find not instructions like “Start 10 MySQL servers” but declarations: for instance, “There are 10 MySQL servers. These are their names.”
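
As a minimal sketch of what such a declaration looks like (the names, image, and replica count here are hypothetical illustrations, not taken from any real deployment), a Kubernetes manifest declares the desired count rather than issuing start commands:

```yaml
# Hypothetical declarative spec: "there are 10 MySQL servers."
# The controller's job is to make the cluster match this document.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  replicas: 10            # declare how many, not how to start them
  serviceName: mysql
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
```

Change `replicas` in Git, and the reconciling controller scales the cluster to match; the file remains the record of what should exist.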

The bottom line is to take fundamental DevOps and IAC concepts and move them to the cloud-native world.

2) Use a Kubernetes controller that follows an operator pattern

With a Kubernetes controller that follows the operator pattern, your cluster is always in sync with your Git repository, the source of truth. Since the desired state of your cluster is kept in Git YAML files, you can easily spot differences between your Git files and the running cluster.

For GitOps, the most popular controller is the open source program Flux. This is a collection of tools for keeping Kubernetes clusters in sync with YAML files in Git repositories and automating configuration updates when there’s new code to deploy. Indeed, although Flux’s code is changing rapidly, Flux has already been recommended by the CNCF for adoption by CD users.

With Flux, you can describe everything about the entire desired state of your system in Git. This includes apps, configuration, dashboards, monitoring, and everything else.

This means everything—and I mean everything—is controlled through pull requests. There’s no learning curve for new programmers; they just use the Git commands they already know. If there’s a production issue, it’s fixed via a pull request instead of manually changing the running system. As a big side benefit, your Git history automatically provides a log of transactions, enabling you to recover your system state from any snapshot.

Flux also uses Kubernetes Custom Resources, reports object state via Kubernetes Events, and integrates with Kubernetes role-based access control (RBAC), making it declaratively configurable.

The next version, Flux v2, is being built from the ground up to use Kubernetes’ API extension system and to integrate with Prometheus and other core Kubernetes components. In version 2, Flux supports multi-tenancy and can sync an arbitrary number of Git repositories, among other long-requested features. The Flux people are building this with the GitOps Toolkit, a set of composable APIs and specialized tools for building CD on top of Kubernetes.

There are non-Flux Kubernetes platforms that support GitOps as well, including HPE Ezmeral Container Platform. Ezmeral delivers GitOps through its Centralized Policy Management capabilities using Argo CD, a declarative, GitOps continuous delivery tool for Kubernetes.

3) Software agents are used to ensure correctness and act on divergence, a.k.a. Kubernetes reconcilers

As Richardson puts it, “One of the most important functions of GitOps is to enable a group of system changes to be applied correctly and then verified. After that, GitOps should enable the users and orchestrators to be notified [alerted] if any of the systems has drifted from the correct state so that it may then be converged back to the correct desired state, which is in Git.”
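
To make the convergence idea concrete, here is a minimal sketch in Python (with hypothetical names and plain dicts; real agents such as Flux diff live Kubernetes objects against manifests) of the comparison a reconciliation loop performs between the desired state in Git and the actual cluster state:

```python
# Sketch of one GitOps reconciliation step: compare desired state (from Git)
# with actual state (from the cluster) and emit converging operations.

def reconcile(desired: dict, actual: dict) -> list:
    """Return the operations needed to converge `actual` to `desired`."""
    ops = []
    for name, spec in desired.items():
        if name not in actual:
            ops.append(("create", name, spec))   # declared in Git, missing live
        elif actual[name] != spec:
            ops.append(("update", name, spec))   # live state drifted from Git
    for name in actual:
        if name not in desired:
            ops.append(("delete", name))         # running but not declared
    return ops

# Drift example: "web" was scaled by hand, "debug" was started outside Git.
desired = {"web": {"replicas": 3}, "db": {"replicas": 1}}
actual = {"web": {"replicas": 5}, "db": {"replicas": 1}, "debug": {"replicas": 1}}
print(reconcile(desired, actual))
# [('update', 'web', {'replicas': 3}), ('delete', 'debug')]
```

The loop runs continuously, so any out-of-band change is detected and rolled back toward what Git declares, which is exactly the alert-and-converge behavior Richardson describes.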

So, how would this work in practice? Something like this.

You make your changes to your Git files. Then Jenkins, the open source automation server, pushes these changes to the Quay container image registry, and pushes the new config and Helm charts (Helm is the Kubernetes package manager) to the master Git storage bucket. Once the merge request is complete, the automated GitOps operator detects the change and calls Flux to make the changes operational by deploying the updated YAML files to the master Git repository and, from there, to the operational Kubernetes cluster.

That may sound complicated, but once it’s set up and debugged, it will greatly speed up your deployment of applications. What do you end up with? GitOps is a CI/CD system that’s greater than the sum of its parts. As a corollary, you don’t use kubectl, the Kubernetes command-line interface, because all changes are handled automatically by the GitOps pipeline. Indeed, Richardson says it’s not a good idea to deploy directly to the cluster using kubectl, because relying on kubectl potentially leaves production open to shell-based hacking attacks.

Why GitOps?

Sharma says, just as “Kubernetes unleashes the power of cloud computing for building software fast and resiliently, GitOps is basically utilizing the Git workflows that every developer is used to.” So, how important is this approach? “Not everyone who is touching Kubernetes is using GitOps, but I know everyone wants to because it would make their life easier.”

“Everyone” includes the CNCF GitOps Working Group, which is working on best practices for the still quite new GitOps. Its members include Hewlett Packard Enterprise, Amazon Web Services, GitHub, Codefresh, Microsoft, and, of course, Weaveworks.

Besides best practices, the working group is also hammering out the GitOps Manifest. This is very much a work in progress. When done, it will define GitOps’ principles and technical aspects in a vendor- and implementation-neutral manner. It will also lay out a common understanding of GitOps systems based on shared principles rather than on individual opinion. Another aim is to encourage innovation by clarifying the technical outcomes rather than the code, tests, or organizational elements needed to achieve them.

Sharma, GitLab‘s director of technical evangelism, says it best: “For those shops doing DevOps, this approach can be appealing because GitOps brings the workflow closer to the developer.”

Programmers just keep using the Git tools they already know to push code into production. And since the workflow goes directly through Git, it’s recorded and logged. Sharma says, “There is an audit trail, the ability to revert problematic changes, and ultimately a single source of truth of what is happening in the system from both the software development and infrastructure perspective.”

Richardson says, “Imagine a world where every time you do a deployment it’s correct. And if it’s not correct, then the deployment fails completely, so you can try again or make other intelligent decisions. … That is just an incredible cost-saver in operational overhead—moving from an unsafe, semireliable system to one that is basically more robust.”

But it’s not magic. Simply storing your old operational patterns into Git won’t get you anywhere. As Cornelia Davis, Weaveworks CTO, comments, “Just because you put something in Git doesn’t make it GitOps. It isn’t actually the central part of GitOps. Ops is the central part of GitOps.”

The biggest mistake people make about GitOps, Davis says, is not getting that things are always correcting themselves: you always have to respond to change with reconciliation loops. You must think about this and use it in your approach, or you’ll just be recycling old mistakes in a new concept.

Do it right, however, and you can expect to reap the following:

  1. Increased CI/CD productivity.
  2. A better developer experience, by letting developers push code instead of managing containers.
  3. Improved stability, thanks to Git’s audit log of Kubernetes cluster changes.
  4. Better reliability as a result of Git’s built-in revert/rollback and fork from a single source of truth.
  5. And last but never least, improved cost efficiency from less downtime and improved productivity.

Sound good to you? It does to me. I predicted long before most people did that Kubernetes would become the container orchestration program. I’m now going to go out on a limb and predict that GitOps, in turn, is going to become the way most of us will end up deploying programs to the cloud in the next few years. It just makes too much sense for it not to work.

Lessons for leaders

  • Cloud-native development does an excellent job of enabling new capabilities, including GitOps.
  • GitOps enables more efficient processes and, ultimately, better service for customers than traditional approaches.
  • One of the best reasons to give GitOps a chance is that it uses tools that developers already know.

A version of this story was first published in enterprise.nxt.