
Where’s the Yelp for open-source tools?

We’d like an easy way to judge open-source programs. It can be done. But easily? That’s another matter. When it comes to open source, you can’t rely on star power.

The “wisdom of the crowd” has inspired all sorts of online services wherein people share their opinions and guide others in making choices: Amazon reviews, Glassdoor (where you can rate employers), and TripAdvisor and Yelp (for hotels, restaurants, and other service providers). You can rate or recommend commercial software, too, on mobile app stores or through sites like Product Hunt. But if you want advice to help you choose open-source applications, the results are disappointing.

It isn’t for lack of trying. Plenty of people have created systems to collect, judge, and evaluate open-source projects, including information about a project’s popularity, reliability, and activity. But each of those review sites – and their methodologies – has flaws.

Take that most archaic of programming metrics: Lines of code (LoC). Yes, it’s easy to measure. But it’s also profoundly misleading. As programming genius Edsger Dijkstra observed in 1988, LoC gives people “the reassuring illusion that programs are just devices like any others, the only difference admitted being that their manufacture might require a new type of craftsmen, viz. programmers. From there it is only a small step to measuring ‘programmer productivity’ in terms of ‘number of lines of code produced per month.’ This is a very costly measuring unit because it encourages the writing of insipid code.”
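Part of LoC’s staying power is just how trivially it can be computed, which is exactly the trap. A minimal sketch in Python (the source directory and file glob here are arbitrary placeholders):

```python
# Sketch: lines of code is trivially easy to compute, which is exactly
# why it tempts and exactly why it misleads. Path and glob are arbitrary.
from pathlib import Path

loc = sum(
    sum(1 for line in path.open(errors="ignore") if line.strip())
    for path in Path("src").rglob("*.py")
)
print(f"non-blank lines of code: {loc}")
```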

We’ve gotten better since then, haven’t we? Perhaps not.

First attempts

In 1997, well-known developer Patrick Lenz founded the first listing and announcement site for free and open-source software, freshmeat.net. It was meant to be the guide to open-source programs. But freshmeat never lived up to its promise.

Due to changes in direction and ownership, freshmeat sputtered into failure. In the end, no one really had a clear idea of how to monetize the site.

Freshmeat was eventually rebranded as Freecode. This site’s mission was to maintain the “Web’s largest index of Linux, Unix, and cross-platform software and mobile applications.” It explained, “Each entry provides a description of the software, links to download it and to obtain more information, and a history of the project’s releases, so readers can keep up-to-date on the latest developments.” You could think of Freecode as an open-source counterpart to Yahoo’s original incarnation as a directory of the web.

Like its predecessor, the site slowly came to a stop – and then halted entirely. Its owners declared, “The Freecode site has been moved to a static state effective June 18, 2014, due to low traffic levels and so that folks will focus on more useful endeavors than site upkeep.”

Open Source Initiative co-founder Eric S. Raymond tried to revive Freecode. Raymond believed no other site was “quite so good for getting a cross-sectional view of what the open-source world is doing.” From his perspective, Freecode had numerous but fixable problems. His proposed remedies included cutting down on human moderation, focusing exclusively on open-source software, and paring the site down to its essential features. Alas, his efforts came to little. The site remains a static antique.

Case in point: GitHub

Today, GitHub Stars is presented as a quick, easy way to evaluate the virtues of an open-source program. GitHub, the biggest host of open-source Git repositories, describes its star system as simply a way to keep track of projects people find interesting. However, many developers use the stars as a way of boosting their reputations.

Theoretically, the more stars, the better the software. Or is it?
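The metric is certainly easy to collect, which is part of its seductiveness. A minimal sketch against GitHub’s public REST API (the repository names are arbitrary examples, and unauthenticated calls are rate-limited) shows that a star count arrives alongside more telling signals, such as open issues and the date of the last push:

```python
# Sketch: pull a few repo-level metrics from GitHub's public REST API.
# Repo names are arbitrary examples; unauthenticated requests are rate-limited.
import requests

def repo_metrics(owner: str, repo: str) -> dict:
    resp = requests.get(f"https://api.github.com/repos/{owner}/{repo}")
    resp.raise_for_status()
    data = resp.json()
    return {
        "stars": data["stargazers_count"],
        "forks": data["forks_count"],
        "open_issues": data["open_issues_count"],  # includes open pull requests
        "last_push": data["pushed_at"],
    }

for owner, repo in [("kubernetes", "kubernetes"), ("apache", "openoffice")]:
    print(f"{owner}/{repo}: {repo_metrics(owner, repo)}")
```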

Solomon Hykes, Docker’s co-founder, strongly disagrees. “GitHub stars are a scam. This bullshit metric is so pervasive, and GitHub’s chokehold on the open-source community so complete, that maintainers have to distort their workflows to fit the ‘GitHub model’ or risk being publicly shamed by industry analysts. What a disgrace.”

Hykes isn’t the only one who views GitHub stars as a misleading flop. Fintan Ryan, a Gartner senior director, thinks stars are just a game that conflates marketing with the code that’s actually on GitHub. And Ralph Squillace, a Microsoft project manager for open-source development on Azure, tweeted, “In my opinion and for Microsoft project [engineering] and management they are worthless. [But] There are always people who seize on them anyway.”

And that’s the problem. People love easy metrics. They want a quick, one-glance answer to their coding (and other) problems. Spoiler alert: There is no such thing.

Opening the code: Open Hub

Still, some sites and services provide valuable insights into a project’s overall health. Many of them, such as GitHub Insights, a commercial standalone application for GitHub One customers, are restricted to a project’s development managers rather than outsiders looking in.

One service that does give everyone a look into open-source projects is Synopsys’s Black Duck Open Hub, formerly Ohloh. By searching on the site, anyone can dive deep into major projects to see who’s doing what in a given open-source application’s version control system.

While Open Hub doesn’t give you simple answers about a project, it does share such important information as security vulnerabilities across the last ten versions, the number of commits per month, and the number of recently active developers. You can determine a project’s most active developers, too. Armed with this data, you can come to your own conclusions about a particular project’s worthiness.

Simple tools for complex answers: Google Trends

Let’s say you want to know which open-source program is the most popular for a given need. Thanks to Google Trends, comparing projects is easier than you might think.

For instance, one eternal open-source debate is whether OpenOffice or LibreOffice is the better office suite. Their relative quality may remain open to question, but by comparing searches for both programs, you can see in a jiffy that LibreOffice shows up in searches, on average, more than twice as often as OpenOffice.
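If you’d rather script the comparison than click around, here’s a minimal sketch using pytrends, an unofficial Python client for Google Trends; the keywords and five-year window are just this example’s choices:

```python
# Sketch: compare search interest with pytrends, an unofficial Python
# client for Google Trends (pip install pytrends).
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(["LibreOffice", "OpenOffice"], timeframe="today 5-y")

interest = pytrends.interest_over_time()  # DataFrame, one column per keyword
ratio = interest["LibreOffice"].mean() / interest["OpenOffice"].mean()
print(f"LibreOffice is searched ~{ratio:.1f}x as often as OpenOffice")
```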

That only scratches the surface. You can also use the service’s time range option to find that LibreOffice has been more popular than OpenOffice since late 2016. (Which, as it happens, is a few months after I declared LibreOffice the victor between the two.)

Or, to pick a more enterprise-oriented search: when it comes to container orchestration, we all know Kubernetes is the winner. But when did it become clear that it was going to beat Docker Swarm Mode and Mesosphere (now D2IQ)? One Google Trends search later, and you see that by the spring of 2016, Kubernetes had already grabbed a large lead in container orchestration mindshare.

Finally, Google Trends is also good when you want to look at what hasn’t happened yet. For example, “breakouts” are sudden rises of traffic on a specific search term. By doing a general search on “technology,” I found the top breakout term was one I’d never even heard of—although I make my living from tracking what’s hot in tech: Transportation as a Service (TaaS). Keep an eye on it; interest is high.
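You can hunt for breakouts programmatically, too. Another sketch with the unofficial pytrends client; Google surfaces breakout terms in the “rising” related-queries list, where extremely large growth values correspond to the “Breakout” label on the website:

```python
# Sketch: surface "rising" related queries, where Google Trends flags breakouts.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(["technology"])

related = pytrends.related_queries()      # dict: keyword -> {'top', 'rising'}
rising = related["technology"]["rising"]  # DataFrame of fast-growing queries
print(rising.head(10))                    # huge growth values = breakout terms
```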

Moving to a new model of judging open-source software

It would be great if there were a genuinely useful rating system that would help people discover excellent but less-visible open-source projects. But an easy way to work out which of the tens of thousands of projects are the vital, important ones – a software Yelp, if you will – doesn’t exist. It may never come to be.

Hope springs eternal. Brian Proffitt, Red Hat’s Open Source Program Office (OSPO) manager, is working with others on a new project to make it easy to evaluate open-source projects: Project CHAOSS. This Linux Foundation project is devoted to creating analytics and metrics that help define open-source community health.

“Since I started working at Red Hat, figuring out a way to quantifiably measure community health has been a priority for my team,” Proffitt explains. “At first, we took a mega-dashboard approach, using the tools provided by the Spanish vendor Bitergia. They use open-source projects like GrimoireLab to put together amazing dashboards that give daily updated reports on how our stewarded projects were doing.”

However, like the other efforts we’ve been discussing, this approach ran into trouble. This time, the problem was too much information. “Most of our community managers did not have the time to analyze this firehose of data and make meaningful decisions based on this,” Proffitt says.

So Red Hat also began working on Project CHAOSS, which pulled together GrimoireLab and similar programs, such as Augur and Red Hat’s own Prospector. CHAOSS has two sides: people working on software applications, and “others working on what things could be defined as part of a community’s health,” Proffitt says. “In the past, the argument was always, ‘Well, my community is different because of X, so you can’t judge us by the same standards as Y.’”

Proffitt and CHAOSS disagree with such an argument. “Look at a small hamlet in rural China. Look at a mega-city in Europe. Both communities, but vastly different, right? Except they both have to have ways to deal with water, food, sanitation. They both have to have ways of getting around. Yes, in the hamlet it could be bicycles on a dirt road, and in the big city, it will be streets, highways, trams, subways. But at a fundamental level, the health of the community only depends on if the appropriate level of transportation is available, not what kind.”

While the focus is on community, CHAOSS’s metrics can be used by project managers and maintainers as well. Its metrics include what kinds of contributions are being made, when the contributions are made, and who’s making them. All of these are vital to understanding the overall health of a project.
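Those three questions (the what, the when, and the who) can be roughly approximated for any project straight from its Git history. A minimal sketch follows; CHAOSS tooling such as GrimoireLab and Augur goes far deeper, and the repository path here is a placeholder:

```python
# Sketch: count commits per author per month from a local Git checkout.
# This only approximates the "who" and "when" of CHAOSS-style metrics;
# tools like GrimoireLab and Augur go much deeper. Repo path is a placeholder.
import subprocess
from collections import Counter

def commits_by_author_month(repo_path: str) -> Counter:
    log = subprocess.run(
        ["git", "-C", repo_path, "log",
         "--pretty=format:%an|%ad", "--date=format:%Y-%m"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(tuple(line.split("|", 1)) for line in log.splitlines())

for (author, month), n in commits_by_author_month(".").most_common(10):
    print(f"{month}  {author}: {n} commits")
```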

CHAOSS is still a work in progress. Its official release is scheduled for February 2021.

In a related development, starting in late 2019, Bitergia began work on a new upstream project, Cauldron, which takes the data the company had been presenting in dashboards and makes it available as a service. Proffitt sees great potential here. “It’s really very interesting, and pretty much dead simple: You can point it at a git repo and it will pull out stats about contributors, time to close issues, contribution levels… the works.”
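To give a feel for the “time to close issues” stat Proffitt mentions, here’s a minimal sketch against GitHub’s public REST API. This is not how Cauldron itself is implemented; the repository name and the 100-issue sample are illustrative:

```python
# Sketch: median time-to-close for a repo's most recently closed issues,
# via GitHub's public REST API. Illustrates the kind of stat Cauldron
# reports; not Cauldron's implementation. Repo name is an example.
from datetime import datetime
from statistics import median
import requests

def median_days_to_close(owner: str, repo: str) -> float:
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/issues",
        params={"state": "closed", "per_page": 100},
    )
    resp.raise_for_status()
    days = []
    for issue in resp.json():
        if "pull_request" in issue:  # the issues endpoint also returns PRs
            continue
        opened = datetime.fromisoformat(issue["created_at"].replace("Z", "+00:00"))
        closed = datetime.fromisoformat(issue["closed_at"].replace("Z", "+00:00"))
        days.append((closed - opened).total_seconds() / 86400)
    return median(days)

print(f"median days to close: {median_days_to_close('kubernetes', 'kubernetes'):.1f}")
```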

Red Hat is already using Cauldron to provide snapshots of communities. “Red Hat’s open-source processes are known to be successful, so OSPO gets lots of questions from partners and customers about how to work with open source,” says Proffitt. Some of the questions are specific, such as, “We would like to work with Project X. Is it healthy, is it sustainable?” Proffitt explains, “If we can provide a snapshot report on key aspects of a community’s health, we can help get a conversation going.”

Ultimately, this data will be available to all, from end users to the project leads. “In fact, I hope this happens a lot, because we can refine our models more quickly,” says Proffitt. “For example, if we did find a project is deficient in some aspect—say, they seem to be missing a primary communications channel, like a mailing list—they are certainly very welcome to come back and say, ‘No, we have one, here it is!’ We can then correct our data and make a note for future reports on other projects that might be configuring their comm channel the same way. If it’s really obscured, we could suggest that maybe they could make it more obvious, so it’s a two-way conversation.”

“Quantifiable metrics is a good way to set concrete goals and plans to get things improved,” Proffitt concludes. That was the goal of the people who used LoC as their metric back in the 1960s, too. Today, thanks to a better understanding of what does and doesn’t really matter, we’re getting closer to having a real and easy way of understanding which programs are worth our time and which we can safely ignore.

This story was first published in Functionize.
