Caveats around using Libyears


Over the last few years, I've spent time working on projects like dependency-management-data, with the aim of giving organisations visibility into their dependency usage and insight into how outdated those dependencies are. That work continues upstream on Renovate and in Mend's enterprise offerings.

Along the way, I've spent time looking at some of the great metrics the CHAOSS project works on, which is where I found the Libyear metric:

Two cartoon characters talking, in a two-pane "Before" and "After". In the "Before" pane, 1: "We should really upgrade", 2: "How bad is it?", 1: "Bad", 2: "But, compared to this other project?", 1: "Ummm, worse, I think", In the "After" pane, 1: "We should really upgrade", 2: "How bad is it?", 1: "We are 120 libyears behind", 2: "But, compared to this other project?", 1: "That project is only 42 libyears behind"

Libyears gives a more tangible definition of "dependency freshness", and a more meaningful description of how outdated your dependencies are. At face value, Libyear is a really useful metric to discuss, but it comes with a few caveats to be aware of.

Renovate exposes information about Libyear calculations, but each time I discuss Libyears with someone, I end up sharing the same caveats.

Before writing official documentation about Libyears and their caveats, I wanted to flesh out my ideas on my blog first.

Breaking changes

One key thing that Libyears doesn't capture is what kind of changes an upgrade involves. This is an intentional choice by the author, and I don't fault them for it - but it's something that isn't necessarily made as clear as it could be to users of the metric, especially in organisations that assume "small number good".

For instance, given two projects with "0.5 Libyears" and "1.0 Libyears" respectively, which of the two has more work to do to upgrade?

Yes, that was a trick question, because:

  1. 0.5 Libyears: made up of one major version bump (with breaking changes) every month for the last 5 months
  2. 1.0 Libyears: made up of a single patch version bump (with no breaking changes) released 12 months ago

Looking at the number itself hides the fact that there is some nuance to take into account around what changes are included in those upgrades.

This calculation often doesn't include transitive upgrades or the indirect work required, so a major version bump (with a new package name) is very different to a major version bump where you also need to upgrade your version of Java and Spring Boot, or your Node.js version.
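To make the trick question concrete, here's a minimal sketch of the core Libyear calculation - the drift between the release date of the version you use and the release date of the newest version - applied to two hypothetical projects (the dates and scenarios are made up for illustration):

```python
from datetime import date


def libyear(current_release: date, latest_release: date) -> float:
    """Years between the release of the version in use and the newest release."""
    return max((latest_release - current_release).days, 0) / 365.25


# Project A: on a version from mid-2025; the newest release is a major
# rewrite with breaking changes -> smaller number, much more work
project_a = libyear(date(2025, 7, 1), date(2026, 1, 1))  # ≈ 0.5

# Project B: on a version from early 2025; the newest release is a
# trivial patch -> bigger number, almost no work
project_b = libyear(date(2025, 1, 1), date(2026, 1, 1))  # ≈ 1.0
```

The number alone says Project B is "worse", even though Project A is the one carrying all the breaking changes.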

Not all ecosystems are equal

A straightforward metric like Libyear can lead organisations to think they can take the Libyears calculation for different repositories and then determine which repository needs more focus on dependency upgrades. It's tempting, but even if we ignore the above note about whether breaking changes are required, there is still nuance to consider.

Consider the JavaScript ecosystem, which trends towards many small packages that update very frequently, compared with a slower-moving ecosystem like Java, which has fewer packages overall. A JavaScript project's Libyear could easily be 10x that of a comparable Java project, purely due to the larger number of dependencies and the higher rate of change.

When comparing between repositories, it's worth considering which ecosystems you're comparing, and attempting to compare each ecosystem like-for-like where possible.

If we wanted to compare these numbers between package files more effectively, we could divide the Libyears by the number of packages in the package file or ecosystem, so that when comparing a package.json with a pom.xml, the difference in dependency counts wouldn't skew the numbers as highly.

Proposal for a new metric

I'm wondering if we should perhaps build a new metric on top of Libyear which takes these caveats into account. I'm not sure what a good name for this could be, but my gut feel is that something like "Tempered Libyear" could work.

With "Tempered Libyears", we would:

  • calculate the list of dependencies between current version and latest version
  • calculate the Libyear between the current version and the next version upgrade, and
    • each time there is a 0.x version upgrade, increase that upgrade's Libyear calculation by 25% (as it could be a breaking change)
    • each time there is a major version upgrade, increase that upgrade's Libyear calculation by 100% (as it's likely a breaking change)
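As a sketch of how this could be implemented, here's a hypothetical Python version, assuming simple `major.minor(.patch)` version strings. The uplift factors are tunable parameters rather than anything standard - I've used +25% for 0.x upgrades and +100% for major upgrades, which are the multipliers the worked example below uses:

```python
from datetime import date

ZERO_X_UPLIFT = 1.25  # 0.x upgrade: could be a breaking change (+25%)
MAJOR_UPLIFT = 2.0    # major version upgrade: likely a breaking change (+100%)


def segment_multiplier(old: str, new: str) -> float:
    """Uplift factor for a single upgrade from `old` to `new`."""
    old_major, new_major = int(old.split(".")[0]), int(new.split(".")[0])
    if new_major > old_major:
        return MAJOR_UPLIFT
    if old_major == 0:
        return ZERO_X_UPLIFT
    return 1.0


def tempered_libyear(releases: list[tuple[date, str]], current: str, today: date) -> float:
    """Tempered Libyear for one dependency.

    `releases` is the dependency's release history as (date, version)
    pairs, oldest first; `current` is the version we depend on.
    """
    versions = [v for _, v in releases]
    dates = {v: d for d, v in releases}
    chain = versions[versions.index(current):]  # current version through latest
    if len(chain) == 1:
        return 0.0  # already on the newest release
    total = 0.0
    for old, new in zip(chain, chain[1:]):
        span = (dates[new] - dates[old]).days / 365.25
        total += span * segment_multiplier(old, new)
    # time elapsed since the newest release accrues untempered
    total += (today - dates[chain[-1]]).days / 365.25
    return total


# dependency "b" from the worked example below: we depend on 0.3, and
# three major versions have shipped since
b_releases = [
    (date(2026, 1, 1), "0.3"),
    (date(2026, 2, 1), "1.0"),
    (date(2026, 3, 1), "2.0"),
    (date(2026, 4, 1), "3.0"),
]
b_tempered = tempered_libyear(b_releases, "0.3", date(2026, 5, 1))
```

Note this uses day-based arithmetic, so it lands slightly under the month-based figures in the worked example.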

Both the naming and calculation could be improved, and I'll see if the CHAOSS community have any thoughts on this, or if there's anything similar out there.

Worked example

For instance, say we have two packages with the following release schedules:

| Date       | Dependency | Version |
|------------|------------|---------|
| 2026-01-01 | a          | 0.1     |
| 2026-01-01 | b          | 0.3     |
| 2026-01-15 | a          | 0.1.1   |
| 2026-02-01 | a          | 0.2     |
| 2026-02-01 | b          | 1.0     |
| 2026-02-15 | a          | 0.2.1   |
| 2026-03-01 | a          | 0.3     |
| 2026-03-01 | b          | 2.0     |
| 2026-03-15 | a          | 0.3.1   |
| 2026-04-01 | a          | 1.0     |
| 2026-04-01 | b          | 3.0     |

Let's say that we're currently depending on a at 0.1 and b at 0.3, and look at how the Libyear and Tempered Libyear change over time:

| Calculation Date | Dependency | Libyear | Tempered Libyear |
|------------------|------------|---------|------------------|
| 2026-01-01       | a          | 0       | 0                |
| 2026-01-01       | b          | 0       | 0                |
| 2026-01-15       | a          | 0.042   | 0.052            |
| 2026-02-01       | a          | 0.083   | 0.104            |
| 2026-02-01       | b          | 0.083   | 0.167            |
| 2026-02-15       | a          | 0.125   | 0.156            |
| 2026-03-01       | a          | 0.167   | 0.208            |
| 2026-03-01       | b          | 0.167   | 0.333            |
| 2026-03-15       | a          | 0.208   | 0.260            |
| 2026-04-01       | a          | 0.250   | 0.344            |
| 2026-04-01       | b          | 0.250   | 0.500            |
| 2026-05-01       | a          | 0.333   | 0.427            |
| 2026-05-01       | b          | 0.333   | 0.583            |

This would give us a total of 1.01 Tempered Libyears compared to 0.666 Libyears, indicating that there is more work associated with the updates than the pure time component suggests.

If we wanted to compare these numbers between package files more effectively, we'd then divide the Tempered Libyears by the number of packages in the package file, so that when comparing a package.json with a pom.xml, the difference in dependency counts wouldn't skew the numbers as highly.
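As a rough sketch of that normalisation (the totals and dependency counts here are made up for illustration):

```python
def libyears_per_dependency(total_libyears: float, dependency_count: int) -> float:
    """Normalise a (Tempered) Libyear total by the number of dependencies
    in the package file, so files with very different dependency counts
    can be compared like-for-like."""
    if dependency_count == 0:
        return 0.0
    return total_libyears / dependency_count


# hypothetical: the same raw total hides very different per-dependency drift
package_json = libyears_per_dependency(30.0, 300)  # 0.1 Libyears per dependency
pom_xml = libyears_per_dependency(30.0, 30)        # 1.0 Libyears per dependency
```

On raw Libyears the two files look equally outdated; per dependency, the pom.xml is drifting ten times faster.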

I agree that this is a more complex process to calculate, but I feel it provides a more realistic view of the impact of upgrade lag, and grounds a more realistic conversation about dependency freshness.

What if there aren't new releases?

Another key caveat of Libyear calculations is that they work based on new releases of a dependency. If a dependency hasn't been updated - because it's archived, abandoned, or "done" - then you'll see a Libyear of 0. That means you don't have any updates to do, but it doesn't mean there aren't issues lurking in a (potentially) unmaintained dependency.
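A Libyear of 0 therefore needs a second signal alongside it. As a rough, illustrative heuristic (the two-year threshold is arbitrary and not how any particular tool does it), you could flag dependencies whose newest release is old, regardless of whether you're on it:

```python
from datetime import date


def looks_abandoned(latest_release: date, today: date, threshold_days: int = 730) -> bool:
    """Flag a dependency whose newest release is older than the threshold,
    even when our Libyear for it is 0 (i.e. we're on that newest release)."""
    return (today - latest_release).days > threshold_days


# Libyear = 0 (we're fully up to date), but the last release was in 2021
looks_abandoned(date(2021, 6, 1), date(2026, 1, 1))  # True
looks_abandoned(date(2025, 6, 1), date(2026, 1, 1))  # False
```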

In Renovate, we describe this as Abandonments, and provide a means to warn users about this.

As with many things, there's a lot of nuance required when interpreting this data!

Written by Jamie Tanna on , and last updated on .

Content for this article is shared under the terms of the Creative Commons Attribution Non Commercial Share Alike 4.0 International, and code is shared under the Apache License 2.0.

#blogumentation #open-source #ospo.

This post was filed under articles.
