“All that work to slim down your image and use secure base images means little for security if you’re not also auto-rebuilding it for every upstream change, including detecting upstream base image digest updates.”
TL;DR
- Our open source dependencies are under attack, and our CI automation needs to evolve.
- Container images “rot” as they age, increasing their CVE count. We should look for ways to rebuild based on upstream changes that (hopefully) reduce CVEs in our images.
- Even if you use “zero CVE base images” from Chainguard or others, you still need frequent (daily) automation to check for new base image builds.
- Upstream base images (Docker Hub Official Images, Chainguard Container Images, etc.) silently rebuild for the same SemVer tag. Often, these new builds will lower the CVE count, but the tag never changes. I call these Silent Rebuilds.
- This post lays out how/why this is happening and a strategy for implementing CI workflows that adapt to this realization, along with a GitHub repository of examples.
We walked through much of this on a live stream event you can get for free.
If you want a more detailed step-by-step with assignments and real-world examples, sign up for my course wait list, where I'm working on multiple GitHub Action courses.
Chainguard sponsored my work on this topic 🥰, but this content is 100% my opinion, and they had no control over it. It's no secret I'm a fan of their images and tools, and I wanted to create a teaching tool around how you can automate the update checking of your base images. Thanks to Chainguard for supporting this project, and I hope this helps you automate more of your container image updates and keep your production CVE count low or even ZERO.
For the sake of this topic, I’ll assume these things are true for you:
- You’re using containers.
- You understand the basics of dependency checking with tools like GitHub Dependabot or Mend Renovate.
- You’re concerned about CVE scanning your images, reducing CVE count, and security update checking.
- Hopefully, you’ve already taken CVE-reduction steps like using Alpine, distroless, or Chainguard’s Wolfi as your base images (there’s a quick sketch of that pattern right after this list).
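If you haven’t taken that last step yet, here’s a minimal sketch of the pattern: build in a full-featured image, then copy the result onto a slim, low-CVE base. The image names, tags, and file layout are illustrative only (and you’d pin digests, as covered later in this post).
# Illustrative only: a multi-stage build that ships on a low-CVE base image.
# Swap the base images, files, and CMD for whatever your app actually needs.
FROM node:24.11.0 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# Chainguard's node image runs as a nonroot user with node as its entrypoint
FROM cgr.dev/chainguard/node:latest
WORKDIR /app
COPY --from=build /app /app
CMD ["index.js"]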
The Day 2 problem of images: Image CVE Rot
It’s generally understood that we can scan a system, image, or repo for publicly known vulnerabilities (CVEs), but the results are only a point in time. A week later, that scan result may worsen due to newly discovered CVEs.

Like my increasingly rusty Jeep Wrangler that I’ve had for 25 years, each day it gets older and shows more of its age. So it is with containers: every day, a container image in production has a non-zero chance of increasing its CVE count. That has always been true for any software aging on a server. The reason we’re all talking about this now is that images are awesome, and we can track and audit CVEs better than ever. Images contain only the dependencies our apps need, and nothing else (that's the goal, anyway). They are artifacts. Because of their unique SHA-256 digest, copies running on multiple servers are identical down to the bit.
Even though others and I talk about container image CVEs a lot, it’s only because it’s never been easier to ensure that a piece of software and its dependencies are adequately scanned for CVEs. Before containers, I usually witnessed security engineers scanning entire servers I built with all the apps installed, and we hoped nothing would change after the scan. That server might be in production for years before it’s replaced, and it ALWAYS had CVEs. Not once did I ever see a scan come back truly zero. There was just too much software already installed to isolate anything reliably.
Containers changed all that for the better in a huge way. But the Day 2 problem of aging code still exists. Let’s call this a type of “Image Rot,” specifically “Image CVE Rot.” As an image build ages, even a zero-CVE image will start to grow its CVE count. They age more like my Jeep rather than a fine wine. They get rusty.
We know we should update our dependencies, rebuild our images, and redeploy more often. We’re racing against the rot. But, most of us aren’t crazy enough to always rebuild and redeploy daily “just in case it helps reduce CVEs.” That feels way too brute-force and aggressive.
The good news is we’re finally in a place where we can track most dependencies, check for updates daily, and automate builds and deployments in a more “evidence-based” way than daily brute-force builds.
Even if you use Chainguard’s Container Images, which usually have zero CVEs, you will have Image CVE Rot if you don’t automate update checks. Unlike most other public base images, Chainguard rebuilds theirs daily. If there’s a difference in a build, they’ll publish it as a new digest under the same tags for that image. Assuming you’re checking for those Silent Rebuilds as well, this means you’ll likely rebuild and deploy more when using a “managed base image.” To be more secure and reduce your production CVEs, you’ll have to deploy more often.
Container Rot is the evil sibling of Image Rot

You may have realized that even if you’re checking for updates daily and rebuilding those images quickly, you still have to get them deployed. Without automated and timely deployments, all the update checks are wasted. I personally recommend (and teach) a deploy workflow that automates all steps except a single human-approved PR. That’s achievable for most teams. I’ll leave it up to you to decide if you fully automate deploys (everything after approving the dependency update PR).
This post isn’t about deployment automation, but I do think it’s time for us to define these types of software rot:
Image CVE Rot: The natural aging of a specific container image, increasing the likelihood that it now contains more CVEs than when it was initially built. As it ages, it “rots.” This is something we should strive to avoid by rebuilding the image with either a brute-force schedule (daily, weekly) or when dependency changes are detected (better). Image Rot can be tracked in the delta of days since last build (if using a brute-force update method) or a CVE delta (today’s CVE count minus build-date CVE count).
Container CVE Rot: Like the rotting of an image over time, but unique in that it applies to a running instance of an image. The less ephemeral the design, the more likely it is that long-running containers exist. It’s much more work to ensure every image rebuild is redeployed quickly (same day) so that the container rot state matches the image rot. In a perfect solution, every rebuild of a release image is immediately redeployed. Container rot can be tracked as the delta of days since image build, or as a CVE delta since image build. We care about the deltas since image build, not since container creation.
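As a concrete (and hedged) example, here’s a minimal sketch of how you might measure that CVE delta on a schedule. Everything specific here is an assumption for illustration: the image name, the baseline count, and the choice of Grype as the scanner.
# Sketch only: a daily job that measures CVE Rot as a delta since build time.
# Assumes grype is installed in an earlier (omitted) step, IMAGE is whatever
# you actually run in production, and BASELINE is the CVE count recorded when
# that image was built.
name: cve-rot-check
on:
  schedule:
    - cron: "0 6 * * *" # daily
jobs:
  cve-delta:
    runs-on: ubuntu-latest
    steps:
      - name: Scan the image currently in production
        env:
          IMAGE: ghcr.io/your-org/your-app:1.4.2 # hypothetical image reference
          BASELINE: "0"                          # hypothetical build-time CVE count
        run: |
          CURRENT=$(grype "$IMAGE" -o json | jq '.matches | length')
          echo "CVE delta since build: $((CURRENT - BASELINE))"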
The evolution of container image pinning
When we started learning Dockerfiles, we were told to pin to versions, such as:
FROM node:24.11.0
The more complete our SemVer tag is, the “safer” we become, but the more we’ll need to update the version as it changes upstream. Those more frequent changes mean more work, testing, and risk. I’ve seen some teams just use node:24 to avoid that work, but I find that subpar. I consider it a “DevOps smell” of a team that would rather add risk to the project than add automation.
This is part of what I call the “DevOps change-rate dilemma.” DevOps metrics track this as “Change failure rate” and “Lead time for changes,” which can seem to be at odds with each other. We strive for predictable and deterministic updates that can be clearly controlled, but we (should) also want to deploy updates as fast as possible (particularly when it’s a security fix). You might think of it as a pendulum where we have full change control on one side (tracked as a change failure rate) and fast changes on the other side (tracked as lead time for changes), and never the two shall meet. Moving toward one might have you thinking you're hurting the other.

I used to think of it this way, but I don’t anymore. I think it’s a false choice. We can have both, and I see it as a path. I start at full control and safety, and automate my way to fast. The moment something goes wrong because I went too fast (likely due to not enough testing, linting, or automated “checks”), I slow down, improve my automation, and keep pushing faster.

Back to the SemVer dilemma. Good News! You can solve the version update problem today by using Dependabot or Renovate to check for new SemVer tags and create Pull Requests for your approval. It’s the best of both “controlled changes” and “automated updates.” That PR should also kick off the same type of testing you would do for your app versions. Hopefully it’s automated.
At some point, you may have learned about pinning to the image digest, which guarantees you’ll get the exact same image every time. Tags are mutable and can be reused. Digests are not. This gives us even more control of our changes.
You might have started pinning like this:
FROM node@sha256:e5bbac0e9b8a6e3b96a86a82bbbcf4c533a879694fd613ed616bae5116f6f243
# node 24.11.0
But that’s hard for humans to read, and only useful if we have automation to update that digest when the version changes. Luckily, Docker supports keeping the tag in there. Note that once we add the digest, the tag is only for documentation, as the container runtime will ignore the tag and always pull via the digest hash.
FROM node:24.11.0@sha256:e5bbac0e9b8a6e3b96a86a82bbbcf4c533a879694fd613ed616bae5116f6f243
Both Dependabot and Renovate support that format, and you can configure them to create PRs when the version changes. The PR will be a single-line diff.
- FROM node:24.11.0@sha256:e5bbac0e9b8a6e3b96a86a82bbbcf4c533a879694fd613ed616bae5116f6f243
+ FROM node:24.11.1@sha256:aa648b387728c25f81ff811799bbf8de39df66d7e2d9b3ab55cc6300cb9175d9
The largely-unknown problem of Silent Rebuilds

Turns out, we’re not done. We pinned to the digest for a specific tag, and you might think that the digest won’t change for that tag, ever. We know that tags are mutable, but does that node:24.11.0 image really ever change? Won’t it only change when they release node:24.11.1?
I used to think that, until I started noticing PR diffs like this from Dependabot:
- FROM nginx:1.27.4@sha256:124b44bfc9ccd1f3cedf4b592d4d1e8bddb78b51ec2ed5056c52d3692baebc19
+ FROM nginx:1.27.4@sha256:09369da6b10306312cd908661320086bf87fbae1b6b0c49a1f50ba531fef2eab
Notice that the version didn’t change, but the digest did. The new image also had fewer CVEs in it, even though the nginx version was the same. There was no (easy) way to know the reason for the digest change, but it happened, and I assume it was for a good reason (if a lower CVE count isn’t reason enough on its own).
This is what I call a Silent Rebuild.
Let me provide a more complete definition:
Silent Rebuilds: The act of building a container image and reusing some or all tags. The only way to tell the difference is the new creation date and the new digest. Since there’s no new tag to track, the updated image was “silent” in its rebuild. Automation is typically required to track digest/date changes for a given tag (latest, 1.3, 1.3.2, etc.) since humans don’t memorize SHA hashes well. It’s normal for official images to silently rebuild for a given tag for several reasons, including updated dependencies, CVE fixes, or simply a repo update that didn’t bump the app’s SemVer. It’s common to see popular official images rebuilt 2-5+ times between tag changes.
Humans wouldn’t typically notice these changes, and our image registries don’t highlight this change or let us easily see a history of digests for a specific tag. The lack of history is a missing feature, I think.
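Until registries add that feature, you can keep your own history with a little automation. Here’s a minimal sketch of a scheduled job that records the current upstream digest for a tag each day; it assumes the crane CLI gets installed in an earlier (omitted) step, and it leaves committing the updated file back to the repo out of scope.
# Sketch only: append today's digest for one upstream tag, de-duplicated,
# so the file becomes a history of Silent Rebuilds for that tag.
name: digest-history
on:
  schedule:
    - cron: "0 5 * * *" # daily
jobs:
  record:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Record today's digest for nginx:1.27.4
        run: |
          mkdir -p history
          crane digest nginx:1.27.4 >> history/nginx-1.27.4.txt
          sort -u -o history/nginx-1.27.4.txt history/nginx-1.27.4.txt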
My friend Eric Smalling, a Chainguard engineer and Docker pro, and I spent a week tracking down the digest history of tags for a few popular official images, and all of them had 2-6 Silent Rebuilds per tag. We stored them here to document what we’re seeing. Here’s an example of 5 nginx versions showing 20 unique digests:

In the case of Docker Hub’s Official Images, the changes to these images are largely managed by a team of volunteers that run GitHub repositories full of Dockerfiles for each open source project (here's the repo for the nginx Dockerfiles). They update those repositories all the time, and not just for version changes of the app inside the image. Sometimes there are changes due to dependencies, build improvements, or just a simple Dockerfile or README tweak. Image build workflows typically kick off for any file changed in the repo, and the image doesn’t contain the reason for its rebuild.
The bottom line is that we should know and care about these Silent Rebuilds. They work in our favor and often have updated build dependencies, resulting in fewer CVEs. We should want to track these Silent Rebuilds and deploy them just like a SemVer tag change.
Now that you know about Silent Rebuilds, you’re burdened with knowing that you might have images somewhere that are older than the latest upstream digest for that tag.
Good News! If you set your Dependabot config or Renovate workflow to kick off daily, and they are set to pin digests, then you’re in luck; you’re already covered! You’ll see PRs to update your base image digests (real PR example here), which will give you multiple benefits, including “improved” base images that don’t change the underlying app version, and it’ll reduce your Image CVE Rot.
If you’re not able to use those tools yet, Chainguard created Digestabot, a purpose-fit utility that’s focused on updating any image digest it finds in a GitHub repo.
Integrate Silent Rebuild checks into your CI automation
CI systems are as unique as the code they build. Sometimes, trying to compare how two orgs design their workflows and automations is like comparing works of art, but there are some common essentials I focus on as a baseline for what every team should be doing. Part of that is what I’m calling “Bret’s Dependency Update Framework”:

“Keeping Base Image Current” is a workflow goal that includes monitoring for Silent Rebuilds. If you’re using Dependabot/Renovate and checking for app language dependency updates as well as Docker tag & digest updates, this will look like a single workflow.
Implementing Silent Rebuild checks with Dependabot
If you’re using Dependabot on GitHub, the three “Daily Cron Workflows” I listed in the image above are a single config file. Since Dependabot is built into GitHub, no Actions workflow is needed for this to run daily. This is how the Docker config might look in .github/dependabot.yml
# Add this section for each directory with a Dockerfile, Compose file, K8s manifest, or Helm chart
# For Dockerfiles, it checks any FROM image
# For YAML (K8s, Helm, Compose) it checks any image
- package-ecosystem: "docker"
  directory: "/"
  schedule:
    interval: "daily"
  commit-message:
    # Prefix all commit messages with "[docker] "
    prefix: "[docker] "
  # don't create PRs for major SemVer updates, only minor and patch level
  ignore:
    - dependency-name: "*"
      update-types: ["version-update:semver-major"]
      # if you only want this for security updates, then avoid major+minor:
      # update-types: ["version-update:semver-major","version-update:semver-minor"]
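Note that the snippet above is a fragment: it lives under the top-level updates: key, alongside whatever app ecosystems you already check, which is how the “single workflow” idea from earlier comes together in one file. A minimal complete file might look something like this (npm is just a stand-in for whatever your app uses):
version: 2
updates:
  # app language dependencies (npm here is only an example ecosystem)
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "daily"
  # base images: Dockerfile FROM lines, tags, and digests (detailed above)
  - package-ecosystem: "docker"
    directory: "/"
    schedule:
      interval: "daily"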
Implementing Silent Rebuild checks with Renovate
My example repository for this has a Renovate config example at .github/renovate.json5 for Dockerfile update checks, and it enables digest pinning and the other options similar to Dependabot above (but it’s much longer and harder to read, so find it in the repo.)
Avoid Burnout: Split your workflows into security and feature updates
It’s not lost on me that for some, this will create a lot of new PRs, and for some, it will create a not-insignificant amount of human toil. If you're just starting out with all these auto-update ideas… Pro Tip: go slow to avoid burnout or pushback from teammates. One way to do this is to split security updates (SemVer patch level) from feature updates (SemVer major/minor level). Create a “security set” of pipeline workflows that just look for security-related updates that fix CVEs. Implement those first as weekly jobs. Then, once you’re able to handle the work, try daily. Just the act of enabling these types of update checks on older/bigger projects can have a “system shock” effect on your workload and team. Be mindful of that shock potential 😉. Later, you can create a “feature set” of similar workflows that add major/minor SemVer updates to the mix, and only run them weekly or monthly.
This type of split workflow is possible in Dependabot/Renovate for anything that follows SemVer, but it’s not possible for digest updates within the same image tag. The Silent Rebuild Strikes Back! Because there’s no SemVer change to filter on, we’ll just have to accept that if a digest check finds a new image, we treat it the same, regardless of whether the change was significant.
Or do we?
Advanced: Add a CVE diff to your Silent Rebuild workflow to avoid deploying trivial changes
An example workflow that I haven’t built yet would add a step, after the updated-digest PR is created, that posts the before/after CVE counts of the base image in the PR. It should be possible to post those before/after results as a PR comment, and to label (or even close) the PR if the CVE count isn’t improved. If you’re primarily concerned about CVEs in Silent Rebuilds (like I am), then why bother deploying an updated image if it didn’t change the app version or provide CVE benefits?
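Something like this sketch is what I have in mind. It’s hypothetical and untested: it assumes grype is installed in an earlier (omitted) step, uses the gh CLI that’s preinstalled on GitHub-hosted runners, and hard-codes the old/new base image references (reusing the nginx digests from earlier in this post) where a real workflow would parse them out of the PR’s Dockerfile diff.
# Rough sketch only, not a finished workflow.
name: base-image-cve-diff
on:
  pull_request:
    paths: ["**/Dockerfile"]
permissions:
  pull-requests: write
jobs:
  cve-diff:
    runs-on: ubuntu-latest
    steps:
      - name: Comment the before/after base image CVE count on the PR
        env:
          GH_TOKEN: ${{ github.token }}
          # Hypothetical: a real workflow would extract these from the PR diff
          OLD_IMAGE: nginx:1.27.4@sha256:124b44bfc9ccd1f3cedf4b592d4d1e8bddb78b51ec2ed5056c52d3692baebc19
          NEW_IMAGE: nginx:1.27.4@sha256:09369da6b10306312cd908661320086bf87fbae1b6b0c49a1f50ba531fef2eab
        run: |
          OLD=$(grype "$OLD_IMAGE" -o json | jq '.matches | length')
          NEW=$(grype "$NEW_IMAGE" -o json | jq '.matches | length')
          gh pr comment ${{ github.event.pull_request.number }} \
            -R ${{ github.repository }} \
            --body "Base image CVE count: $OLD -> $NEW"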
Final thoughts: even with these workflows, stable builds are hard
There are still major pain points in keeping this all working. I’d be lying if I said ramping up the number of dependency-update PRs was without its flaws (but I still think it’s worth it for the security benefits). Here are some highlights of “the suck” that still exists in the ecosystem:
- Package managers still drop old packages, breaking your builds. This is true of apt, pip, and other “legacy package managers” (as I call them). You can try to pin things in your Dockerfile, but they will always eventually break. The current options are to avoid languages that use those types of package managers (generally, languages that existed before the invention of the cloud in 2007), or to try a package cache to help avoid breaking builds, though caches are usually painful to manage and can increase your CVE count.
- Many dependencies have dependencies of their own, and those are sometimes not pinned. This results in your builds pulling in changed dependencies that you didn’t expect. This is very package-manager specific, so it would benefit you to know the internals of how your package managers resolve dependencies. For example, do you know how Node’s npm and pnpm differ? What about postinstall scripts or git repo dependencies? OK, I’m getting heartburn just thinking about it. Hopefully, your testing is good enough to catch 90% of issues, but even if you catch those issues and it fails a PR build… now you’re saddled with the troubleshooting cost.
I point these out because in a different post, I would discuss how language and package manager choices significantly affect your build and deployment complexity, speed, reliability, and security. I lean towards modern languages like Go, Rust, and TypeScript because they generally solve today’s problems better than 30-year-old languages designed for a different era. Sometimes, there’s only so much you can do with an aging language, framework, or package manager, so Your Mileage May Vary when designing a fast-paced update framework like I’ve spelled out here.
More questions or feedback? Let’s chat on Discord or Bluesky.