# Improved Docker Container Image Lifecycle

As part of #10349 to improve our docker container security and sustainability, we need to improve the container image lifecycle. Currently, our container definitions are stable but rarely updated, with some of the definitions dating back several years. We have no mechanism to ensure that all container images we use contain the latest OS patches and CVE fixes. One of the main points of this proposal is to ensure that the containers are updated regularly, accepting servicing updates from the OS on a regular basis. The major business goals of this work are to make sure that:

- Our container images are rebuilt regularly and contain the latest underlying OS patches and CVE fixes
- There is a mechanism for updating the docker containers used by product teams so that they are always on the latest version of each container image
- There is a process, with supporting tools, for identifying and removing images that are out of date
- There is a process and tools to delete old container images (older than 3-6 months) from MCR
- All images used in the building and testing of .NET use Microsoft-approved base images, either Mariner where appropriate, or [Microsoft Artifact Registry-approved images](https://eng.ms/docs/more/containers-secure-supply-chain/approved-images) where Mariner is insufficient

## Stakeholders

- .NET Core Engineering
- .NET Acquisition and Deployment
- .NET Product teams

## Risks

- Will the new implementation of any existing functionality cause breaking changes for existing consumers?

The major risk in this portion of the epic is finding and updating all container usages by product teams, and making sure that moving them to the latest versions of the container images doesn't break their builds/tests because of missing artifacts. Our goal is to use docker tags to label the latest known good of each container image, and to replace usages of specific docker image tags with a `<os>-<version>-<other-identifying-info>-latest` tag. That way, much like with helix images, builds and tests will pick up a new latest version automatically when we deploy one. In the transition to latest images, we may find that older versions of a container had different versions of artifacts installed, which could affect builds and tests. We will need to be prepared to help product teams identify these issues and work through them.
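
As a concrete illustration, here is a minimal sketch of the proposed floating-tag format; the helper name, the tag components, and the example registry path below are hypothetical placeholders, not a finalized scheme:

```python
# Hypothetical sketch of the proposed '<os>-<version>-<other-identifying-info>-latest'
# tag format; names and the example registry path are illustrative only.
def floating_tag(os_name: str, os_version: str, variant: str) -> str:
    """Build a floating tag that always points at the latest known good image."""
    return f"{os_name}-{os_version}-{variant}-latest"

# A build pinned to a dated tag such as
#   mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-18.04-coredeps-20220101
# would instead reference
#   mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-18.04-coredeps-latest
print(floating_tag("ubuntu", "18.04", "coredeps"))  # ubuntu-18.04-coredeps-latest
```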

**Member:**

The Matrix of Truth (MoT) code now contains some heuristics to derive the OperatingSystemId (the unified way to connect OS definitions between MoT, OSOB, and RIDs) for docker environments. The OperatingSystemId definitions can be found in the `helix-machines` repo in `os-definitions.json`.

When designing the new tagging scheme, please make sure that it's possible to obtain the OperatingSystemId for our containers in some algorithmic way, rather than via the currently used heuristics, which might not be 100% accurate in some cases. To be clear, I'm not proposing that the OperatingSystemId be used in the new tagging scheme directly, but I'm raising this for awareness, as this is the last remaining issue in unifying the concept of operating system identification across our infrastructure.
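
For illustration, deriving the id from tag components might look something like this sketch; the tag layout and the id shape here are placeholders, not a proposal:

```python
import re

# Sketch only: assumes a hypothetical '<os>-<version>-<variant>-latest'
# tag layout and an illustrative OperatingSystemId shape.
TAG_PATTERN = re.compile(r"^(?P<os>[a-z]+)-(?P<version>[\d.]+)-(?P<variant>.+)-latest$")

def operating_system_id(tag: str) -> str:
    match = TAG_PATTERN.match(tag)
    if match is None:
        raise ValueError(f"tag does not follow the expected layout: {tag}")
    # Derived directly from the tag components instead of via heuristics.
    return f"{match['os']}-{match['version']}"

print(operating_system_id("ubuntu-20.04-coredeps-latest"))  # ubuntu-20.04
```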

**Member:**

In what way is the RID used from this data? I see Mariner listed there with a RID of `mariner.1-x64`, but there are no RIDs defined for Mariner: https://github.com/dotnet/runtime/blob/main/src/libraries/Microsoft.NETCore.Platforms/src/runtime.json.

cc

**Member:**

I think having the MoT OperatingSystemId as a part of the docker tag wouldn't be a bad idea (`Linux-Ubuntu-20.04-AMD64--latest`, for example).
The RID isn't currently used for anything; it is there for informational purposes.
We are aware that there is no RID for Mariner; we should probably fix that. I'm not seeing anything in dotnet/runtime#65566 that would tell us what to put there.

**Member:**

It doesn't sound like there will be a RID for Mariner. There's no functional need.

**Member:**

Let me share a bit of context here. When we started designing the Matrix of Truth, it soon became apparent that we need a unified concept for identifying a particular version of an operating system, so that the notion of an OS as used by PMs (to define which OS versions are supported by each version of the product) could be unified with our internal infrastructure and tooling, such as OSOB. Even though RIDs aren't directly an OS description (but rather a way of expressing the target platforms where an application runs), it was clear that they are closely related to our OS definitions and that we should think about how to incorporate them into our model. From discussion with @ericstj and @eerhardt, we've learned that each version of a given OS has exactly one RID that can be thought of as a "primary RID" for that OS version. This RID is what is used in the os-definitions.json mentioned above. Having said that, I don't think these values are currently used anywhere besides our MoT PowerBI reports.

The point about the non-existent RID for Mariner is of course valid. @ericstj, would you know if there are plans for introducing RIDs for Mariner, and if not, what RID would correspond to the primary RID for this platform?

**Member:**

Here's an issue for that: dotnet/runtime#65566
Folks are proposing removing the concept of the distro/version/arch specific RIDs: dotnet/designs#260

The RID happens to be useful for you today in its current form. It may change, but that doesn't mean you can't continue to use the algorithm that the host has for identifying a RID. It's really just an algorithm that encodes a number of significant machine characteristics into a string. If the host changes that algorithm, you can maintain your own copy of one that works for you. Even the host's algorithm requires regular maintenance to ensure it continues to produce unique and meaningful strings per distro.
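
In that spirit, a self-maintained encoding could be as small as the following sketch (the shape is illustrative, not the host's actual implementation):

```python
# Illustrative only: a RID-style string is just significant machine
# characteristics joined deterministically, e.g. 'ubuntu.20.04-x64'.
def rid(distro: str, version: str, arch: str) -> str:
    return f"{distro}.{version}-{arch}"

print(rid("ubuntu", "20.04", "x64"))  # ubuntu.20.04-x64
```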

**Member:**

Thanks for the clarification, Eric!


- What are your assumptions?

Assumptions include:
- The Matrix of Truth work will enable us to identify all pipelines and branches that are using docker containers and which images they are using
- We will be able to extend the existing publishing infrastructure to also identify images that are due for removal
- All of our existing base images can be replaced with MAR-approved images

**Member:**

I see at least two base images today that aren't approved - opensuse and raspbian.

**Contributor (author):**

There is also the fact that they are going to remove the Alpine images (which I just noticed while scrolling through the website). They are suggesting that folks running on Alpine move to Mariner.

- Most of the official build that is currently done in docker containers can be done on Mariner
- MAR-approved images are updated with OS patches and CVE fixes

- What are your unknowns?

Unknowns include:
- How will we identify the LKG (last known good) for each docker image?
- What testing is currently in place for docker images, so that we can have confidence that updating the `latest` image will not break product teams?
- What is the rollback story for the `latest` tagging scheme?
- If the MAR-approved images are not updated on a regular basis, how do we apply OS patches and CVE fixes to the base operating systems?

- What dependencies will this epic/feature(s) have?

This feature will depend heavily on MAR-approved images (and whatever updating scheme they have for updating base images), as well as the existing functionality for building and publishing our docker container images. We will want to expand the existing functionality to allow us to 1) identify the last known good of each docker image and 2) tag that LKG with a descriptive `latest` tag.
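
A minimal sketch of what that LKG selection could look like, assuming each published image build records a validation result (all names here are hypothetical):

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical model: the newest build whose validation passed is the LKG
# that would receive the descriptive 'latest' tag.
@dataclass
class ImageBuild:
    digest: str
    published: datetime
    validation_passed: bool

def last_known_good(builds: list[ImageBuild]) -> ImageBuild:
    passing = [b for b in builds if b.validation_passed]
    if not passing:
        raise RuntimeError("no passing build to promote")
    return max(passing, key=lambda b: b.published)
```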

## Serviceability of Feature

### Rollout and Deployment

As part of this work, we will need to implement a rollout story for the new tagging feature. We do not want every published image to immediately be tagged as `latest`. In fact, we may want to implement two different tags: `latest` and `staging`. In this scheme, we would branch the `dotnet-buildtools-prereqs-docker` repo so that we have a production branch. Every image published from main would be tagged `staging`, which could then be used in testing, much like the images in our Helix -Int pool. This tag would be used for identifying issues ahead of time so that when we roll out, we will be more confident that the images we are tagging as `latest` will be safe for our customers. The rollout would be performed on a weekly basis, much like our helix-service, helix-machines, and arcade-services rollouts. We roll all of the known good changes in the main branch to production and publish those images with the `latest` tag. This would allow us to reuse the same logic for both staging and latest.
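
Mechanically, the retagging step such a rollout could perform might look like the sketch below; the repository and tag names are placeholders, and the real promotion would run through the production-branch publish pipeline:

```python
import subprocess

# Hedged sketch: promote a validated staging image to 'latest' by retagging.
def promote(repo: str, variant: str) -> None:
    staging = f"{repo}:{variant}-staging"
    latest = f"{repo}:{variant}-latest"
    subprocess.run(["docker", "pull", staging], check=True)
    subprocess.run(["docker", "tag", staging, latest], check=True)
    subprocess.run(["docker", "push", latest], check=True)
```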

**Member:**

I see the potential for some UX issues here. There are two different but related scenarios.

  1. I am a developer who is building a new image variant and wants to consume it in the product build. When the new image variant is built for the first time, will a `latest` tag exist right away, or will only `staging` exist? As a developer, I want to be able to check in my code using `latest` after I validate my changes; I don't want to have to check in using `staging` and then remember to make a second change to update it to `latest` once available.

  2. I am a developer making a change to an existing image variant - e.g. adding a new component. Like the previous scenario, if I validate the changes, I want to be able to commit changes that take a dependency on the new component right away. In this scenario, maybe you are envisioning that if a new component is added, it becomes a new image variant?

**Contributor (author):**

How I envision this is that it will act the same as when we add new queues to helix: you request it, it goes into staging when the work is done, and then it rolls out to prod the following Wednesday. Our product teams are already used to that timeframe for all other images, and while it's a bit of an assumption on my part, I think they will adjust to that being true for docker images as well.

As part of this work, we are likely going to end up requiring most, if not all, changes to docker images to come through dnceng so that we can vet the changes and make sure new dependencies have updates and the like, much like we do with helix/buildpool machines.

**Member:**

> I think they will adjust to that being true for docker images as well.

Perhaps there is a middle ground here. At least for new images, perhaps `latest` can be created right away.

I am sure it will be an iterative process that will be refined over time. I hope we all agree that we should be attentive to feedback and explore options to adjust to provide the best UX we can reasonably provide.

**Member:**

Agree. I think we don't necessarily have to talk about using the `latest` tag directly (which has a special meaning in docker, as it's the tag used by default when a tag is not explicitly specified).

What about instead stating that we will need to design a tagging scheme that would allow us to handle containers in well-defined scenarios (such as the weekly rollout that @adiaaida has in mind, or the local development story that @MichaelSimons described), define these scenarios first, and later, based on these scenarios, think about the concrete implementation of a tagging scheme?

Not saying we should discard the idea of `staging`/`latest` tagging completely, but perhaps we could park it for now and focus on investigating the scenarios first?

**Member:**

There is always the option to hotfix prod to get a new `latest`, but I agree with @adiaaida that this is a process that teams are already used to. It seems pretty rare that there would be a case where a new platform would immediately be ready for check-in in < 1 week.

I do like branching as a method of managing what is 'prod' vs. what is head of tree. This is a well-worn path that is consistent with our other services.


We will also need a rollback story so that if an image breaks a product team's build or test, we can untag that image and retag the previous `latest` image. A rollback should be as simple as reverting a previous change and publishing the images at the new commit. While this image may be identical to a previously published image, it will effectively be treated as a new version.
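
For the retagging half of that story, a rollback could amount to re-pointing `latest` at the previously published digest; this sketch reuses the placeholder naming from above:

```python
import subprocess

# Hedged sketch: re-point 'latest' at the previous known-good digest.
def rollback(repo: str, variant: str, previous_digest: str) -> None:
    previous = f"{repo}@{previous_digest}"
    latest = f"{repo}:{variant}-latest"
    subprocess.run(["docker", "pull", previous], check=True)
    subprocess.run(["docker", "tag", previous, latest], check=True)
    subprocess.run(["docker", "push", latest], check=True)
```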

**Member:**

Similarly to my comment above, we could just agree that we need to be able to perform a rollback when needed and later investigate the possibilities of how to do that.

**Contributor (author):**

I have removed the implementation-type details from this section.


### FR and Operations Handoff

We will create documentation for managing the tags so that when a rollback needs to occur, FR will be able to make those changes. Additionally, we will create documentation and processes that can be used by Operations and/or the vendors to handle any manual OS/base image updates, or the removal of old and out-of-date images from MCR, as necessary. We will also create documentation for responding to customer requests for new docker images, including where to get the base images and how to install required dependencies (though that is coming in a different one-pager).