Transcript
Thanks so much for joining this session. We’re going to be covering the topic of securing the software supply chain and diving straight into what you just heard about in terms of Docker Scout GA and everything that goes along with that.
Just as a reminder, Docker Scout is now in general availability. We're really excited to see you get up and running, and we'll show a demo of some of the really core capabilities you can use with Docker Scout GA. I'm going to set a little broader framing here and talk through some of the ways Docker Scout helps our customers address these really poignant, really tricky challenges — the challenges Docker Scout was built to solve in the first place.
Overview
As a broad overview, Docker Scout is really focused on generating signals across the software supply chain. Across all of the integrations that we just discussed in the keynote, it really helps to inform development workflows, and we'll step through a couple of those development workflows.
Developers are really thinking about how to take the opportunity to use these contextual recommendations that are right where you work, to leverage the data that's collected through all of these many integrations, and to have the ability to step through into a set of remediation workflows as a result. To go into a bit more detail: we keep hearing again and again from different customers what these challenges are, how they need to be addressed, and how Docker Scout can help address them head on. The first is this concept of a black box.
Separate signal from noise
Ultimately, it comes down to issues being caught far too late in the development workflow. Maybe just before production, the security team steps in and tells the developer they really need to solve a specific security challenge, all while something is in the process of needing to be pushed to production quickly. And then, additionally, there's just a lack of insights.
There may be a whole host of insights coming in, but in reality it's hard to extract the signal from them, which brings us to the noise portion. Those insights tend to lack the context that is needed, and they really lack that end-to-end visibility that you can get across a whole wide range of integrations. So the next piece is the noise that comes along with all of this.
This is such a common challenge to think about. There are just too many issues to prioritize and not enough information to really be able to triage effectively and step through into what the right solution will be. And then there’s really a disconnection from the real risk exposure that is originally what you’re really trying to solve for.
Then, ultimately, when we think about poor developer experience and all of the context shifting that goes into trying to solve and think about how to secure this software supply chain more successfully, you end up with these really repetitive tasks and jumping between a lot of different point solutions. We see difficulty in collaboration between development teams and security teams in the process, and just not enough traceability that goes into having an effective mechanism to track everything end-to-end through all of these processes.
Analyze, remediate, evaluate
So when it comes down to helping solve those core challenges, you can quickly summarize it as an analyze, remediate, evaluate kind of workflow. Starting with the analysis piece, Docker Scout really helps you analyze and add more context to components, libraries, tools, and processes, to have that much more transparency across the software supply chain, and then to be able to remediate. Docker Scout can help guide you towards smarter development decisions through contextual recommendations.
All throughout the process, you have policy evaluation that helps to detect, highlight, and suggest these corrections based on these relevant changes. So if you see a slight deviation based on the policies that you have set — and we’ll be able to show more of those out-of-the-box policies — then it really helps to show more from that context.
What it ultimately comes down to is really being able to build with reliability and security built in from the start. So Amy [Bass] was able to step through a lot of the really core pieces that come into play here. Firstly, trusted content — so Docker official images, Docker verified publishers, and Docker sponsored open source content as well. This really allows you to track the full lifecycle of software artifacts and be able to build on trusted content from the start, so that you’re preventing some of those challenges that can come in down the road.
Additionally, the centralized view, which is really being able to operate from one view of centralized insights, to have visibility and control across the board, and to be able to step through and look at all the different policies that are most relevant for the data sources you're looking at. And then recommended workflows — being able to build faster and more reliably through the demo that Christian Dupuis (CD) showed earlier as well, and really being able to have those context-aware recommendations embedded into developer workflows as a result.
Quick demo
With that, I'm going to actually step into a quick demo from CD. Prior to that, I'm going to show one more thing; just give me one moment here. I just want to show one quick summary and give a better sense of how recommended remediation paths, policy evaluation, and trusted content really come together within the centralized view. You can see, for example, policy time-based trends and the number of vulnerabilities, watch those trend lines, and then have that much more context as you make these sets of decisions. So with that, I will now pass it over to CD, and we'll switch screens.
Good morning, everyone. Thank you. So what I wasn't allowed to do in the keynote was effectively do a live demo. As you may have noticed, the internet was down when I started, so everything was kind of loaded already, and it was really stressful. So I'll try to take you through the product live. Things might break. Things might go wrong because of the internet, who knows, demo gods. So that's what I'm planning on doing. And I want to start with scout.docker.com.
As Jason said, and Amy said earlier, Scout is GA. You can sign up with your Docker Hub account. It's free up to three repositories for you to try out. Just give it a try; give us feedback. All right, here we are: scout.docker.com. First things first, you see the policies here. The first four are out-of-the-box policies that you get when you enable Scout. The bottom two are ones that are still in development and are very specific; they only exist on my namespace right now. But again, the first four are what I want you to get today. And if I'm not mistaken, they're actually available to everyone. Is that right? I think so, yeah.
So when you log in, everyone can see those policies. What I want to do is take you through how you can use the information presented, bring this into your inner loop, start using the information, start remediating some of these violations or deviations, and kind of get through that flow all the way to the end.
Let's start by taking a look at this policy here, which is kind of like highlights. Let me make this a bit bigger. So it lists all the images that were last pushed into these repositories on Docker Hub or ECR or any other registry that you can integrate (for example, an on-prem JFrog Artifactory). By the way, I should mention that there is an integration section up here that you can use to start integrating your ECR, the things that I showed earlier. Sysdig is there, and all the integrations that we highlighted in the keynote. Let me start by filtering: I only care about which images are running in production, right? This is where we bring in runtime insights. And there are various ways of delivering that information to us. I mentioned Sysdig during the keynote.
There’s also a way of embedding our CLI into your deployment pipeline, sending these events in. There is also a Kubernetes admission controller that we’re working on that you may be able to get your hands on very soon. And all of what we’re doing with this is effectively marking images into a release stream. They are now kind of part of an environment. And you can group these environments and name them as you like. They should really make sense to you and not really to us. We’re not prescriptive in what your environments are that you want to monitor.
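For reference, recording images into an environment can also be done straight from the CLI. Here's a minimal sketch, assuming the docker scout environment subcommand behaves as described in the CLI help; the environment name and image are placeholders:

```bash
# Record an image as currently deployed to a free-form environment named "production"
docker scout environment production myorg/webapp:1.2.3

# List the environments known for your organization
docker scout environment
```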
So I have this image deployed here in one of our Kubernetes clusters, and I can take a look into the details here. These are now all the high and critical vulnerabilities with fixes older than 30 days. That threshold is just the default we start with; the team's working hard on making this configurable for you. Should it be a week? Should it be two months? Should it be high or critical? It's really up to our customers to make sure they get these policies the way they want them. So you see a bunch of high and critical ones that I'm being tasked by my PM to go after and fix.
What we haven't done yet (again, something in the works) is tying the SaaS product to your source code repository, tying this back into automated remediation. That's, again, a next step for us; it'll happen. But it's not happening now.
So I take this information now and go into my inner loop. Let's see. This is the tiny little demo that we're using. You see a very simple Dockerfile. It's pinned to an old digest on purpose so that we can simulate this being vulnerable to something. Don't get too hung up on this. It's using Node, and it's installing a bunch of packages. Pretty random app and pretty common, I would say, in the frontend space. Doesn't matter. Scout works across all the platforms, all the image types, all operating systems, all stacks: Java, Go, Python. Doesn't matter what you're using. And then here's our package.json.
Security status
Let's start by first running the CLI on that image and getting a sense for its security status with docker scout cves. All right. That was rather quick. And it's quick because, well, first of all, the internet is working, apparently. And secondly, it is because this image has attestations. In fact, it has two: a provenance and an SBOM attestation. These are coming out of BuildKit and mean we don't need to index the image. So we're not downloading the whole thing and looking into what's in there; we can already get that from the attestations on the image.
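For context, here's roughly what that flow looks like from a terminal. The image name is a placeholder, and the attestation flags are the BuildKit/buildx ones; double-check them against your Docker version:

```bash
# Build and push with SBOM and provenance attestations generated by BuildKit
docker buildx build \
  --sbom=true \
  --provenance=true \
  -t myorg/webapp:1.2.3 \
  --push .

# Analyze the image; with attestations present, Scout can read the SBOM
# instead of downloading and indexing the whole image
docker scout cves myorg/webapp:1.2.3
```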
What do we see here? A little bit more detail. These are all links. And this is what I didn't show in the keynote. So you can click it, and it takes you to GitHub. I did see a banner on GitHub, which I didn't like. I'll click that away really quickly. This is an interesting link. It takes us straight back to the line in the Dockerfile from your CLI, all done by correlating information coming out of various integrations. In this case, provenance from BuildKit, sitting in a registry somewhere.
Now what can I do with this? First of all, I can go ahead and say only package type: npm, because there are a few CVEs in there that are coming from our base image. So let's take a look at the ones that I actually care about first, which are the npm ones. And apparently they are really in use, meaning this image is currently running in one of our production clusters and is connected to Sysdig runtime insights. Again, there is a free trial, I think, available to everyone that is using Docker Scout, so you can get started with Sysdig to provide you this level of insight.
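If you want to reproduce that filter, these are the flags I'm using here, as far as I recall them from the docker scout cves command; verify with --help on your version, and treat the image name as a placeholder:

```bash
# Show only CVEs coming from npm packages, and only those with a fixed version available
docker scout cves \
  --only-package-type npm \
  --only-fixed \
  myorg/webapp:1.2.3
```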
What's kind of exciting about this is that Sysdig is able to tell us which packages inside of the image are being used at runtime, so it removes all the ones that you accidentally added to your image, like build tools, a shell, and stuff like that; stuff that you normally wouldn't use at runtime. I can see now that these two vulnerabilities, on the npm express package and on the qs package, are fixable; there are fixed versions. And that's probably what I want to do.
If I quickly go back to our policy screen, you can see both of them here. So it is a policy violation that I've been asked to fix. What I can do now is effectively go in, open the package.json, and do a quick local build. I'm doing this locally now because I want to show you another feature of Scout, which is effectively our integration in Docker Desktop. Let's see how fast this actually is. In the meantime, I can already bring up Docker Desktop here. There we go, it started. And I see my image up here as the first in the list. I can click it now, and I'll see a visual representation of the SBOM first and foremost, and the image hierarchy, which is this section up here.
So, you have local insight into what the base images are, what the base images of your base image are, and how current they are: are there any newer versions? In this case, I'm using Alpine, and there is a newer version available. And I can start selecting these layers, and it will start highlighting for me what belongs where. If I select a base image, it shows me the layers of that image. If I select a layer, it shows me the packages that get introduced by that particular layer. So I can go ahead here and see that this layer, which is still adding express and all the other npm packages, is now free of vulnerabilities. So in fact we've already fixed two of the ones that we were asked to fix in our policy.
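The CLI has a similar at-a-glance summary to what Docker Desktop shows here. A quick sketch, with the local tag being purely illustrative:

```bash
# Rebuild locally, then get a one-screen summary of the image,
# its base image, and available base image updates
docker build -t scout-demo:v2 .
docker scout quickview scout-demo:v2
```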
Update image
Now another really exciting feature, which I think I highlighted in the keynote demo, is if we go back to our policy screen here real quick and then select the other policy, the base image update policy, and select one of these images. I'm going to stick to the AMD64 version here. It tells me that, you know what, there is a newer version of your base image. Remember, I was using 3.14, so it does two things. First and foremost, it tells me there is a new digest for the 3.14 image. I'm using the tag; I built the image against the 3.14 tag, and in the meantime that tag has moved on.
So there is a newer version on Docker Hub that I should update to. That's just a simple rebuild if I remove the pin in the FROM line. But the system also tells me — or Scout tells me — that there is actually a better alternative here, which is 3.18, the latest version of Alpine. You can get the same information down here in Docker Desktop. So you have the same screen. You see it's now 3.18, and I can go ahead and do this in my Dockerfile. So let's do this quickly. And let's build again. That was fast. And everything's green.
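The same base image guidance is available from the CLI as well; this is a sketch using a placeholder image name:

```bash
# Ask Scout for base image refresh and update recommendations
# (e.g. a newer digest for the same tag, or a newer tag such as alpine 3.18)
docker scout recommendations myorg/webapp:1.2.3
```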
So with a few simple steps — it's not automated yet — it is very informative, and you can do this all locally. We're seeing great successes by giving developers this level of insight and helping them do that in their inner loop.
All right, let's take a quick look at what happens if I push this now to our connected Git repository. So, I should probably check this out into a branch, commit this image and express change, and git push. I think I may already have that branch. All right, let's go to the GitHub repository.
What did I push this to? I shouldn't be using the CLI — I should really stick to the dumb tools. This is now being pushed. Hopefully, the GitHub Action is now kicking in. There we go. There's a build running. This is a Docker build running on GitHub Actions. It's going to push this into the repository, and within a few seconds or minutes, we'll see that new image on scout.docker.com. Hopefully, we'll get updated policy results. Let's wait for this to happen.
While we do this, why don't we take a quick look at another integration that we have? The thing that I wanted to show is that we have a GitHub Action integration: on PRs and other events where GitHub Actions run, you can take our action and effectively get immediate feedback right in the PR.
In this case, in the pull request, you see a great level of insight, similar to what you saw on the CLI, comparing the PR image to the one that is currently running in production. This is the version currently running in what we call a stream, or an environment, in this case production. You can see this right there where developers can reason about it in their PR when they do their peer review and stuff.
Let’s take a quick look. If this build has finished, it has indeed. Let’s go here. And our image is now there. It doesn’t quite look like what I expected. Well, luckily I prepared this before. So there is an image down here.
The PR number six, which is kind of the same thing: me raising a pull request with the same changes. Let me just go back to that repository to show you there's a pull request here. This is PR number six that I raised yesterday in preparation, so no other changes than the one that I just did locally. I must have pushed something to a random place. That's the same image that I intended to build. Scout is now telling me that, on that particular image, you actually fixed all these vulnerabilities.
The one policy that I haven't fixed is the one down there, which is the one about supply chain metadata being verified and attached. That policy requires an SBOM and a provenance attestation to be there. On this particular image, I did not push an SBOM.
Let's take another look at the comparison that I highlighted in the keynote demo. So I can do scout compare, and I can compare my PR six image against the one currently running in production. I can do this before even pushing that image into the registry; it could have just existed in your local Docker daemon, and it would have worked in the same way. And you see a couple of things happening now: it's pulling down the image, it's pulling down the details about the image that you want to compare to, it's querying for policy results, and then it's preparing the output. So if I scroll up here now, you see all the details.
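For anyone following along at home, the comparison looks roughly like this on the CLI. The image names are placeholders, and the --to-env flag is how I recall comparing against a recorded environment; check docker scout compare --help:

```bash
# Compare the PR image against whatever is recorded in the "production" environment
docker scout compare myorg/webapp:pr-6 --to-env production

# Or compare two images directly
docker scout compare myorg/webapp:pr-6 --to myorg/webapp:1.2.3
```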
Again, this is the PR number six image, the one that I prepared with the changes. And you see that Scout picked up on the fact that I updated my base image, which you can see down here. It's now 3.18 and not 3.14 anymore. In case you're wondering how we know this: it's part of the provenance attestation. It's embedded when you run your build with BuildKit. All the sources that go into your build, all the materials, as they're referred to, are being recorded in your provenance attestation.
So your Git commit SHA and your Git repo go into the provenance attestation, your base image goes in there, and all the various other multi-stage base images will be recorded in that provenance attestation. And again, a few other interesting bits and pieces, like labels as they change. And of course, in this particular case, we're changing the commit SHA. And here, very interestingly, the changes to the policy. As you work, you can immediately see the feedback in the CLI here. And the bit down here — I just like this because I'm really a geek, and I'm really interested in what is happening.
For example, if you ever want to know what changed between two images, let's use Docker Scout. Now I'm going off script, so let's see what happens: Alpine 3.14 to Alpine 3.18. I'm just picking small images, given the internet. One thing that Amy mentioned in the keynote today is that we will be shipping SBOM and provenance attestations, and we're going to be signing them, for all of our Docker Official Images content. Then we wouldn't even need to pull down the images anymore because they would have SBOMs attached. But this is giving you a very quick overview of which packages actually did change between these two versions.
Now I'm comparing 3.14 against 3.18. That way around, I actually did a downgrade; I should have done it the other way around. It changed all these packages. So it's a really interesting way of quickly looking at how random images change, what is in them, and what you can do with them. It doesn't only work on your own images. Interestingly, here there are no policy results. Why is that? That's because this is an image that is not in your Docker Hub organization. So it's not one of your images, and we have not calculated online policy results for it. But there's a little trick here. You can pull them down.
If you pull them and run the policy evaluation, then the policy evaluation will happen locally. This effectively means there are a bunch of containers running in your daemon that are effectively our policies, and then you would get results. So for remote images that sit in a registry that's integrated with Docker Scout, you get these results right there in the CLI. For images that you don't own, you can still pull them down and use local policy evaluation. All right. Let's just take a quick look.
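A rough sketch of that local evaluation flow, assuming the docker scout policy subcommand and its --org flag behave as in the current CLI help; the org name is a placeholder:

```bash
# Pull an image you don't own, then run the policy evaluation locally
docker pull alpine:3.18
docker scout policy alpine:3.18 --org myorg
```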
Vulnerability search
What are some of the other things that we can talk about? One thing that I didn't cover earlier is the vulnerability search. One of the things that a lot of you will be doing next week is searching for a new vulnerability on a package called curl, which they are going to be announcing on October 11th. So it's already good to know where you would be affected, in theory. You can go over to the packages tab and start searching for curl. That would give you all the images across your organization that are using curl, and you could prepare yourself for what's going to come next week. In their Twitter announcement, they said buckle up, so it seems to be something serious. Although, curl being curl, it's probably not a tool that a lot of people use in production. But who knows, right? It's worth reviewing. So prepare yourself.
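On a per-image basis, you can do a similar check from the CLI. This assumes the --only-package filter on docker scout cves (verify against your CLI version); the image name is a placeholder:

```bash
# List any CVEs affecting the curl package in a given image
docker scout cves --only-package curl myorg/webapp:1.2.3
```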
There's a way for you to search for this. If you had your repositories enabled in Scout, we would have indexed all your images, captured all the information coming out of the SBOM, and made it searchable for you. So heading back over here to the policy page, just filtering this down to the production environment already gives me a great reduction of the things that I should care about, and I can now drill in and, of course, read up on all the nitty-gritty details of the CVEs. What's even more interesting is the "my images" screen. Instead of clicking through each and every image and running an analysis on each and every image in your production cluster, it's one click here: production images only. Find the CVE that you're interested in, take a look at what images are affected, and off you go.
There's a lot of manual searching for information, and a lot of it goes away if you enable Docker Scout and bring these integrations in. And again, we will be adding a stand-in CVE later today or tomorrow for the soon-to-be-announced curl CVE, which will allow you to see the information on this page and see all of your images that are potentially going to be affected.
Q&A
I was told that we should keep about 15 minutes left for Q&A at the end. We are at 14 minutes. With that, Jason, you want to come back up, and we’ll open up the floor for some questions? Hopefully you have questions. Of course. Comments. Thank you. That’s what we wanted to hear. That’s great. Thank you very much. What was the comment? I think it’s amazing. Thank you very much.
GitLab integration
Question over here: I noticed that there is a GitLab integration. Yeah, that's a good question. Let me just repeat the question for the online audience. There is a GitLab integration, and the question is: does this integrate as nicely with the merge request (I think that's what it's called on GitLab) as we do for GitHub? It's not as nicely integrated, but you can get it to work. All of our CLI commands that are available on these various platforms output markdown, and the markdown can be used to put into comments.
Now, for GitHub, we created a dedicated action that does this, but it can be done on GitLab and CircleCI and other CI environments. Yes. Does that answer your question? So it's just during that CI run that it shows in the merge request, but it does not show up on the Docker Scout website? It does not, no. So there are various ways of getting images to show up in Docker Scout. There are registry integrations: currently it's JFrog on-prem and cloud, Docker Hub, and ECR, Amazon's container registry. There's also a way of pushing the metadata of your images into Scout without sending the images to us; that's part of the CLI. And if you're using a different registry that we don't natively support yet, you could use that facility to send images to Scout, and you could use the same features as demonstrated today without any kind of tight registry integration.
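As a rough sketch of what that could look like in a GitLab pipeline: the markdown output can be captured and posted as a merge request comment. The --format markdown flag is how I recall the compare command emitting markdown, and the variables and posting step are placeholders:

```bash
# Inside a GitLab CI job: compare the freshly built image against production
# and capture the result as markdown
docker scout compare "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" \
  --to-env production \
  --format markdown > scout-report.md

# Then post scout-report.md as a merge request comment, for example via the GitLab API
```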
SonarQube
So in the keynote, SonarQube was mentioned. I was wondering if you could talk a little bit more about the integration with SonarQube.
Yeah, yeah, totally. So what we did for SonarQube is really interesting, because normally you don't have quality metrics for container images; SonarQube operates on your Git source code. So what we have done there is, effectively, every image is built out of a Git commit, and that commit is part of the image metadata.
So, using those platform capabilities, we're effectively building a connected graph in the background out of all the data that we put into the database. And that means that you can navigate from your image, via the Git commit, to anything else that is attached to these commits, like builds or SonarQube quality gates, for example. So what's happening here is, the moment you push a Git commit, we receive that Git commit if you have our GitHub integration configured.
With that information, we can then go off and wait for SonarQube to send us an event to correlate the SonarQube quality gates with the commit. The next thing is the image comes into our database, and that gets correlated to the commit because it carries metadata; ideally, that's a provenance attestation, which is hopefully going to be signed in the future. So we can link all these things up, and that's how this particular policy works. And then it's really up to you how you define your quality gates on SonarQube in your organization. I just used the stock standard ones and added a few violations for demo purposes, but you're totally free to choose whatever policies and rules you want in these systems.
That's the general idea of Scout: bring an integration in, bring data in, then reason on the data, and bring it into the inner loop so that your developers can take action.
VEX statements
How do you account for exceptions, like if you have an exception to a vulnerability you want to skip? Yeah, that's a very good question. So if I scroll further up here, what you see here is this "affected" section. This thing here is really a VEX statement.
A VEX statement is vulnerability exclusion exchange. Sorry, it's not exclusion; it's Vulnerability Exploitability Exchange. Thank you. It doesn't stand for exclusions. It's a new spec that's coming out of the CISA group, the same body that is standardizing SBOMs in America. And the purpose is that you, as a provider of container images or software artifacts, have the ability to say: I'm affected, I'm investigating, or I'm not affected by a certain CVE within a certain context. There are various levels of granularity here; you can say this is a product, within that product is a certain package, and I'm affected, I'm not affected, or I'm still investigating. And that's hopefully going to turn into an attestation in the future that people can consume off of a registry.
So what this is here is already a little preview of what's going to come next for us, which is VEX support for vulnerabilities. On the CLI, we already do this. Our Sysdig integration creates VEX statements and puts them into the database. So in this case we have two directions: we have the VEX statement that effectively says you are affected, because we discovered that this is used at runtime, and we have the opposite, which means it is not used at runtime, so you're not affected. And that's all expressed in a VEX statement that you can either attach to the image, put into a Git repo, or have on your local file system and feed into the CLI when you run the cves command.
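To make that concrete, here is a rough sketch of such an exception, using the OpenVEX format and the --vex-location flag as I recall it on the cves command; treat the flag name, file name, CVE, package version, and image as illustrative only:

```bash
# Write a minimal OpenVEX document saying a specific CVE is not exploitable here
cat > not-affected.vex.json <<'EOF'
{
  "@context": "https://openvex.dev/ns/v0.2.0",
  "@id": "https://example.com/vex/webapp-2023-001",
  "author": "Example Security Team",
  "timestamp": "2023-10-05T00:00:00Z",
  "version": 1,
  "statements": [
    {
      "vulnerability": { "name": "CVE-2022-24999" },
      "products": [ { "@id": "pkg:npm/qs@6.5.2" } ],
      "status": "not_affected",
      "justification": "vulnerable_code_not_in_execute_path"
    }
  ]
}
EOF

# Feed the VEX document into the CVE evaluation
docker scout cves --vex-location . myorg/webapp:1.2.3
```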
What we don’t yet have is any way of surfacing that information on scout.com. But trust me that’s coming. But it’s a fairly involved and complicated process because VEX assumes that you don’t have to trust every publisher. Just because some random person attached a VEX statement to a container image, doesn’t mean that you have to trust it.
So there is a little bit of: who do you trust? Which publisher do you trust? Do those things need to be signed? And we're working with customers right now to figure out what workflows they want and which VEX statements they are willing to accept. Do they have to be signed? Do they need to specify the signing identity? And so on and so forth. Any other questions?
Conclusion
If there are no more questions, we'll switch screens and show the Docker Scout Quickstart documentation. This steps through how you can get up and running with Docker Scout. It gives you a quick overview, including a five-minute demo from CD. It shows the various steps involved to enable the repos that you want to analyze with Docker Scout, and it goes through many steps beyond that within our documentation.
I’ll leave you with this link for the Quickstart documents, and that will help you to get up and running quickly. Thank you so much.
Learn more
- Announcing Docker Scout GA: Actionable Insights for the Software Supply Chain
- Docker Scout product page
- Docker Scout Design Partner Program
- Try Docker Scout
- Looking to get up and running? Use our Quickstart guide
- Highlights from DockerCon 2023 (New Docker Local, Cloud, and AI/ML Innovations)