Transcript
Hi, I’m Michael Irwin. I’m a member of the Developer Relations team here at Docker. I’m excited to talk to you about shifting left. Jumping into this, first off we’re going to spend a moment and actually define what we mean by shift left: what it is, what it isn’t, why it’s important, etc. Then we’re going to talk about three different ways in which Docker helps you shift left. First, we’ll talk about shifting development environments, then integration testing, then vulnerability assessment and security in general, and then we’ll wrap up with a recap. Let’s jump in.
Table of Contents
- Defining “shift left” (0:35)
- Shifting dev environments (4:20)
- Demo: Dev Environment (9:26)
- Shifting integration testing (15:05)
- Introducing Testcontainers (17:43)
- Demo: Testcontainers (21:30)
- Shifting vulnerability assessment (26:32)
- What is Docker Scout? (28:20)
- Demo: Docker Scout (29:31)
- Recap (35:19)
- Learn more
Defining “shift left” (0:35)
What do we mean by shift left? Where did it come from, what is it, what isn’t it? Let’s first start off by looking at the software development lifecycle, or what we call the SDLC. There are typically two loops, which we call the inner loop and the outer loop. Each loop is designed to get feedback and then prepare for the next loop. The inner loop focuses on everything that occurs before code is pushed to a repository: the planning, the setup, the actual coding, building, testing, etc. Once the code is pushed, though, CI jobs are typically triggered, which build, test, and verify the application and ensure that it meets organizational standards. Then after all that, the app is deployed, it’s monitored, etc. Obviously, there’s more nuance and more detail to the SDLC, and this is keeping it at a high level, but these are typically the components that make up the SDLC. One of the interesting things here is that this process really hasn’t changed in many, many years. Sure, the tools have evolved, and some of the processes and handoffs between each of these steps have evolved a little bit. But the overall goals and the overall steps that exist within the SDLC have been pretty consistent.
Several years ago, I’m sure most of you heard about the DevOps movement, and in many ways it’s the same inner and outer loop, now just mashed into an infinity sign. But really, the focus of DevOps has been the handoff between teams, the traditional development and the traditional operations, the inner loop versus the outer loop, and how we streamline those processes. Where are the pain points? Where is the friction? How could we better collaborate as teams? But at the same time that we’re delivering software quickly, we also want to make sure that we’re not taking hits when it comes to quality, uptime, and security. Because at the end of the day, the main goal is to deliver software and to make customers happy.
Now, when we think about shift left, going back to this inner loop, outer loop, and the SDLC, the entire idea is to look at where the feedback is coming from and how we can shorten that feedback loop. And going back to DevOps: where are the friction points? Where are the pain points that exist in our workflow? Are there things that typically run in the outer loop, or depend on the outer loop, that we can move to the left, towards the inner loop? Because the more we can move things to the left here, the faster the feedback we get, and the more time developers can spend in flow. So again: where are the things that can go wrong? What are the things that are slow? How can we provide additional context and tools to catch those issues earlier?
Now, one thing I want to clarify up front: shift left doesn’t mean just give developers more work to do. That’s something I hear quite often when folks hear shift left: “great, that means me as a developer, I’ve got more stuff to do.” That’s not the approach we want to take here. When we’re talking about shift left, we want to think about the responsibilities and things that you’re already having to do, and how we can move those a little bit leftward to help you stay in flow, get that feedback faster, and develop more effectively and more easily. And what are the tools that are needed to be able to do that? So again, we’re going to talk about three very specific areas where Docker helps with the shift left and helps developers develop more easily.
Shifting dev environments (4:20)
First off, let’s talk about development environments and how Docker helps out with this. Now, I think a lot of people already have an idea of how containers help out here, so some of this may be repetitive, but some of it may also be new to you. Let’s start off with a sample application. Many of our apps are composed of a lot of different services. This particular application, a catalog service, stores its data in a PostgreSQL database, stores images in blob storage such as AWS S3, and gets inventory data from an external inventory service. Additionally, it publishes update events to Kafka to allow other services to be notified of those changes, so they can send notifications, emails, whatever. So now the question is, how do we set up a development environment and a testing environment? There’s a wide variety of methods that we’ve seen as we’ve talked to various teams and organizations.
The first option is to just say, well, hey, this is starting to get complex enough, and there are so many different microservices in the system, that we’re not even going to set up a local development environment. In order to test your changes, you’re just going to have to deploy to a Dev environment. In this case, the developer makes changes to their codebase and pushes those changes to their code repository. From there, a CI system builds, tests, and deploys it out to the Dev environment, and from there it can connect to all the different services that it needs. But depending on how long the CI pipeline takes, even if it’s five minutes, even if it’s ten minutes, that’s a long feedback loop just to test a code change. And every time you do this, developers can leave flow, other requests come in, etc. So how can we shorten that feedback loop?
Typically, the next iteration that we see is: can we keep using the same Dev environment that has all those different services, but now run the application locally? So in this case, the app is running locally, and it’s connecting to the different services that are running out in a Dev environment. With this option, me as a developer, I can make changes to the application, deploy those changes locally, test them out, and validate everything works. But the question is, how often is a developer ever just working on their own, by themselves, on a single application? Typically, there are going to be other developers working on either the same app or nearby applications that leverage the same services and resources. And this starts to add a lot of complexity with networking and resource contention. What happens if one developer nukes the database and now others can’t continue to develop? It just gets really complicated. And since this development environment is off in the cloud, there’s a good chance that if a problem occurs with it, another team has to be pulled in to fix the environment, fix issues, etc.
So with shift left, how can we remove that outer loop, that CI process, and move it inward? Well, with Docker’s ability to run containers, I can set up a local development environment that mimics that same environment. So instead of deploying remote cloud infrastructure and managing IAM policies and everything for that AWS S3 bucket, what if I run an S3-compatible service such as MinIO or LocalStack on my machine? My app still thinks it’s talking to S3. It’s still speaking the same protocols and API, but it’s now just a local copy. Instead of relying on the external inventory service, maybe I can mock that using a tool like WireMock. So again, my application is still talking to an API to get, in this case, inventory data, but it’s all mock data. The responses that come back still fit the same structure that I would expect, but my app is now talking to a mock, and I don’t have to run all of its downstream dependencies. I can also run Kafka and PostgreSQL locally, and a lot of teams already do this part. But once I start thinking containers, not only can I replicate my environment here, I can start to enhance it. In this case, if I wanted to see what was going on in Kafka, I can run a visualizer. Or if I’m on a platform team and I want to support my developers that are running Kafka, maybe they’re not as familiar with it, maybe we can make a tool that helps publish or consume test messages from that Kafka cluster. Maybe we’ve got some folks that aren’t as familiar with all the different nuances of PostgreSQL. Great, let’s run pgAdmin and provide a database visualizer to see what’s going on. This idea is what we call container-supported development: using containers to build a development environment, and even enhance it. The cool thing is this can happen even if the application itself isn’t running in a container. I can still run all these other services in containers while my main app runs natively.
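To make that concrete, here’s a minimal sketch of what such a Compose file might look like. This is illustrative, not the demo’s exact file: the images, service names, and ports are all assumptions.

```yaml
# docker-compose.yml (sketch): a container-supported dev environment
services:
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: catalog
    ports:
      - "5432:5432"                  # published so a natively running app can connect
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s

  pgadmin:                           # database visualizer for the team
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@example.com
      PGADMIN_DEFAULT_PASSWORD: admin
    ports:
      - "5050:80"

  localstack:                        # S3-compatible stand-in for AWS
    image: localstack/localstack
    ports:
      - "4566:4566"

  inventory-mock:                    # mocked external inventory service
    image: wiremock/wiremock
    ports:
      - "8081:8080"
    volumes:
      - ./wiremock:/home/wiremock    # stub mappings, including error-case responses

  kafka:
    image: apache/kafka:3.8.0        # single-node KRaft broker, no ZooKeeper needed
    ports:
      - "9092:9092"
    # real setups also need advertised-listener env vars for cross-container access (elided here)

  kafka-ui:                          # visualizer for the Kafka cluster
    image: kafbat/kafka-ui
    ports:
      - "8082:8080"
    environment:
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9092
```

And in fact, let’s try that out right now.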
Demo: Dev Environment (9:26)
I’m going to switch over to VS Code. In VS Code, I’ve got a compose file that outlines the same architecture that we just saw. I’ve got my Postgres database, I’ve got pgAdmin, I’ve got LocalStack to stand in for AWS, I’ve got a tool that’s going to create the bucket my app is going to use, mock inventory, Kafka configuration, etc. So with all this, I can just run docker compose up. I’m going to add -d so it runs in the background. This will take a couple of seconds to spin up, just because there are some health checks that have to pass. But within a few seconds, I’ve got my database, I’ve got everything up and running. And wait for it. Just waiting for the health checks to pass here. Okay, there it goes. All right. Now let’s actually run yarn dev. So I’m going to run this application natively on my machine. Again, I’m using Node; it’s running natively, but it’s connecting to the database, LocalStack, and everything else that’s running in containers. I’ve also got some helper scripts in this repo that allow me to query and make requests against the API.
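In shell form, that whole loop is just two commands (assuming a Node project with a dev script, as in the demo):

```sh
docker compose up -d   # start the supporting services in the background; health checks gate readiness
yarn dev               # run the app natively on the host, pointed at the containers
```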
So in this case, I’m going to make a couple of products. Let’s make three. It’s just sending the name of the product and the price. So I’ve got three products in my database, that’s awesome. Now, if I need to visualize what’s going on in the database, I can go to my browser, where I’ve got pgAdmin running, and open up the database. In case you’re wondering, that password is actually configured here in the compose file; it’s just the default password. I haven’t figured out how to auto-configure that in pgAdmin yet, but that’s all right. But again, I can open up the database, open up the schemas, go to the tables, and view all the data. Let me make a change to one of these items and rename it to “another product.” I’ll save the change. Again, I’m interacting with the database, and just to prove that this is the case, if I go back to VS Code and say, hey, get the product for ID 2, we’ll see that it has the updated name associated with it. And it’s getting inventory data from the inventory service, where when I make a request for item 2’s inventory, I get a quantity of 15, which reflects here. But for item 3, I actually have it testing a corner case that’s hard for me to test otherwise. So now if I make a request for 3, my inventory service sends a 500 back, and I can test whether my application responds well to these kinds of edge cases or error cases in which the service isn’t behaving. And again, I can mock all that out really easily where it may be hard for me to do otherwise.
Going back to my browser, I can also open up Kafbat UI, which I’m using in this case to visualize what’s going on in the Kafka cluster. I can see the messages that were published as the products were being created, and I can validate that they have the right structure, etc. Again, I can use this for troubleshooting and debugging, and I don’t have to learn all the different CLI commands to interact with the Kafka cluster. Now, going back here: in order to make this work, I really just have to focus on the protocols and the abstractions between my application and the external services. Databases speak binary protocols; whether I talk to a Postgres database that’s running locally on my machine or a managed service, it’s speaking the same binary protocol. Great. The same thing works for APIs and Kafka clusters and all these other services; there are ways to abstract that away so that I can run local copies. I’ve even seen some teams say, hey, I’m going to run a copy of Keycloak locally so I can do OAuth or OIDC authentication on my machine with just test users, so I can test out the different roles for the different users and personas that my application has. So again, if you focus on those protocols and abstractions between the services and ensure that your services have solid APIs in place, these types of opportunities become much easier.
In this case, I also map my container ports to the host to connect from non-containerized apps. So, you know, I ran Postgres. I just expose it onto my host machine. And then in this case, I can connect using the CLI and just connect the local host with the user and great – who cares that it’s running in a container. But then, again, when I’m done, I just tear down the containers. Nothing’s left installed on my machine and I can move on to the next project or I can come back in to work tomorrow and spin it back up and pick up where I left off. That’s the beauty of containers here. And then finally, when you’re building your development environments, considering adding other tools, what are the other things that are going to enhance your development environment, whether it’s visualizers or message publishers or just other tools to help build environments that’s going to help developers troubleshoot, debug and be more effective. So that’s development environments.
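In shell form, that connect-and-tear-down flow looks roughly like this (the database name and user are assumptions):

```sh
# The container's port is published to the host, so a native client connects
# exactly as if Postgres were installed locally:
psql -h localhost -p 5432 -U postgres catalog

# Done? Tear it all down; nothing stays installed on the machine:
docker compose down
```

So that’s development environments.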
Shifting integration testing (15:05)
Let’s talk about integration testing. Integration testing is a fun one because a lot of people have different ideas of what it is, so we’ll spend a second here to define it. Specifically, we’re looking at the part of the outer loop where we’ve pushed code and want to validate that our application works with its external dependencies, whether that’s the database or, in many ways, the things that we just saw in development. How do we now do that in a testing loop? This is typically the last step before deploy, so the better our test coverage is in this loop, the more comfortable we are with deploying our code more often.
There are a lot of challenges that come with integration testing. We’ve got to manage the different services, and when we run these in CI, we either have to already have them up and running somewhere, or we have to spin them up. So we have to start managing lifecycle, or the infrastructure to support and maintain these things. How do we reset each of these services between tests? Sometimes we may have a test suite where we need to reset the data between every test run, and sometimes the whole suite can run in one go; each test may have a different set of rules and expectations around the data it interacts with. How do we even write these tests in the first place, given that we’re having to manage the lifecycle of services before we can actually write them? And how do I run them locally if I don’t have the ability to spin up all that stuff in the cloud or wherever my CI environment runs?
Preferably, what we want is to work towards the ideal testing pyramid: unit tests at the bottom, then integration tests, and the more integration testing we have, the more we know our application is going to work well with other services. But in a lot of cases, this is hard, because again, you have to manage all these different services. So a lot of times teams end up with the testing hourglass: yeah, we don’t have a lot of integration testing because that’s hard; we’ll just deploy to a staging environment or production or whatever, and do more end-to-end or UI testing once it’s deployed. So we want to think about how we can get the proper test structure so that we’re not deploying and hoping, but can be confident before our code gets out.
Introducing Testcontainers (17:43)
To do this, we’re going to talk about the Testcontainers framework and library. Testcontainers is an open source collection of libraries, as it says here, for providing ephemeral, lightweight instances of test dependencies. At the end of the day, it’s programmatic containers. Where before, with Compose, I could say docker compose up with everything declared in a YAML document, Testcontainers takes a very programmatic approach. And you can see lots of numbers here: how many Docker Hub pulls it has, how many organizations use it. There are some big company names, too. In fact, Netflix has talked quite a bit about how Testcontainers has helped their engineering culture become a testing culture, which has been really fascinating to watch.
With this programmatic approach, we can plug containers into the lifecycle of the tests themselves. In this example, we’ve got a code snippet that defines a variable named postgres, tied to a Postgres container for a particular image. This image is coming from Docker Hub, but it can also be your own image from your own internal registry. As long as the code running this has credentials to pull from a private registry, you can use an image from anywhere. So if I plug this into my tests, before my tests run I can start my containers, then run my tests, and when I’m done, clean up my containers. And I can choose, depending on the scope of the tests, whether I do this startup and teardown with every test or once per suite. I have that control because it’s part of the programmatic interface.
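As a minimal sketch of that lifecycle in Java with JUnit 5 (the image tag and test body are placeholders, not the slide’s exact code):

```java
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;

class ProductRepositoryTest {

    // Any image reference works here, including one from a private registry.
    static PostgreSQLContainer<?> postgres =
            new PostgreSQLContainer<>("postgres:16-alpine");

    @BeforeAll
    static void startContainers() {
        postgres.start();            // database is up before any test runs
    }

    @Test
    void storesAndReadsProducts() {
        // Testcontainers hands back the live connection details:
        String jdbcUrl  = postgres.getJdbcUrl();
        String user     = postgres.getUsername();
        String password = postgres.getPassword();
        // ... point the code under test at this throwaway database ...
    }

    @AfterAll
    static void stopContainers() {
        postgres.stop();             // clean up when the suite is done
    }
}
```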
Now, there are a lot of ready-to-go modules for the common use cases: databases, message queues, AI and machine learning models, etc. There are a lot of really good modules; in fact, I’ll pull them up here in the browser. You can see different categories of things, and you can just pick one of these up and use it. It’s interesting, too, because a lot of companies, as they start using Testcontainers, will start making their own modules, because really a module is just a wrapper, another higher-level abstraction on top of the primitives that Testcontainers brings. For example, let’s pick Kafka. Before Kafka became something you could run with a single container, you had to run ZooKeeper. So I would use the Kafka module, and yeah, it just looks like I’ve got a one-liner for Kafka, but behind the scenes it’s an abstraction that spins up ZooKeeper, gets the multiple services connected, waits for health checks to pass, and then starts Kafka. It’s a pretty cool abstraction, to be honest. And so a lot of companies and organizations will start to create their own modules to support their own services, their own workflows, etc.
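That Kafka one-liner looks roughly like this in Java (the image tag is my own choice):

```java
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.utility.DockerImageName;

// One line to a running broker; the module hides the orchestration and readiness checks.
KafkaContainer kafka = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0"));
kafka.start();
String bootstrapServers = kafka.getBootstrapServers(); // wire this into the code under test
```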
Now, when we’re writing these tests, and we’ll do a demo of this in just a second, it’s quite often the case that as a developer, I can write these tests and validate they work on my local machine. But what happens when I get to CI? A lot of CI environments may be a little restricted or locked down, where I can’t do Docker-in-Docker, or maybe I’m running in Kubernetes pods and I don’t even have a Docker CLI or a Docker engine to interact with. Testcontainers Cloud helps solve that: you can delegate the running of containers to cloud resources. They basically spin up when you need them and tear down when you no longer need them, and it’s incredibly fast. We’ll actually see that here in just a second. So again, it’s a very valuable resource, especially in CI environments.
Demo: Testcontainers (21:30)
So with that, I’m going to jump to IntelliJ, and this IntelliJ project is a Spring Boot application. I’ve got a test here (I’ll remove that breakpoint there) which is basically going to spin up the application and hit the API, validating that it stores things in the database. It also uses Redis, so again, we just make sure everything works. This particular test case extends an abstract class that spins up the Postgres database and Redis. Now, one cool thing I’ll call out, and this is a special integration with the Spring Boot framework here: when Postgres starts, you’ll see that it says “with exposed port 5432.” This isn’t necessarily going to be exposed on the host as 5432; it’s just going to pick an ephemeral port to use. But the service connection annotation basically says: hey, this is a container, take a look at it. It discovers it’s a Postgres container and extracts the configuration out of the running container to update the rest of the Spring config. So let’s say it picks a random port, 50000. When Postgres starts on port 50000, the JDBC URL will automatically be reconfigured to point to the container at that exposed port. I don’t have to do anything about it. Same thing with Redis here.
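A sketch of what that wiring can look like, assuming Spring Boot 3.1+ and class names of my own choosing:

```java
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.boot.testcontainers.service.connection.ServiceConnection;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@SpringBootTest
@Testcontainers
abstract class AbstractIntegrationTest {

    @Container
    @ServiceConnection   // Spring Boot inspects the container and rewrites the JDBC URL
    static PostgreSQLContainer<?> postgres =
            new PostgreSQLContainer<>("postgres:16-alpine");   // 5432 inside; host port is ephemeral

    @Container
    @ServiceConnection   // recognized as Redis by image name; spring.data.redis.* is pointed here
    static GenericContainer<?> redis =
            new GenericContainer<>("redis:7-alpine").withExposedPorts(6379);
}
```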
So let’s go ahead and run these tests. Let’s just run them all. While it’s doing this, I’m going to pull up the Docker Desktop dashboard here on the side, and you’ll see as things get started that I’ve got this running in Testcontainers Cloud right now, so I’ll do that demo first. You’ll see that it’s pulling containers and starting stuff, but notice that it’s actually not running anything on my local machine. And that’s fine. So with this, my tests pass, it’s using containers, and I didn’t have to do anything special to connect to containers that were running out in the cloud. Now, for kicks, I’m going to reconfigure it to run locally. In this case, I want to set a breakpoint. Let’s go back here, and now let’s debug this particular one. I know the right-click menu didn’t show up through the screen share there, but okay, I’m running this particular test with a breakpoint set. We’ll see that the containers have started up, and let’s wait for the breakpoint to hit. Okay. The breakpoint is hit.
What I’m going to do now is open up the dashboard full screen and go into the database. I’m going to exec in and connect as the test user, and I can see the tables here. Select star from the demo entity, and we’ll see that there’s a value there. And the test, if I pull it back up here, is expecting that value. So what I’m going to do is adjust that now: update the demo entities, set value equal to “another value that will fail the test.” And now if I select again, we’ll see that it’s updated. All right, let’s go back to IntelliJ and tell it to run the query. Now we see that the result object has that updated value, “another value that will fail the test.” So again, this is a live demo of it pulling from the database. Obviously, this will now fail the test. And while it’s doing that, the containers from the other tests have already died and gone away.
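Reconstructed roughly in shell form, with the container, database, table, and column names all assumed from the narration:

```sh
# Exec into the running Postgres container while the test is paused at the breakpoint:
docker exec -it <postgres-container> psql -U test -d test
```

```sql
-- Inside psql:
select * from demo_entity;
update demo_entity set value = 'another value that will fail the test';
select * from demo_entity;   -- confirm the row changed before resuming the debugger
```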
And so again, with the Testcontainers framework, we see that the test failed; I see the output here. I’m able to programmatically spin up containers and just say, hey, here’s what my application needs. In this case, it’s very easy to plug into my Spring Boot code, but the libraries work in a lot of different ways, for different languages, different frameworks, etc. A lot of really cool value here. And now I can commit this code, push it to my code repo, and my CI/CD system can either run this in the pipeline or use Testcontainers Cloud, and it’s going to work the exact same way. Now I’ve got much greater confidence that my code is going to work as I expect it to, even out in production.
Shifting vulnerability assessment (26:32)
The last thing we want to talk about is shifting vulnerability assessment, and security in general. If we go back to the SDLC, one of the things that we hear from developers a lot is: quite often I push my code and have to wait for builds and tests, and eventually the security assessment runs. It’s not until much later in the process that I hear, oops, I have a dependency that’s out of date, or I need to fix something here. So how can we move that a little bit leftward so that I can solve things earlier?
Now, containers bring a different approach and tactic to how we think about application packaging, and also security. With a container, I’m building my application into an image in one environment and then promoting it everywhere the app needs to run. I’m no longer going into my staging environment, getting a VM, and installing everything I need there, and doing the same thing in production, and then, when there are issues, having to go patch all those machines. No, I’m building an image and promoting it everywhere the application needs to run. That means if I have a vulnerability in my application, I just need to build a new container image. But it also means that if I can build that image locally in my development environment, I can find out if there are issues in my container image before I push it off and my CI system builds and runs it. Theoretically, I should be able to build the exact same image locally as what my CI/CD system is building and get the same outputs and results. And that’s exactly what Docker Scout is intended to do.
What is Docker Scout? (28:20)
Docker Scout is one of our products, part of our suite of services, that helps me know, at the end of the day: am I building good images? Do they meet my organizational policies around runtime configuration, vulnerabilities and CVEs, open source license issues, etc.? And when issues are found, it gives me, as a developer, actionable insights on how to fix them. There are a couple of different ways it provides that, depending on the features and views I’m looking at, and we’ll see each of these in action in just a second.
So feature number one is the centralized view: I can open up a centralized web panel that gives me a view across my entire organization, and I can leverage various integrations with other repositories or CI/CD systems to gain insights across the software supply chain. Then, as a developer, I can get recommended workflows for how to fix issues. And then secure development: I can get image analysis and understand how to fix things as they’re happening.
Demo: Docker Scout (29:31)
So with that, let’s actually try this out. I’m going to first open up scout.docker.com and go into a demo environment. In this particular demo environment, we see quite a few different policies up top, and you’ll see that some of these are vulnerability-based and some aren’t. For example, this first one, “default non-root user”: I want to set a policy for my organization that images are configured by default to run as a non-root user. In this case, there’s only one image being monitored, and it’s not compliant. So that means: hey, let me look at this particular policy, this particular image is not compliant, so let’s go fix it so that it’s running as a non-root user. And there are others related to copyleft open source licenses and things we need to be aware of there, vulnerability assessment (am I using outdated base images, and am I using unapproved base images), and also whether my supply chain attestations are actually attached to my image.
This gives me a cross-organization view of how I’m doing. And as new vulnerabilities are discovered and new CVEs are announced, one of the cool things about containers is that the software bill of materials (or SBOM) basically serves as the cargo manifest of what’s in the box, what’s in the container image. So as new vulnerabilities come out, it’s just a simple cross-check: hey, this vulnerability affects these packages in these versions; cool, which SBOMs have that vulnerable package in them? Then I can just open this up and say, cool, let me look at that CVE, let me look at the image that’s affected by it, and let’s learn more about how to go fix it. Scout gives me this.
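You can also pull that SBOM out yourself from the CLI; a quick sketch, with the image tag borrowed from the demo below:

```sh
# Print the SBOM (the "cargo manifest") that Scout indexed for an image:
docker scout sbom scout-demo:v1
```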
Now, as a developer, let’s take a look at some of the tools that are here. I’ve got a project here; it’s a Node project, and I’ve got a Dockerfile. It’s an old Dockerfile, again just for demonstration: I build from Node 16, copy my package.json, do my install, copy my app code, etc. So let’s go ahead and do a docker build, and we’ll call this Scout demo v1. This is going to build, and since I ran this earlier, it’s pretty quick; most of it’s cached. And I can immediately do a docker scout quickview, so it gives me the hints here, and let’s run it for the Scout demo v1 image. Again, it’s looking at that SBOM and saying, hey, what issues exist with this? Now, I’m interacting with the CLI here, and we can do this with the GUI dashboard in just a second as well. But one of the things Scout recognizes is that, hey, container images are complex. They’re made of layers. So when I have vulnerabilities here, are they coming from things I inherited? Did I start with a bad base? Or did I introduce those problems through the image that I built? In this case, I see that my base image is introducing quite a few vulnerabilities. But if I change my base image from node:16-alpine to node:18-alpine, I can see my criticals drop from 1 to 0, highs go from 4 to 0, mediums go from 9 to 0, and lows go to 0 as well. That’s a pretty good state to be in. So let’s go ahead and make that change. All right.
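The CLI loop from this step, reconstructed with assumed image tags:

```sh
docker build -t scout-demo:v1 .        # build the image locally, the same image CI would build
docker scout quickview scout-demo:v1   # summarize vulnerabilities, split by base image vs. added layers
```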
Let’s now build v2 here. And I can do the same thing: a scout quickview of v2. In this case, I’ve got to wait for the SBOM to be analyzed, which may take a couple of seconds. When I did this demo last time, I think I went to Node 20 instead of 18. So we see it’s doing the indexing; it’s got it now. And we see that we’re pretty good here, but we also see, hey, I still have some vulnerabilities. I didn’t get them from my base image. Okay, so I’ve done something else here, and I can look at the CVEs with docker scout cves. This gives me quite a bit of output, and I can see that I’m using Express, and an old version of Express at that, and it’s got a couple of different vulnerabilities. It even lets me click a link to learn more, gives me the CVSS scores, etc., but it also tells me the fix versions. So in this case, okay, 4.17.3 would fix this one, but it wouldn’t fix these other two. So let’s go with the fix that’s got the highest version number. In my package.json, I’ll change express to that newer version.
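Again with assumed tags, that step of the loop looks like:

```sh
docker scout cves scout-demo:v2        # per-package CVE list, with severity, CVSS scores, and fix versions
# ...bump the express version in package.json, then rebuild and re-check:
docker build -t scout-demo:v3 .
docker scout quickview scout-demo:v3   # confirm the Express CVEs are gone
```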
And now let’s build my v3. So again, Scout has helped me along the way to understand: am I building good images or not? And while this demo is focused on vulnerabilities, as we saw in the dashboard, there are other policies that can be leveraged as well. And for those who may not be quite as comfortable in the CLI, I can pull up the GUI as well. I can look at the images, and let’s look at that v2 one we saw earlier. Again, I can see that there are no vulnerabilities coming from my base image, but I introduced quite a few in my own image. And if I want to go layer by layer, I can do that as well. I can see the same information: Express 4.17.1 had these vulnerabilities in it, with the details and fix versions and whatnot. So again, all of this is available through the GUI as well.
Recap (35:19)
So to wrap up here: at the end of the day, it’s all about how we deliver software. How do we deliver effectively? How do we deliver efficiently? How do we deliver swiftly and securely? Lots of buzzwords, of course. The idea of shift left isn’t intended to say, hey, developers, guess what, you’ve got more work to do. It’s: what are the things that you’re already having to take care of, and the things that you should be thinking about? And how do we move them leftward so they’re closer to where you’re developing and where you’re already spending your time? That way, you can stay in flow, which, speaking for myself anyway, is where I’m happiest. So how can we do that?
We’ve talked about three specific opportunities today: Dev environments, integration testing, and security tooling. But again, once you start thinking about containers, there are a lot of different ways they help simplify this process, because you build once, you can run it anywhere, and you can develop with containers. There are just a lot of really good opportunities.
So with that, I say thank you. Thanks for tuning in. Thanks for watching this. Hope you learned something. If you’ve got any questions, feel free to reach out to me. Again, my name is Michael Irwin. I’m a member of the DevRel team here at Docker. And it’s been a pleasure being here with you. Thank you all.
Learn more
- New to Docker? Get started.
- Deep dive into Docker products with featured guides.
- Subscribe to the Docker Newsletter.
- Get the latest release of Docker Desktop.
- Have questions? The Docker community is here to help.