DockerCon
Master the Container Security Model with Sysdig
Eric Carter, Product Marketing Team, Sysdig
Transcript
My name is Eric Carter. I’m on the product marketing team at Sysdig. I’m joined by Alex, our principal security architect. I brought him along; he wasn’t on the schedule, but that’s because he can answer the hard questions. We want to talk to you a little bit about container security. How many folks here identify mostly as developers? Any DevOps kind of people, if you consider that different? Anything else, like security or operations? Okay, so we’ve got a good mix of people.
Part of our goal today is to talk a little bit about the container security problems we’re seeing in the industry, and to cover some best practices you can follow when you’re building containers. We will also delve a little bit into what we like to call shield right: once your containers are running, what are some of the things you can do?
We will remind you, as most of you probably saw or heard the news, of the integration we’ve done between Sysdig and Docker Scout to provide some additional help with prioritizing security issues and CVEs when you’re having a tough time figuring out which to fix first. What we call runtime insights is going to help with that, so we’ll deliver that message as well.
Container security model
We’re going to talk about the container security model, so if you’re security or operations personnel rather than a developer, don’t worry about it, you’re in the right place. Awesome.
We’re going to cover what we see as some of the things you should be doing — it’s what we’re calling the container security model. Hopefully, it’s helpful for you. We’re going to talk about shift left/shield right as I mentioned, and some best practices across the whole life cycle.
You might have security issues for a number of reasons. One is based on what’s inside your container: maybe some things you’ve done when building it that are probably not the right things to do. Other times it’s not necessarily what you’ve done but the surrounding environment, things that give access that probably shouldn’t be there, people being able to look in your doors or go through a back door, and whatnot. That’s where we want to focus, and there’s a lot happening in the industry, whether it’s containers, Kubernetes, or cloud, that we need to keep pace with from the time we’re coding all the way through the time things run, because we are deploying a lot more frequently than we have in the past. The number here: 60% of companies report that they deploy either many times a day or every few days, which is a very different cadence from what we used to do.
Shift left
We have to keep up with things. How do you prioritize without slowing yourself down? How do we detect and respond to issues that might take place? Let’s talk first about shift left. Many of you may be like me and start to roll your eyes when you hear this term, but it’s an important concept: when we are developing, do the right things from the earliest point and fix issues as early as possible. There’s a lot we can do here, and we’re going to roll through it. The payoff is that we’re not getting stopped or having our projects delayed later, because it’s costly to try to fix these things at a later point in time; the better approach is to handle them up front. What are some of the security issues many of you are probably contending with? One is simply the sheer number of logged CVEs that get reported back if you’re scanning images. I ran into some of you yesterday who said, “I don’t even think we’re doing this.” That’s a concern, but there are lots of places you can do it, and we’ll talk about some of those.
The other thing comes from one of the reports we did just studying our own customers. Sysdig runs a SaaS platform, and a lot of our customers are container-oriented. We looked to see how many were actually running containers as root. Now, sometimes you need to, and we’ll talk about that, but 83% of the containers we were seeing were running as root. That’s highly permissive and probably not the right thing to do, and we’ll talk about why in just a minute.
The other issue that we see, and I think Alex will talk to some of this, is that with the images we’re starting from, because it’s so easy to go grab a base image and start building with it, there are some things in there that should keep us up at night.
Basically, what it comes down to is, if you look at the content of an image, there’s a fair amount of stuff in there that maybe isn’t the greatest. You’re starting with Nginx from some random spot on the internet. Using that in your overall infrastructure saves you some time, but there are typically things in there that you might not know about, so the first thing you should do is scan it, right? Look for what you might find configuration-wise and vulnerability-wise. That generally finds about 90% of the things you might care about: cryptojacking, embedded secrets, proxy abuse, things that could go wrong. But about 10% of the stuff within that container doesn’t show up until it actually starts to execute.
So, 90% is great, that’s a fair amount, but there’s still 10% of risks and threats that don’t show up until you’re actually running your workload on that particular container. The reason that matters is that a lot of that stuff has a pretty significant impact. The report Eric was talking about shows that a cryptominer burns roughly $53 of your resources to generate $1 for the attacker, so that XMRig binary baked inside the image is a big deal. If you’re not looking at these things in context, when it’s being developed, when it’s in your repository, when it’s being admitted to your cluster, you’re missing out on key areas, and you have to keep that in mind across the development life cycle.
A lot of this comes from our threat research team, which a lot of companies in our space have. They’re discovering these things and writing a lot of interesting blogs to say, hey, we’re seeing this happen, and a lot of it is in that 10% arena.
Build practices
Let’s talk a little bit about build practices. We’re going to run through this quickly; we’re not going to show you all the code and how to do it, but some of this is no-brainer stuff. Use trusted sources, right? We want to get images from known publishers and trusted sources, things like Docker Hub, which has done the right thing with certified publishers and also scans images in those repositories to give you some idea of what’s going on with them. I was listening to a customer call this week, and one of them said, yeah, one of our guys was rolling out some stuff with a Helm chart, and he just had a line in there that said go pull this image, which bypassed all of the security checks by getting it from someplace that wasn’t being scanned. These are the kinds of things you want to avoid; just make sure it’s trusted.
It’s all about making sure that you know your sources, and there are a lot of different ways to do that. One really low-hanging-fruit check is the distribution of the image. Is it some random Ubuntu image? Is it a UBI image? Is it coming from a commercial entity like Chainguard? There are a number of ways to handle that, and it puts you in an overall better posture as you start building that application out.
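To make that concrete, here’s a minimal Dockerfile sketch; the tag and digest below are placeholders rather than real values. Pinning the base image to a known publisher and a specific digest means every build starts from exactly the image you vetted, not whatever a mutable tag happens to point at tomorrow.

```dockerfile
# Pull from an official, certified publisher and pin by digest.
# The digest here is a placeholder; substitute the one you actually scanned.
FROM nginx:1.25-alpine@sha256:0000000000000000000000000000000000000000000000000000000000000000
```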
One thing is pretty clear, and that’s avoiding unnecessary privileges. Don’t run as root if you can help it. Now, look, there are some cases where that’s needed. In fact, there are some things we do where, to get the right level of insight, you need to have that container run as privileged, right? But if it is needed, scope it tightly so that it can’t go doing other things in the environment.
Any other things to add to that? I mean not really; just be cautious. You want to make sure you’re at least specifying who the container should run as. Don’t just let it run as whoever it wants. Give it a particular username or whatever you use for those things. Really, really low-hanging fruit is to make sure that the posture of that particular container is limited. Permissions are a bear, right? They’re never necessarily the most fun thing to deal with, and we’ve all done the chmod 777 just to get on with our day, but we probably shouldn’t do that in production.
Great, and correct me if I’m wrong, but when I go grab an image for the first time, by default it’s set to run as root, right? Typically, yes, although you’ll also see a lot of images running as user 1000, which at least is better than user 0.
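As a rough sketch of specifying a non-root user (the user name, UID, and binary path here are illustrative), the Dockerfile side looks something like this:

```dockerfile
FROM alpine:3.19

# Create a dedicated, unprivileged user instead of defaulting to root (UID 0).
RUN addgroup -S app && adduser -S -G app -u 1001 app

# Give that user ownership of only the files it actually needs.
COPY --chown=app:app ./myapp /usr/local/bin/myapp

# Every instruction after this point, and the running container, uses the non-root user.
USER app
ENTRYPOINT ["/usr/local/bin/myapp"]
```

If a workload genuinely does need elevated access, runtime options such as a read-only root filesystem or dropped Linux capabilities can still narrow what that access is able to do.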
The next one is a little trickier, I suppose: all these things that end up in my image, dependencies or other packages that get inherited, or, as with the Nginx example earlier, things I’m not even going to use. When those things are vulnerable, you’ve got what we call bloat in your image. You want to be careful about that; keep your images slim.
This one is a multi-stage build. I don’t quite understand this concept, do you? Basically, it just means layering on what’s necessary as you need it. Don’t go get the thing that has everything you might potentially need at some point in time. Start with something very simple and add what you need to that image in the layers you need. It’s about keeping the thing minimal; don’t have it include everything under the sun. You probably don’t need netcat in your container. Yeah, it’s cool, but it’s also an exposure point, so just make sure you’re doing your diligence. Do things in a way that makes sense. Keep it slim; keep it minimal.
You can start with some minimal images that are known, right, and there’s also this concept of distroless as well. At the end of the day, you don’t want to expose a bunch of things like ports that are not even necessary for your app, right?
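Here’s a hedged sketch of a multi-stage build; the module path and binary name are made up for illustration. The first stage carries the full toolchain, and only the compiled artifact is copied into a minimal distroless runtime image with no shell or package manager to attack.

```dockerfile
# Stage 1: full toolchain, used only to compile.
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/server ./cmd/server

# Stage 2: minimal runtime image; it also runs as a non-root user by default.
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=build /out/server /server
ENTRYPOINT ["/server"]
```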
There’s a lot you can do to give adversaries fewer ways to get to you and do the wrong thing. We’re talking about doing the right thing. Don’t hard-code credentials; this is key, right? Use some sort of external store if you can. We just don’t want things like secrets inside the image, even if it’s expedient, because as soon as someone gets to that, they can start doing other things.
That’s really a best practice: start early and often. I can’t tell you how many times I have seen folks accidentally slip credentials into production, because that’s what the developer did on their laptop to keep things simple. They could pass the test; they could get their stuff through quickly. It’s convenient, don’t get me wrong, but you need to practice those pieces of hygiene from the very beginning. If you don’t, accidents happen. It’s almost never malicious; a user isn’t reliving their Office Space fantasy and stealing pennies from you. They’re just trying to do their job, right? No one’s really at fault other than bad practices and bad hygiene, so it’s about enabling folks to do it the right way from the very, very start.
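One way to keep credentials out of the image entirely is BuildKit’s secret mount. A minimal sketch, with an illustrative secret id and a stand-in script, looks like this; the secret is available only during that single RUN step and is never written into a layer.

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine:3.19
WORKDIR /app
COPY . .

# The token is mounted at /run/secrets/<id> for this step only and leaves no
# trace in the final image. fetch-private-deps.sh is a placeholder for whatever
# needs the credential at build time.
RUN --mount=type=secret,id=api_token \
    ./fetch-private-deps.sh "$(cat /run/secrets/api_token)"
```

The value is supplied from outside the Dockerfile at build time, with something like `docker build --secret id=api_token,src=./api_token.txt .`.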
Clearly, the next one: don’t include confidential info. We have a really good blog on this, which I’ll point you to at the end. The gist is that you might think something isn’t in your image anymore, but it’s actually still sitting down there in one of the layers. Anything confidential you added early on can still be pulled out of the image, even if you deleted it later.
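As a quick illustration of that pitfall (the file name and setup script are made up), deleting a file in a later instruction does not remove it from the layer that added it:

```dockerfile
FROM alpine:3.19

# Anti-pattern: this layer permanently contains the private key.
COPY id_rsa /tmp/id_rsa

# Too late: the rm only hides the file from the final filesystem view. Anyone
# who saves the image and unpacks the earlier layer tarball can recover the key.
RUN ./setup.sh && rm /tmp/id_rsa
```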
How many folks are using private registries, on-site or somewhere? Good. It’s good to have that, but also make sure you’re using the right permissions to keep it protected and the right secure protocols for communicating with it, and so on. Yeah, and don’t be afraid to have a couple of registries. You might have one where you dump all of your artifacts, everything under the sun goes in there, and then one where you keep your golden images, the stuff that’s running in production. Maybe those come from a different registry than where your dev branches live. That helps you maintain a certain level of rigor as you promote things through the process.
Automatic checking
That’s a little bit about some of the things we see as key. If you haven’t been thinking about it in terms of build, there’s also the ability to automate the checking of some of these things. Just in case we forget, we can put something in place like a linter that will warn us if we’ve not done the right things. For example, there’s the Haskell Dockerfile Linter, hadolint, which exposes some of those issues to you before you get too far.
So you can see ahead of time the things that probably aren’t set up the best way. Some image scanners will also provide those insights; Sysdig, for example, does look at some of these things, like, hey, this image is set to run as root. With that level of visibility you can actually block something from proceeding through a pipeline automatically, beyond just getting a readout of what’s vulnerable or which configurations are off.
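Running the linter is a one-liner; the findings below are only a rough approximation, and the exact rules and severities reported depend on your Dockerfile and the hadolint version.

```console
$ hadolint Dockerfile
Dockerfile:1 DL3006 warning: Always tag the version of an image explicitly
Dockerfile:7 DL3002 warning: Last USER should not be root
```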
Now, how many folks think that in their org they aren’t really scanning images, or at least not sufficiently? Yeah, I saw you. Maybe it’s the nature of your business or the size or the speed at which you’re working, but it’s good to do this in some form. A lot of registries have scanning built in; it’s there, and it’s probably a good idea to use it, whether that’s something in the cloud, or Quay, or otherwise. It’s also available as standalone tooling. We partner with an organization called Snyk; we’re now partnering with Docker Scout. These are things you can put in place to make sure images are being scanned. I know Scout does this: it’ll say, hey, here’s what we recommend, what you can bump up to in order to address this.
What’s cool is when you get a readout that says, if you just upgrade to this minor revision, it will actually address this many of the problems. Like one shot, and you’re in much better shape from a security perspective. So those things can be really important or useful in terms of saving time, and I know Alex will help make the point about a lot of this. It isn’t just about being secure, but it’s also about saving you time.
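If you’re using Docker Scout, that kind of readout is available from the CLI as well; a rough sketch, with a made-up image name, would be:

```console
# Suggests base image updates and shows how many vulnerabilities each would remove.
$ docker scout recommendations myorg/payments-api:1.4.2
```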
The next slide really brings this to bear. How many folks use, like, a CI/CD pipeline, like a Jenkins, or whatnot? Yeah that’s some of you, and some may not be doing that yet. But consider pushing some of the scan into that, right, so that when you’re in something like a Jenkins or whatever it might be, you’re getting these readouts right there. And you’re addressing them before it actually makes its way down into your registry repository. We also believe that that’s not the end of it all. So, registry scanning, scanning as you’re building, and those kinds of things. There’s a lot of tools that can help you get there, so that you can address these things quickly and not have someone tap you on the shoulder later on. And you should also consider things around runtime, and I’ll let you take it.
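Inside a pipeline, the scan can double as a gate. Here’s a hedged sketch, with an illustrative image name and severity threshold, assuming the current Docker Scout CLI flags:

```console
# Returns a non-zero exit code when critical or high CVEs are found,
# which fails the CI step before the image is pushed to the registry.
$ docker scout cves --exit-code --only-severity critical,high myorg/payments-api:1.4.2
```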
Where this really comes into play is, wouldn’t it be nice if we knew exactly what was being used by an application? From a developer’s perspective, you often start with some sort of base image, layer your stuff on top of it, whatever your application needs to run, and deploy into production. Inevitably, you end up getting a laundry list of CVEs and vulnerabilities to go fix two, three, four weeks down the road. A lot of that comes from the number of packages in that image that you’re never actually leveraging. There’s some great technology coming out now, from Sysdig and some others, doing this notion of runtime scanning. It isn’t just scanning your image yet again; it’s looking at the libraries within your image and correlating that with what’s actually being executed by the container.
So if I build a custom image, I can figure out which libraries in that image I’m actually using to power my application. Ultimately, it allows you to say: these 15-20 packages, these 30, 40, 50 libraries in those packages, these are the things that make up my core application. Then there are all these other libraries and packages in there that are never being called, but they have vulnerabilities.
Let’s do two things. Let’s create a report, and let’s draw a line between the stuff that’s in use and the stuff that’s not in use. Let’s go fix the stuff that’s in use, and then tomorrow let’s go remove all the vulnerable stuff that was never actually being called by the application. What that means is the security personnel aren’t giving you a list that’s 50 pages long of vulnerabilities, because you’ve limited the scope to just the stuff that’s actually running in the application. Your next iteration cycle, your next patching cycle, your footprint is significantly lower. That first pass is still fairly long, but instead of fixing everything, you’re removing the fluff you’re not using, and the next time you’re only getting reports on the stuff that’s actually in use by the application. It saves the developer time; it saves security personnel time; it gives you more time in your day to do the stuff you actually care about, adding value to your company rather than just patching what you’ve already shipped. So that runtime element is really intriguing, because it’s how we can help you scope down the potential risks within that image and then move forward only doing the things that actually matter.
Very good. Let’s put a finer point on it. This is another thing we saw in terms of why this is so hard and why you’re potentially overwhelmed by the CVE reports coming back at you. There’s a lot there: many of our customers, the first time they scan an image, see a lot of high and critical findings, and it’s like, how do I even deal with this? Part of the challenge is that every day there are more and more CVEs being logged, so it becomes a kind of whack-a-mole scenario. That’s where the runtime element comes in.
But we do want to look at severity; a lot of folks will just address the highs and criticals. They also want to know whether there’s a known exploit or not: sure, it’s vulnerable, but maybe no one in the wild has ever been seen using it for bad purposes. Knowing that is important, because you can then prioritize based on what is exposed to the internet. There’s another term for that, I can’t remember. Anyway, net-net: is it even reachable? Is there a fix? If there’s no known fix, I probably want to figure out what else can protect against that vulnerability, or watch for behavior indicating that the known vulnerability is being exploited. And then we’re adding this new dimension of whether it’s in use. That’s a Sysdig tenet; if there’s nothing else you remember about Sysdig, remember that.
Sysdig
At the end, Alex will tell you about the whole platform, but it’s about helping you with visibility around all things runtime, and we use that to help prioritize. Many of you, if you’re developers, are more on the left side of this diagram. We talked about configs and permissions, but all along the way, depending on what you’re working on, we use this concept of “in use.” In use can either help you decide what to deprioritize, or it can help you decide what to shift and change, right? How many folks are running containers in the public cloud? You’ve been given all these permissions, and you never use them, or they’re not needed to do your job. We’re going to help you understand that so you can shift and change it.
One of our customers said, hey, you’re saving me a ton of time, and that’s the net-net of what Alex was saying. Take one question I got in the booth yesterday, and I recognize many of you I spoke with: what’s the secret sauce for Sysdig? How do you know whether something’s running, beyond just the container? How do you know which packages are running?
The way we instrument is a little different than most. We’ve taken a bottom-up approach to getting details out of the infrastructure, and a lot of that comes from our lineage. Sysdig was founded about 10 years ago by the co-creator of Wireshark. Wireshark was a fantastic tool, when you owned the infrastructure, for grabbing packets and seeing what was going on inside your applications. The problem we set out to solve was how to do that for the cloud, when you don’t own the infra, you don’t own the switch, you don’t own any of that stuff, you don’t have access to a SPAN port. What do you do? The solution we came up with was that the least common denominator in the cloud is the operating system we’re running on top of.
If we can go into the kernel, and we can intercept system calls, we have the same level of granularity that we had with a packet. So we’re actually arguably more granular in some ways, but the net-net is that we’re able to go in and see every single system call, every single process, every single file access, and we can correlate that back to the libraries, the packages, the things that it came from. So it’s letting us get really detailed information on the runtime workloads and see what was going on there. Then we took it to the next stage, saying, okay that’s great, we can see all this data for the running workloads, what about the cloud itself? What about the other SaaS services we’re using?
We took that same architecture of intercepting system calls and applied it to things like the Kubernetes audit log, CloudTrail logs from AWS, the similar logs from GCP, your logs from Okta, and all sorts of other sources, in order to look for malicious activity in those streams of data. The entire engine we use looks at all this data coming in and looks for anomalous activity, right? You can almost think of it like how Snort looked for patterns in packets; Sysdig is looking for patterns in system calls, audit logs, log files, things like that. Cool.
Getting down to the short strokes here. Just to put a finer point on the container side. Your container may look like the one on the left. But what if we can tell you which ones are being used and save you a lot of time, money, and effort, and reduce some of the fatigue that you’re experiencing now? Just in the last few moments, if you missed it, this is part of a plan to start to leverage something like Docker Scout. This is what we’re bringing together with the folks at Docker, and we were super pleased to be able to announce this yesterday. Again, same value proposition. Probably the most important is the bottom one: deliver secure images faster.
So what happens is this information gets sent over to, or pulled by, Docker Scout in the form of a VEX document (Vulnerability Exploitability eXchange), and then you’ve got visibility into it, which looks something like this. The net-net is a clear view into what’s actually impacted, because somebody can get to it, versus what’s just sitting there dormant. That’s key, and we love that moment where you’re staring at the full list with dread, then you filter down to just the stuff that’s impacted or affected, and the number gets a lot more manageable, and we’re all a lot happier.
Shift left / Shield right
That’s the key message about Sysdig and Docker Scout I want to give you. Last thing: we mentioned shift left/shield right earlier, so what is shield right all about? For some of you this will be in your domain; for some of you it won’t. Either way, it’s an important concept. Alex described the how, but runtime threat detection is really about watching the thing you built once it’s running on a Docker node or a cluster of some sort. All the best vulnerability management in the world will not prevent some bad behaviors, so those are also the things we want to watch for. It can be your safety net: when I can’t fix that vulnerability, I’m going to watch for what the expected outcome of somebody trying to exploit it would look like.
It’s all about how do I deal with these runtime threats, and how do I figure out what is potentially at risk in my environment. The concept is that if I’ve got a running container, and I’ve got things that are exposed, I would need to look for stuff like cryptomining. I need to look for data exploitation. I need to look for “insert random MITRE term here” basically. Being able to have a runtime threat solution for that is rather interesting, rather necessary. Think of it like EDR but for the cloud, and that’s what we’re trying to fit into that space to help make sure that your stuff when it’s running isn’t at risk, isn’t exposed. Then we’re trying to take that data and give it back to the developers, give it back to the folks who are building the applications so that they can basically do better and spend more time doing what matters. Partnering with people like Docker allows the developers to get that data early and get it often to help avoid the potential risks in the future. Great.
The whole goal we have as a company is really to save people as much time as we possibly can, and we’ve talked about vulnerability management ad nauseam. It’s not that different when it comes to the other pillars, like looking at user permissions and resource access inside the cloud infrastructure. If we can look at the runtime logs, things like CloudTrail, we can see what users are doing, what they’re accessing, what they’re touching, and then we can suggest proper permissions and proper data sets for those different roles.
So if we look at user access in a runtime context, we can say, well, this user was never actually accessing these regions or using these permission sets, so let’s pull those away. It’s not that we don’t trust the user, but if that user’s credentials get exposed and the tokens get out there, it’s the blast radius they can influence, right? So let’s limit those things based on that runtime data.
Conclusion
We’ll be happy to take a few questions. Let’s stay secure, but let’s make sure we have time to innovate. I stole this from the Docker home page yesterday: build secure software from the start. We all want to do that in spirit; it just can’t be onerous, right? That’s where a lot of what we do, and what Docker Scout is helping with, becomes important as we move forward. Shift left and shield right: they’re both important. Shield right might not be in your purview, but it matters to whoever is concerned about the production running environment. So let us know if we can help. We invite you to ask any questions in the few moments we have left and to visit us out there. Many of you already have, so thank you for that.
All right, that’s the close. Thank you for being in the audience; there’s a lot more detail in our blog. All right. Cool. Any questions?
Q&A
Okay. A lot of people may know that security teams have a big issue: a long-standing adversarial relationship with containerization, for various reasons, and it seems that up to this point security has been strongly lacking in containerization for a while. It seems almost like a light at the end of the tunnel with Sysdig and Docker Scout now coming out for container scanning. My question is: security framework standards like ISO 27001 and CMMC, are these going to be met with Sysdig and Docker Scout?
In part, you can’t say that I’m running these two things, therefore I’m compliant, but they are part of your artifacts to state why you are compliant. There’s going to be controls and things you have to do that are outside of just scanning or reporting or getting that data set, but these would be artifacts within your control framework that you use to become compliant with those standards.
Okay, so this will make security teams a little more happy.
One of the things we’ve done spans the whole spectrum, from scanning to posture management to runtime, and what you get depends on the area. On the posture side, there’s a report for ISO 27001, and we’ll check your environment, typically cloud-based but it could be on-prem, OpenShift, things like that, and say: here are the reds, and here are the greens you forgot about; here’s the specific thing you’re not doing or not running; there are settings you want flipped on that maybe aren’t configured correctly. On the scanning side, if you need to be ISO-compliant, these are the things you should have in place. On the runtime side, if something triggers this policy we’ve given you, you should know you’re probably violating ISO or PCI or whatever standard you’re trying to meet.
Really, what it comes down to is that a lot of those compliance specs put the onus on vulnerability management, particularly in containers. You’ve probably had to deal with something like: I can only have 50 criticals in this environment in this particular scan, and if I have more than that, we’re going to fail our audit. So you end up spending a bunch of your time trying to de-risk a particular critical or high vulnerability, and that’s where the in-use concept comes into play. We can show that, yes, this is a high vulnerability, but it’s never actually being called, so we can de-risk it from a high down to a medium in our framework for that documented reason. Better yet, the next time we scan, it won’t be there, because we’re going to remove the package in the first place. A lot of those policies come down to counting criticals and counting highs against some limit, because there’s a certain tolerance you’re allowed. It’s silly, but that’s the way the auditors look at it.
I do agree with you, by the way, that the container side was sometimes off to one side and security people weren’t coming into play. That has been shifting, especially over the last 18 months. We’re seeing more security teams, and it might vary with the size of your organization, getting really engaged in cloud-native, and in cloud especially.
So, yeah, that’s where the whole shift-left thing really comes into play. If you can partner with your security team on container development early, your life is going to be a lot better, because they’re going to understand it better.
You showed one of the slides there with eBPF. I just want to ask two questions on that. If it’s just a probe, is it like doing any enforcement, or is it just observability? And, second, if you have, let’s say, Cilium also running and also working at the kernel level, how does that co-exist?
In the case of eBPF, everything runs in its own little memory space. Think of it almost like, and it’s a bad analogy, the JVM: eBPF runs in its own little containerized location and isn’t interfering with adjacent eBPF applications, so they can coexist. There’s nothing wrong with that. From the Sysdig side, we run as a read-only process, so it’s a non-blocking read we’re doing when we pull data out. There isn’t any direct enforcement inside of eBPF itself, and you shouldn’t be doing write access to the kernel. That would be very, very scary, and I would recommend running away from anybody doing that. Maybe not Nvidia, because they make you, but the core tenet is that all of the reaction, all of the enforcement on the things we see, happens after the fact or in line with it.
There’s a container drift policy in the Sysdig interface. It says, if a container suddenly has something new added to it after it’s been running, I can choose to stop that container, or kill the process, or prevent it from running in the first place. What happens in that case is that the container does the thing, the system call fires, Sysdig sees it, and the agent reads it and says, oh, I have a “prevent” action against this particular thing; I’m going to go kill that off with ptrace or something. It’s all done in line. It’s not happening in the kernel specifically, because we don’t ever want to write to the kernel. Does that make sense? Yeah, it makes sense.
And, second question: you mentioned compliance packs, like PCI and everything. Let’s say CISOs have their own compliance requirements. If it’s a banking application and they have CID data, and they want to be able to check whether that data got accessed across some network, can you bring your own policies for enforcement?
Yeah, to a degree. Everything we do in Sysdig is based on open source: the runtime engine is Falco, the CSPM engine is written in Rego, and all of the enforcement from our controllers is in Rego as well. So all these things are built on open standards.
So in a lot of cases you can certainly bring your own policy and run it. There are certain areas of the product where you can’t completely bring all of your own stuff, but you can customize what exists to fit your particular control requirements. Most of that limitation is within Rego, because, let’s be honest, it’s a really complex syntactical language, and we don’t want people running stuff that’s going to fail. So we have a number of highly customizable policies that you can use as your base.
By the way, since you asked about eBPF, I want to make an offer to you and anyone here who didn’t get one: there’s a pretty chunky book about eBPF, so maybe today’s the right day to pick one up and take it home. It’s not about Sysdig; it’s really about eBPF. We were heavily involved in eBPF very early on within the Linux kernel itself; some of our employees helped originate a lot of that code and get it into Linux. We’ve got some really fun resources on that. Or, it might just help you sleep better on the plane. By the way, eBPF, for those that don’t know, stands for extended Berkeley Packet Filter, which now has very little to do, if anything, with packets. It’s just a way to load and run these little programs at the kernel level.
We appreciate you being here. Thank you all.
Learn more
- Container Security and Why It Matters
- What are containers?
- Get the latest release of Docker Desktop.
- Have questions? The Docker community is here to help.
- New to Docker? Get started.
- Subscribe to the Docker Newsletter.