DockerCon
Getting Started with Wasm, Docker, and Kubernetes
Nigel Poulton, Trainer, Self
Transcript
So, Kubernetes is not the easiest word in the world to pronounce. Funny story. Years ago, when I was first getting into Kubernetes, my wife would hear me talking about it on Zoom calls or things like that. She’d hear me just referring to it. And it wasn’t quite an established word in our family at that time. And she famously, within our family, used to call it, Kumbernauts. Are you working on that Kumbernauts thing again? I’m like, Kubernetes, never mind.
Can I get a super quick show of hands? Who does anything already with WebAssembly? Okay, so this is a good audience for me. So, this session was originally put in as a workshop. But we don’t have tables and things, so it’s more of a live demo now. But I’m going to show you a link to a GitHub repo where you can run all the commands and things later on, in your own time. So, come on, let’s crack on. This is me. I’ve been involved with Docker and containers and Kubernetes and now WebAssembly for a little while.
Table of Contents
Setting the scene
Just very quickly, I want to set the scene a little bit for WebAssembly. Years ago, when I first got my hands on VMware, virtual machines, I was young. And I didn’t really understand a great deal about technology, but I love technology. And that first experience with virtual machines or VMware literally blew my mind. This idea of running multiple computers on a computer, I was like, pff, head explosion. And I could see almost straight away that this was going to be a huge thing. So, I decided to dive in at the deep end, and it was really positive for my career. Then a few years later, Docker came along. And it was really early. I did a little bit with Docker 0.7, but 0.9 was when I first got into it. But I still remember that first time I spun up a Linux container and figured out how it was different to a virtual machine. I had that kind of, pff, head explosion moment again. And I dove straight in and thought this is going to change the world, and it did just like VMware did. Similar experience with Kubernetes, although I don’t feel like Kubernetes changed the world quite the way that virtual machines and Docker did.
Fast forward to the present. And I was starting to hear this Wasm or WebAssembly. And I’m like, you know, you hear it enough times and at some point you’re like, right, I’m going to have to just at least wrap my head around what the concepts are, so I don’t embarrass myself in a conversation with somebody. So I write books, okay? And as soon as you write a book, people suddenly think you’re amazingly clever and you know everything. And that puts loads of pressure on you. People would come up to me at events, and I was kind of scared somebody would come up to me and ask, so what do you think about this WebAssembly stuff? And I’d be like, I don’t know. Anyway, so I tried to figure out WebAssembly and, no joke, it was immediate. It was like with VMware and Docker. I was like, head explosion. This is going to change the world. I dove straight into the deep end. And here we are today.
Anyway, you’ll hear people at events like this and other cloud native events talking about three waves of cloud computing. So the first wave was virtual machines. They kind of kicked off the whole cloud. We did everything in virtual machines. Along came containers: smaller, faster, more portable. More portable kind of just because they’re smaller, really. But they could do things and go places and run places that virtual machines couldn’t. And they were just way more efficient. So they drove, or they powered, I guess, a second wave in cloud computing. Well, there’s a third wave coming. So we’re all out there. We’ve got our bodyboards or surfboards or whatever. I don’t know if you’ve done bodyboarding or surfing at all, but you’re kind of a little bit out in the ocean, and you see these ripples, and you’re like, that one’s going to be a wave. And sometimes it is, sometimes it’s not. But I felt like the wave was quite far away a year ago. Whereas now, I’m like, the wave is here. I’m getting on the board, and I’m giving it a bit of this. And it looks like it’s going to be a big wave.
Anyway, that’s basically setting the scene. Now, I do want to make a shameless plug. The last session of the day today, right? Five past four, downstairs in breakout room two. If you’re watching this on YouTube or wherever later on, there’ll be a link to this session. But we’re going to have a pretty honest and open conversation about the future of WebAssembly: its strengths, its weaknesses, what its good use cases are, and what it’s not good for. But also this coming together of WebAssembly and containers. Is it a thing? And if it is a thing, is it a good thing? Is it not? Anyway, later on, right.
What we’ll do
This is the agenda. This is what we’re going to do. Now, I am encouraging you just to watch. You can follow along if you want, but I’ve got to keep the pace up a little bit. We’re going to write a super simple Rust web application. Now, I am as far away from a Rust developer as it is possible to get. I couldn’t write Rust code if my life depended on it. But we’re going to use a tool that makes it super simple. The way Docker makes things simple, okay? So we’re going to write a simple Rust web app. We are going to compile that to a WebAssembly binary. We’ll talk a little bit about what a WebAssembly binary is later. And we’ll see it. Then we’re going to test it and just make sure that it works. And it’s going to be so simple, you’re either going to be like, gee, why did I bother coming to this session to see that? Or you’re going to be like, wow, that’s actually the cool part. The fact that it can be so simple is what’s good about it. Then these are the commands that we’re going to use, by the way.
We’re going to use a WebAssembly framework from the folks at Fermyon called Spin. Other frameworks and other WebAssembly runtimes and tools do exist. But this one’s just going to make it super simple for us. We’re going to take the WebAssembly artifact after this, though. And then we’re going to use Docker tools that we already know to containerize this into an OCI image or a Docker image, yeah? Then we’re going to use docker run, and we’re going to run that compiled WebAssembly app inside of a Docker container. Then we’re going to push it to Docker Hub. Again, other registries do exist. Docker Hub is just super simple. Basically any OCI registry or any registry that supports OCI artifacts will be cool.
So we’re going to build and compile this app. Then we build and package it into a Docker container. Then we’re going to build a multi-node Kubernetes cluster. All right. All doable on your laptop. It’s going to have a load balancer exposed onto my machine here on port 8081. And we’re going to use this port a little bit later to access the WebAssembly app when we’ve got it running on the Kubernetes cluster.
We’ll spin up a three-node Kubernetes cluster; the node on the far end is the control plane node, right? And then two workers. The important thing is we’re going to have some software on there that will enable us to run these WebAssembly workloads. And it’s all super simple containerd stuff. Is there anybody who doesn’t know the fundamentals of Kubernetes? And it’s totally okay if you don’t. Okay, right.
So Kubernetes is this super high-level orchestrator. I’ve got some slides on this in a minute. But Kubernetes sits right at the top there. And it’s just kind of accepting work from us and saying, okay, node one, go and run this please, or node two, go and run this task. Kubernetes actually can’t run containers or do low-level stuff like that. It uses other tools for that. And probably the most popular tool Kubernetes uses to run containers is a tool called containerd. So most Kubernetes clusters that you spin up out in the wild will have this containerd software on them already. Our build is going to be a little bit special. We’re going to have a WebAssembly runtime on there. Lots of jargon right now; we’re going to see it. So don’t stress if this feels like a lot. Three-node cluster with enough software on there to run a WebAssembly app. We’re going to label one of the nodes to help Kubernetes schedule our WebAssembly app. And we’re going to deploy something called a runtime class. So much jargon. I’m really sorry.
Very quick story, right. When this was originally going to be a workshop, we budgeted 45 minutes, and I ran through it the other day and it finished in 17 minutes. So this might seem complex, but when we get to do it, it’s really not. It’s so simple. Anyway, we’re going to deploy a runtime class to our Kubernetes cluster. We’ll deploy an application to it that references the runtime class. And that’s going to make sure our application gets scheduled on the right node. And then we’ve got a bunch of networking stuff and ingresses and things just to make a web client or a browser on my laptop, on that port 8081, able to access our application. That is the plan.
Prerequisites
A few prerequisites, okay. You will need an up-to-date version of Docker Desktop. There are six prerequisites, so if you’re going to take a photo, wait until number six is up. This is the GitHub repo at the bottom, by the way, if you want to follow along or if you want to do this exercise later at home or in the hotel or whatever. The reason you’re going to need a relatively new version of Docker Desktop is that it’s got some functionality, or support, built in for WebAssembly. Right now, these are kind of experimental features, and I’m going to show you how to enable them. In the future, if you’re watching this on video, they may be turned on by default.
Okay, you don’t strictly need a Docker Hub account. Other registries exist, as long as you have some way of pushing the image to a registry, because the Kubernetes cluster is going to need to be able to pull it from a registry. You will need Rust installed. Other languages are supported: Python, C, Go, a bunch of languages. We’re just going to be showing Rust. You’ll need the Fermyon Spin CLI. You will need kubectl, the Kubernetes command line utility. It seems really complex, but it’s not. And k3d to build our Kubernetes cluster. If you want to take a photo, now is the time.
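If you want a quick sanity check before you start, something along these lines covers that list (the exact version numbers move quickly, so newer is generally better):

```bash
docker version                  # a recent Docker Desktop, with the Wasm settings shown next enabled
spin --version                  # the Fermyon Spin CLI
rustc --version                 # the Rust toolchain
rustup target add wasm32-wasi   # the WebAssembly compile target for Rust
kubectl version --client        # the Kubernetes CLI
k3d version                     # for building the local cluster
```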
Demo
It’s time to get to the demos now. We are now about to build and compile a simple application as a WebAssembly binary. Now then. The first thing we’re going to do is use the Fermyon Spin application which I’ve got installed.
So, in Docker Desktop at the moment, if you go into Settings and then Features in development, you want “Use containerd for pulling and storing images” enabled. And this checkbox down here enables Wasm. Like I say, in the future these may be turned on by default. Right now, you need those two checkboxes ticked, and then hit the apply and restart button.
We can see I’ve got Spin installed on my machine. If you’ve got Rust installed, okay, you will need to add the wasm32-wasi target. I’ve already got it installed on my machine; here it is, this one here. Now what this is going to do is interesting. It’s at the core of WebAssembly. When you come to compile an application, normally you will say, right, I want to compile this application for, whatever, Linux on Arm, yeah? Or Linux on AMD64, something like that. WebAssembly is its own binary instruction set, meaning instead of compiling to Linux on Arm, we can now compile to Wasm. That application will run anywhere, on any system or host that has a WebAssembly runtime. So it’s very much delivering on the promise of build once, run anywhere, making it a lot more portable than containers. So I’ve got that installed. Any other prerequisites? No, I don’t think so.
If I go spin new, this is going to allow me to build a new application in any of the languages that are on the screen right now. Like I said, I have no clue how to code Rust, but I’m brave. So I’m going to deploy an HTTP Rust application. So if I hit return, I’m going to call it dockercon, of course. Everybody enjoying DockerCon, by the way? Yeah. That’s what I’m talking about. Okay, cool. So I call it dockercon. Don’t care about the description. And I want this web application to respond on a path; I’m going to say yo, because I’m stuck in the 90s, or the 80s maybe. But that will give me a new directory called dockercon. And if I look at the contents of that directory, I’ve basically got three files. So lib.rs is the Rust application. If I go vim and edit this, the template does the usual and says, Hello, Fermyon. We’re going to change that to, Yo, DockerCon, with a capital C in the middle there as well. Save my changes. Basically that is the application. Simple web server.
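For reference, the generated src/lib.rs is only a handful of lines. A minimal sketch of what it ends up looking like after that edit, based on the Spin 1.x Rust template (the SDK types and builder calls differ between Spin releases, so treat this as illustrative rather than exact):

```rust
use anyhow::Result;
use spin_sdk::{
    http::{Request, Response},
    http_component,
};

/// Spin HTTP component: every request routed to this app lands here.
#[http_component]
fn handle_dockercon(_req: Request) -> Result<Response> {
    Ok(http::Response::builder()
        .status(200)
        .body(Some("Yo, DockerCon!".into()))?) // the only line changed from the template default
}
```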
The spin.toml file here is a little bit important. This tells Spin, and I’ll tell you what Spin is again in a second, but this tells Spin how to run the WebAssembly application and how to build it. So Spin is a WebAssembly framework that comes with a WebAssembly runtime and a bunch of tools that make it easy to build and deploy WebAssembly applications. For example, this line close to the bottom here. When we do a spin build to build the application into a WebAssembly binary, behind the scenes it’s going to run this kind of scary Rust command, with cargo being the tool in the Rust toolchain that compiles Rust applications. So instead of going cargo build --target blah blah blah, I’m just going to go spin build. So, thank you, Spin, for making that easier. But it also tells the Spin runtime how to run the application.
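A rough sketch of what that spin.toml looks like for a Spin 1.x http-rust app (field names and layout vary a little between Spin versions; the /yo route is the path chosen a moment ago):

```toml
spin_manifest_version = "1"
name = "dockercon"
trigger = { type = "http", base = "/" }
version = "0.1.0"

[[component]]
id = "dockercon"
source = "target/wasm32-wasi/release/dockercon.wasm"   # where spin build drops the compiled binary
[component.trigger]
route = "/yo"
[component.build]
command = "cargo build --target wasm32-wasi --release" # the scary cargo command spin build runs for us
```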
Build the WebAssembly binary
This is where it’s going to build our WebAssembly binary. So, basically three files. I don’t care about Cargo.toml at the moment. So, just two files that we’re interested in. If I now go spin build and, hopefully, the conference Wi-Fi doesn’t make this take too long, this is going to build, or compile, that application as a WebAssembly binary. Fingers crossed, it doesn’t take too long. Prayers to the demo gods. So that’s not bad. Okay, hang on. Now if I do another tree, that’s loads more files. Okay, but don’t be put off or scared. This one here, dockercon.wasm. This is our compiled WebAssembly binary. So spin new, give it a name, give it a path, spin build.
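For the record, that step is a single command, and the output lands wherever the source field in spin.toml points (assuming the default template layout):

```bash
spin build
ls -lh target/wasm32-wasi/release/dockercon.wasm   # the compiled WebAssembly binary
file target/wasm32-wasi/release/dockercon.wasm     # should report a WebAssembly (wasm) binary module
```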
I’m feeling pretty clever right now. I have compiled a Rust application. If you don’t know Rust, I’m reliably informed it is a rock-hard language to learn and work with. And WebAssembly is super new. I am not the cleverest person out there by a stretch, but I have just compiled a WebAssembly application, and I feel pretty good about it. Super. So any file that is a .wasm file is a WebAssembly application.
Now then, okay, so we’ve got the file, but does it actually work? Well, just to test, all we do is spin up. And that brings up our application. And it is listening on localhost here, on port 3000. So if I jump over to a browser, cheeky new tab. Oh, that’s tiny. Let’s make this humongous. Let’s go as big as we can. 500%. That’s the biggest it will do, who knew? You can’t go bigger than 500%. But there is our application. It’s actually working. But that’s not what we’re here for. We are here to be able to run it in Docker and to run it in Kubernetes as well.
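If you’d rather test it from a terminal, the same check looks roughly like this (Spin’s default HTTP listener is 127.0.0.1:3000 unless you override it):

```bash
spin up &                       # serve the app locally using the Spin runtime
curl http://127.0.0.1:3000/yo   # should come back with the Yo, DockerCon message
```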
Package as OCI image
So, where are we at? Okay, so we’ve built and compiled the Wasm application. Now it’s time to package it as an OCI image, run it with Docker, and push it to Docker Hub. So Docker’s Docker, right? Even with WebAssembly. And in order to build something, Docker really likes a Dockerfile, but I don’t have one yet. So I’m just going to create a brand-new Dockerfile in the root directory of the application here. And I am going to jump over to my GitHub repository that has a Dockerfile in it. Now, look, we’re at DockerCon, right? And if you go to a bunch of the sessions, you are going to see some humongous Dockerfiles that, like, make your eyes want to bleed. Well, this is bleeding-edge stuff here, WebAssembly and everything, right? So this is going to be a complex Dockerfile, I’m sure. Oh, my goodness. I reckon it’s probably the easiest and simplest Dockerfile you’re going to see here today at DockerCon. So I’m going to copy this text into the file in a second. And we are saying, instead of building from a Linux base image like we normally would, this is a WebAssembly app. It doesn’t need any of that. We’re going to build from the empty scratch image. And then all we’re doing is we’re copying.
Now, this is important. We’re copying in the built WebAssembly binary here. And we’re copying it into the root folder of the container. Okay. That’s going to be important, because we’re going to have to edit the spin.toml in a second. But we’re also copying in spin.toml, because that’s going to tell the runtime, when we come to execute on Kubernetes, exactly how to run this application. So, um, oh, I’ve gone way too big to even have the copy button available. So if I copy that into here, so simple, that’s saved. And I do need to change the spin.toml file, because, like we’ve just seen, in the container we’re copying the WebAssembly binary into the root folder of the container. We do not need any of this long leading path here. So, dockercon.wasm is what it was called. If we save our changes to that, we are ready to do a docker build.
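For reference, the whole Dockerfile is roughly this (the source path assumes the default spin build output location):

```dockerfile
FROM scratch
COPY ./target/wasm32-wasi/release/dockercon.wasm .
COPY ./spin.toml .
```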
Now, I’m going to go docker buildx. Okay. You don’t have to do buildx. You can just do docker build on the most recent versions of Docker Desktop, but I’m going with buildx here in case you’re running older versions or anything like that. There’s a new flag here that you may not have seen: the --platform wasi/wasm flag. That’s going to write some metadata to the image saying the architecture is this and the operating system is that. And then tools such as docker run can reference this at runtime to help build the right environment. Then I’m just tagging the image with… however I want to tag it. I’m going to push to Docker Hub later. My user ID on Docker Hub is nigelpoulton, so I’ve named it like that. So let’s build it, and it’s already built, and so it should be, right?
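Written out, the build command is a one-liner along these lines (the nigelpoulton/spin:wasm tag just mirrors the demo and is illustrative; substitute your own Docker Hub ID and repo name):

```bash
docker buildx build --platform wasi/wasm -t nigelpoulton/spin:wasm .
```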
Basically, all we’re doing is copying in two files. One of them is a really small WebAssembly binary, and the other is a configuration file. So if I go back to the top, this is going to wrap across the lines, right? But if I go docker images and, what did I call it? Okay. So I’ve got some line wrapping, but this is our Docker image here. And half a meg, right? I don’t think I said this at the beginning, or maybe I implied it, but when containers came along as the second wave of cloud computing, they were smaller, faster, and more portable than virtual machines. Well, WebAssembly is smaller, faster, more portable, and more secure than a traditional Linux container.
So, we’ve got that as a container image there, 500K, fabulous. All right. Well, now that it’s packaged as a Docker image, we can execute it with docker run. Um, we’re going to call the container dockercon. Obviously, again, because we’re here at DockerCon. The runtime flag here is saying, okay, this is a little bit special. It’s not a traditional Linux container. So, containerd, when you come to run this, instead of using runC to go and talk to the kernel and build namespaces and fork processes and, you know, start the app in there, talk to a different runtime, please. And we’re asking it to talk to the spin runtime. We are also specifying platform=wasi/wasm here, but we don’t have to.
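Pulled together, the run command is something like this. The runtime identifier is the one the Docker+Wasm preview uses for the Spin runtime, and it varies by Docker Desktop release (older previews used io.containerd.spin.v1), so treat the exact flags as an assumption and check the docs for your version:

```bash
docker run -d --name dockercon \
  --runtime=io.containerd.spin.v2 \
  --platform=wasi/wasm \
  -p 5005:80 \
  nigelpoulton/spin:wasm   # the image built earlier; some preview builds also expect a trailing "/" argument
```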
Let me just do this real quick. If I go docker inspect and just grab the ID of the image here; I just wanted to show you this. Yeah. So when we built the image, whoops, well, we said that one of the flags, the platform flag, was going to set the architecture and the OS. docker run can read this, so I don’t need to actually specify it on the docker run command line. But I’m going to, just for completeness. Just make sure I’ve got the correct docker run command here. It is. It’ll be exposing the application on port 5005 on my laptop here, and we are referencing the correct image.
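The quick way to see that metadata is an inspect with a format string; assuming the platform flag was set at build time, it should print wasi/wasm:

```bash
docker image inspect nigelpoulton/spin:wasm --format '{{.Os}}/{{.Architecture}}'
# expected output: wasi/wasm
```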
So if I just go docker run and, oh no, I’ll tell you what, I will docker ps. That is a lot of line wrapping. I can hear people telling me how to do it easier, but my typing while I’m up on stage is shocking. I think it doesn’t matter if you guys can’t see this. Yeah. That’s the one. So if I run that docker run command again, we’ve got a container. If I now go to port 5005, I think it was, localhost 5005. And it’s, yo, super small again. Let’s make it big. Same WebAssembly application, packaged as an OCI image, running as a Docker container. So easy even I could do it. Brilliant, progress.
We want to actually run it on Kubernetes as well. So in order to do that, I’m going to make this a little bit bigger so we can all see again. I am going to push it to Docker Hub. Again, a regular docker push command. It’s super small, so even if the conference Wi-Fi is shocking, it’s going to arrive very quickly. If I go up to Docker Hub now and refresh this page, we should get a new image called spin. Here we go. And there it is, operating system and architecture wasi/wasm. And about 500K. So it’s now up on Docker Hub as well. And the point to take here, right, and I know I’m over-stressing this: I’m not a Rust developer. Yeah, I know a little bit about Docker. But these are just our regular Docker commands that we’re working with here. There’s nothing actually new so far for WebAssembly, which can be quite a powerful thing when you’re an organization or an individual that’s invested a lot of time and effort in learning and deploying these tools and having your staff work with them.
Build a Kubernetes cluster
We are now pushed to Docker Hub, which means we are ready to go and build a Kubernetes cluster. I’m going to just very quickly rattle through this bit here. I’m going to build the cluster with k3d, which, don’t worry, that’s wrapped. I’ve got a slide to show you the command that’s running. I’m going to kick this off, because I don’t know how long it’s going to take over the conference Wi-Fi. While that goes away, we’re going to come back to PowerPoint. This is the command that we’re running here.
So, k3d is a really small distribution of Kubernetes that runs inside of Docker. It’s so easy, if you’ve got Docker Desktop on your machine, to spin up a multi-node Kubernetes cluster locally. The command basically says k3d cluster create, and we’re going to call it wasm. That’s going to get us a small Kubernetes cluster with a single control plane node. We’re telling it which image to build our control plane and worker nodes from. And then we’re saying, okay, we want to be able to access our applications inside of this cluster from port 8081 on my laptop. And can I get two worker nodes as well, please?
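This is roughly that create command. The node image here is the example k3d image published by the containerd-wasm-shims project; the registry path and tag are an assumption on my part, so use whatever the GitHub repo pins:

```bash
k3d cluster create wasm \
  --image ghcr.io/deislabs/containerd-wasm-shims/examples/k3d:v0.9.2 \
  -p "8081:80@loadbalancer" \
  --agents 2
```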
On the right-hand side is a picture of what we’re actually going to get. The important thing is that this image here comes with all of the Wasm stuff already bootstrapped into your Kubernetes worker nodes. So we don’t actually have to install any WebAssembly stuff into this cluster that we’re building here. We’re going to inspect it so that you see what’s going on behind the scenes. But if you have an existing Kubernetes cluster and you want to add WebAssembly support, like runtimes and shims and things, to it, I recommend you check out the Kwasm project.
Now, this is like a moment of truth for me. Has this cluster built over the conference Wi-Fi? I feel like it’s a little bit like Schrodinger’s cat, right? It’s both dead and alive until we observe it, until we collapse the wave function. So I’m about to look, I’m about to collapse that wave function, and right now it’s successfully built, it’s also failed, and it’s still trying to build. So what’s actually going to happen? Boom! It’s back here. Round of applause there for quantum physics. Appreciate that. So if I go kubectl get nodes, we should have a three-node cluster, which we do. One control plane node and two worker nodes. Next up, we’re going to move on to the WebAssembly configuration.
So back to the slides really quickly. I said before that Kubernetes is this super high-level orchestrator, and it asks other tools to do the low-level work of building containers and starting and stopping them. So there’s Kubernetes up in the heavens or wherever, and it’s lazy. It’s like, I don’t want to build containers: containerd, you do that for me. And containerd, despite being called containerd, even it can’t start containers. So it says, okay, well, I’ll get somebody else to start them for me. This is basic Kubernetes stuff, okay? It talks to a low-level runtime called runC. So runC, please go and do the horrific kernel work of building namespaces and cgroups and things like that, and out pops a container. And then runC’s done its work, and it wants to take a rest. So we fire up a shim, and the shim basically sits in between containerd and the running container and keeps containerd in the loop, like, yeah, we’re still running. And if containerd wants to stop the container or restart it, it can talk to the shim. Fabulous. Basic Kubernetes stuff. However, everything on this diagram below containerd is opaque to Kubernetes. That means Kubernetes can’t see below, or doesn’t care about what’s below. And to be honest, below the shim, even containerd doesn’t really see that. So this kind of architecture means that we can swap runC out for something totally different. Kubernetes doesn’t know or care. Containerd almost doesn’t know or care as well. So we can put in a Wasm shim.
Wasm shim
Now, the architecture of a Wasm shim is slightly different to runC. I’ve got a picture of it, right? So a Wasm shim has two main components. It runs this code called runwasi, which is a Rust library that basically interfaces with containerd over on the left there. My right. It’s got another piece of software in there, which is the WebAssembly runtime. And that’s what actually starts the WebAssembly host and starts the WebAssembly applications and things. Now, runwasi is an official containerd project. So this is all legit stuff. I’m super proud about this, right? I designed that runwasi logo. Shameless plug, but I love it. Because I’m not a developer, I love different ways that I can contribute to the community. And I made a logo. Anyway, it’s always runwasi for WebAssembly stuff with containerd. There are loads of different shims out there; we’re using the spin runtime here. Okay?
So that’s the basic architecture. Now, back to this diagram: out pops a WebAssembly application. That’s basically how it is. So actually, just going back here, we’re going to make sure that we have got WebAssembly runtimes on at least one of our worker nodes. So I’m back at the command line. We’ve got this three-node cluster. And I’m going to say, yeah, docker exec. And we’re going to go into, I’m going to call it, agent-1. So, we’re going to get a shell on there. And I’m going to put it up at the top. First of all, we need to make sure that containerd is running. Okay? So here, here is containerd. So that’s running. We then need to make sure that we’ve got some WebAssembly shims on this machine. So, ls: they generally get installed in the bin directory. And they are always prefixed with the containerd-shim prefix. Okay?
So I can see we’ve got five shims on this machine. runC is the default one for running Linux containers. The other four are WebAssembly shims, okay? And the one that we care about is the spin shim, fabulous. Having it installed in that directory is not quite enough, though. Oh dear, there’s a really long path I have to type in this next command, okay? We need to make sure that these shims are registered with containerd, or that they exist in the containerd configuration file, which normally is in /etc. But on a k3d cluster, it’s about 1,000 layers deep in the filesystem. So it’s under /var. If I get this right, I’ll be so proud: /var/lib/rancher/k3s/agent/etc/containerd/config.toml, fabulous. Look at the bottom of the configuration file as well. We have got four WebAssembly runtimes here, referenced in the containerd configuration file, and we are interested in this one at the top, the spin one, okay? Fab. Just in the interests of time, okay? I’ve got another super long command that will dump the active containerd configuration, just to make sure that it is loaded. Trust me, it is loaded.
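If you want to poke around a node yourself, the commands are along these lines (k3d node containers are named k3d-&lt;cluster&gt;-agent-&lt;n&gt;, so agent-1 here is k3d-wasm-agent-1):

```bash
docker exec -it k3d-wasm-agent-1 sh                         # shell on the worker node

ps aux | grep containerd                                    # containerd should be running
ls /bin | grep containerd-shim                              # the runc shim plus the Wasm shims, including the spin one
cat /var/lib/rancher/k3s/agent/etc/containerd/config.toml   # the shims registered as containerd runtimes
```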
Right now, we have got a three-node Kubernetes cluster. We have got a WebAssembly shim, which has a WebAssembly runtime inside of it. We have a WebAssembly application loaded up on Docker Hub. We are almost ready to run this application on our Kubernetes cluster. However, do you know what? We could actually run it right now. In this cluster here, every node, including the control plane node, has exactly the same configuration: all running containerd and all with the spin shim.
In the real world, you are probably going to have more of a heterogeneous node configuration, where some nodes have some Wasm shims on, some nodes have no Wasm shims on, and some nodes have other Wasm shims on. So in that kind of situation, you need to be able to label your Kubernetes nodes, so that we can schedule our work to the correct node. Now, we know that agent-1 has got the Wasm shims on. The others do as well, but we have just been on this one, and we know for sure that it does. So if we do a kubectl label. There we go. Okay, agent-1, and we are going to just add the wasm=yes label. Use whatever label you want, it doesn’t matter. So we have labeled that node. Now, the final thing that we need is a runtime class. I am going to make sure we haven’t got any here yet: kubectl get runtimeclass. I really wish there was a short name for runtimeclass. That looks right. Okay. No runtime classes yet. So very quickly, back on the old PowerPoint. We have got a three-node cluster. We know that spin, the WebAssembly shim and runtime, is on agent-1. We have labeled that node. Now we are adding a runtime class.
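The label and the check look like this (the node name is the k3d one for agent-1; the wasm=yes key and value are arbitrary, as mentioned):

```bash
kubectl label nodes k3d-wasm-agent-1 wasm=yes
kubectl get nodes --show-labels | grep wasm=yes   # confirm the label landed
kubectl get runtimeclass                          # nothing there yet at this point
```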
Now, the runtime class does two things. It selects our nodes with that label. So it is visible, although white text on yellow is not great here. It’s saying select, or send, or schedule work to any node with the wasm=yes label. But also, when you hand the work over to containerd, tell it to use the spin shim instead of the runC shim. Fabulous. Okay. That means we can then deploy an application. We will see it in a deployment YAML file in a minute. That references this runtime class, and the application will be deployed to the right node. Eight minutes left. Oh, I said I was not going to run over time, and I will not run over time.
But we are going to go straight back to the command line here. So now that we’ve got that, I need a runtime class. And it’s here. We’ve got one in the GitHub repo here, called rc.yaml. And as we can see, I’m going to paste it in. This text will be a little bit bigger when I paste it into the file here. So if I go vim rc.yaml. Whoops. And paste it in there. Basically, we can see it’s got a name. It’s called rc-spin. We’re going to reference that in a deployment file in a minute. It’s going to make sure that the spin shim runs the work, and it’s going to send it to any nodes with wasm=yes. Fabuloso.
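The whole rc.yaml is only a few lines. A sketch of it, mirroring what the demo describes (the rc-spin name is whatever the repo uses; the handler has to match the spin runtime name registered in containerd’s config.toml):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: rc-spin        # referenced by runtimeClassName in the app's Deployment
handler: spin          # must match the runtime name in containerd's config.toml
scheduling:
  nodeSelector:
    wasm: "yes"        # only schedule onto nodes carrying the wasm=yes label
```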
So we’ve got the runtime class created. Next, we need to create our application file. So if I go vim app.yaml. And back to GitHub here, we can see we have got an app.yaml file here. I’ll walk you through it in just a second. Right. Okay. If you don’t know too much about Kubernetes, this is going to look horrific. It’s not really. We’re basically saying, okay, this is our application. It’s up on Docker Hub. That’s what we called the image. Can I get three copies of it on my Kubernetes cluster, please? And, Kubernetes, when you come to schedule it, because it’s special, it’s a Wasm application, schedule according to the rules in that runtime class we just created. So agent-1, and make sure that the WebAssembly shim, the spin shim, runs it.
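The Deployment part of that file looks roughly like this; the names and image tag mirror the demo and are illustrative, and the real app.yaml in the repo also defines the Service and Ingress mentioned next:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wasm-spin
spec:
  replicas: 3                        # the three copies asked for
  selector:
    matchLabels:
      app: wasm-spin
  template:
    metadata:
      labels:
        app: wasm-spin
    spec:
      runtimeClassName: rc-spin      # ties the Pods to the spin shim and the wasm=yes nodes
      containers:
        - name: dockercon
          image: nigelpoulton/spin:wasm   # the image pushed to Docker Hub earlier
          command: ["/"]                  # placeholder command used in the wasm-shim examples; the shim runs the Spin app itself
```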
We’ve got some more stuff down here that I’m not going to go into. This is going to make sure that we can then connect to it. So if I save that file and I go kubectl apply -f app.yaml, this is how we deploy applications to Kubernetes. And we can see that, okay, our Deployment was created, the Service was created, and the Ingress was too. And if I do a kubectl get deploy, we should see three replicas.
Ooh, ah. I didn’t apply the runtime class. Ah, somebody’s watching. I love it. Okay, so I’m just going to kubectl delete. Thank you so much for that, because, you know what, live on stage, I was never going to troubleshoot that when the clock’s flashing at me. So I’m going to delete this first, just to be cleaner. Um, kubectl apply, apply, uh, the runtime class. It’s falling apart. It’s not really. Did I not save it? Louder. Oh, the app. Teamwork. Love it. Brilliant. Okay, let’s find the one that deployed the app. Ah, ah ah ah. That’s the one. Right. No, not that one, not that one. This one. Right. Okay, kubectl get deploy. Three already up and running! Take it back. You’re giving yourselves a round of applause there. It should be me that’s clapping.
Okay, so the application is running. What I want to check is, if I do a kubectl get pods -o wide, I should be able to see where they’re running. It’s not easy to see because it’s wrapped, but they are all running on agent-1. So the runtime class and the label have done their job. They have sent the Pods to the correct node. So if I come back to my browser here. Let’s lose this. And if I go localhost, and it was 8081 on the /yo path. If I click refresh. It’s done it. It was already up there because we’d run it on Docker, but this time it is talking to my Kubernetes cluster. It’s running on Kubernetes, and it is done with three and a half minutes to go.
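The two checks, for completeness:

```bash
kubectl get pods -o wide        # all three replicas should show agent-1 in the NODE column
curl http://localhost:8081/yo   # through the k3d load balancer and the Ingress to the Wasm Pods
```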
So just very quickly recapping. With a bunch of audience help. Thank you. Appreciate it. And a bunch of cool stuff from Docker, a bunch of cool stuff from Fermyon, and a bunch of cool stuff from the people at containerd. We have taken a super simple piece of source code and compiled it as a WebAssembly binary. We’ve containerized it, in a way; we’ve made it into a container image. We’ve pushed it to Docker Hub. We’ve built and configured a Kubernetes cluster that will run Wasm apps. And we actually ran a Wasm app. So that is the demo. And if it seemed hard this first time, go and watch it again on the replay later. Go to the GitHub repo. Do it yourself. You’ll realize it really is not hard.
Q&A
Does anybody have a question? And if you do, please run to one of the mics and speak loud. I’ve got this guy in the middle. Quick as you can. And then if we’ve got time after that, I’ll come to you. Nice and loud in the mic, please.
Great demo. So, actually, you said Kubernetes doesn’t know about anything that runs below containerd. But you still had to create the runtime class?
So, the question was, why did we have to create the runtime class in Kubernetes, if Kubernetes doesn’t know about WebAssembly? It’s basically just to help with scheduling and make sure that the WebAssembly workload lands on the right node. I feel like I haven’t answered this question.
So if you don’t want to schedule those containers in another node, you don’t need to create the runtime class?
Yeah, so if we wanted to scale on another node, we would have to just label another node.
You have to label them to schedule them on those nodes. I mean, you don’t need to do that if it’s already configured on all the nodes?
Yes. So in my example, because containerd has the Wasm shims on all three nodes, I would not have needed to do that. No, absolutely. Other questions?
Okay, the question is, why? To clarify on that: you have WebAssembly, you have Docker. Both of these are designed for portability, and I’m just trying to get at why these are synergistic, as opposed to redundant, ways of achieving that.
So please come at five past four today, because we’re going to talk about stuff like that in the panel. But I’m just going to challenge you on that. I love Docker and containers, but they are not designed to be portable. When you build a container, it is pinned to an OS and a CPU architecture. We call containers portable because they’re small. They’re easier to copy between a registry and a node than a VM. But that’s not portability. Whereas with WebAssembly, as long as you have a runtime, you can run it anywhere. With a container, yes, container build tools make it easy to build for multiple platforms now, but if you’ve got multiple architectures, you have to have multiple images. So you get a bit of image sprawl.
Honestly, everybody, thank you so much. We are nailed on time. Really appreciate it. A round of applause to yourselves. Thank you.
Learn more
- Announcing Docker+Wasm Technical Preview 2
- Build Kubernetes-ready applications on your desktop
- Get the latest release of Docker Desktop.
- Have questions? The Docker community is here to help.
- New to Docker? Get started.