Transcript
So today we will talk about Testcontainers and how it can help you simplify local development and testing and be more productive as a result. A few words about me: I'm Anna, a solutions engineer here at Docker. I joined with the acquisition of AtomicJar, the company behind Testcontainers and Testcontainers Cloud, so I'm happy to share my expertise and talk more about Testcontainers. In today's session, we will cover how tests impact developer productivity and the correlation between the feedback loop cycle and feeling productive during the day. I will highlight what Testcontainers is and what it's used for. During the demo, we will cover how to run Kafka, PostgreSQL, and LocalStack without installing these services locally by using Testcontainers modules, how to run end-to-end tests using the Testcontainers Selenium module, and what Testcontainers Cloud and Testcontainers Desktop are.
Table of Contents
- What drives productivity? (1:29)
- Integration testing challenges (6:43)
- Why Docker API? (11:24)
- Introducing Testcontainers (12:14)
- Demo: Starting (19:20)
- Demo: Integration (25:49)
- Demo: local development (33:50)
- Demo: dashboards (44:03)
- Summary (55:17)
- Learn More
What drives productivity? (1:29)
So before we continue, I would like you to think about what drives your productivity and what makes you productive during the day. Just think about it. I actually have an answer, based on research that has been done: there are three main items that drive our productivity and make us feel more productive in our day-to-day work. The first is cognitive load. What that means is, for example, you open your code editor, pull the changes, and want to introduce a change according to your task, and you see that the code is messy and you need to dig in to understand what's going on inside. You have to spend a lot of effort figuring out how to implement the change, rather than having a clear structure and a clear understanding of each feature and what each method does, and simply implementing the change. It just requires additional brain power to process things.
The second item that impacts our productivity a lot is the feedback cycle, and here testing comes into play. It means how fast you can iterate: how fast you can verify that the changes you're introducing to the code work properly and as expected. Can you implement the change, run tests locally, and get feedback about your changes in seconds, or do you have to wait, sometimes hours, sometimes days, to get feedback about your feature before it goes to production? Testing really impacts the feedback loop. Last but not least is the flow state, meaning how concentrated you can be on your task. Slack messages pop up, Teams messages pop up, somebody calls you, you receive an email, or you have to wait a long time for your test results, so you jump from one task to another. You can't really concentrate on the one task you're working on; instead you look into different things simultaneously. At the end of the day, you feel that you did a lot, but what the results and outcomes are is not clear.
So, being productive is hard, and being in the flow state is hard as well. I put here a link to an article about the flow state and how the feedback loop impacts it. With frequent context switches, we can't reach the flow state; we can't focus on one thing, finish the task completely, and then move to another. This impacts our productivity during the day and throughout our working hours in general. Based on this research, it takes roughly 15 to 30 minutes of uninterrupted work to reach the flow state. So if, for example, our test results come back quickly, we can reach the flow state, complete the task, and then switch to another one: be productive, and say at the end of the work day that we completed this number of tasks, everything went well, these changes are already in production. I'm doing great, I'm happy about what I'm doing, I love writing code, I love developing. The alternative is waiting for test results and switching context from one task to another: while I'm waiting for test results on one task, I develop another task and then wait for those test results too. That is not good, it does not make us happy, and it does not let us get the joy of development.
Integration testing challenges (6:43)
So, what can positively impact the flow state and the feedback loop? The most complicated part is integration tests; if they are done properly, we can iterate faster and reach this flow state. How is integration testing usually done across a wide range of organizations? You can install the dependent services on your machine to test your application locally; it can be just a local Postgres installation. You can use in-memory solutions like the H2 database, but in this case your tests may pass locally, yet when you move on to a QA or production environment with real database instances, some things may not work as expected, because in-memory solutions are not the real thing: they are not real databases, and some things behave differently with these technologies.
Another way is mocks: you can use WireMock to mock things, but again, this is fake, and there can be challenges moving forward to production. Another popular way of doing integration tests is having shared instances of, let's say, databases. The challenge is: you run your tests, you see some tests failed, and it's often not obvious whether they failed because of a bug in the code or because of flakiness, because some test data is not there in the database; maybe another team was testing the same things against the same database and simply deleted your data. So you often need to invest time into debugging these flaky tests and understanding why they failed, and in the worst case they failed not because of bugs but because of unstable test data, which again affects the feedback loop cycle. The final approach, which is also commonly used, is testing in production; some companies do it via canary releases, but this is not suitable for every organization, and it's obviously a bit risky.
So how can Docker help with this? With the evolution of containerization, Docker brings the ability to run services locally, on CI, on my machine, on my teammate's machine, and everything will work in the same fashion. You can start the same database, let's say Postgres, with Docker, use it in your application, run tests locally, focus on the inner loop, and get feedback fast. But in most cases you use more than one service: you need an environment for testing and an environment for development, and you can use Docker Compose for that. You can start your application together with multiple services, the same Postgres example, so you have this whole environment set up locally with your application; you can run tests and do some manual verification. All is good, but this is a pretty static approach.
The next iteration is using the Docker API. What if you could start Postgres not from a YAML file, not from a docker run command, but from your code base, saying "Postgres, start", and have one instance of Postgres in one test and another instance in another test? That lets you verify destructive cases, where a test can delete the database without affecting other test scenarios.
Why Docker API? (11:24)
So using the Docker API can be very helpful for spinning up local environments. Why is that? You don't need to leave your IDE; you can stay inside the IDE where you develop your application code and define environments programmatically, using the same language you use for application development. This way your environments are self-contained; you can focus purely on development without switching context from one application to another, and you don't need to install any other services. All you need is your IDE, your code base, and Docker to run the containers you describe using the programmatic API.
Introducing Testcontainers (12:14)
So this is how Testcontainers was born. Testcontainers is an open source library based on the Docker API that provides ephemeral, lightweight instances of real services – databases, message brokers, cloud services, anything you can run in a Docker container. Here is an example with Redis: you programmatically define that you want to start Redis, which image you want to use and which port to expose, and you have Redis available from your code base.
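As a rough sketch of what that looks like with the Java library (the image tag and class names are illustrative, not taken from the talk's slides):

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

public class RedisExample {
    public static void main(String[] args) {
        // try-with-resources stops and removes the container on exit
        try (GenericContainer<?> redis =
                 new GenericContainer<>(DockerImageName.parse("redis:7-alpine"))
                     .withExposedPorts(6379)) {
            redis.start();
            // host and mapped port are resolved dynamically at runtime
            System.out.printf("redis://%s:%d%n",
                redis.getHost(), redis.getMappedPort(6379));
        }
    }
}
```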
How does it work? Take the same Redis example: you define it in your IDE, and under the hood it essentially executes a docker run with the Redis image, the exposed port, and whatever else you define in your code declaration for starting Redis. Everything is translated into Docker commands and executed by the Docker daemon on the local host. The daemon pulls the image, starts the container, and provides the connection details back to your IDE. So you can start containers programmatically, and in the same way you can manage your containers programmatically: you can get the host, the port, the environment variables. You don't need to define this information statically or remember the host and port where your containers are running; you can get this information dynamically from the container started with Testcontainers.
This means you get automatic management of your test environments. Before the tests start, your containers are started, so your whole environment for integration testing is ready. Then the tests run, using the services you defined with Testcontainers, and after the tests are done, the containers are destroyed automatically by the Testcontainers resource reaper, which helps maintain and manage the test environment automatically. Compare this to Docker Compose: with Docker Compose you manually run docker compose up and docker compose down, while with Testcontainers this is all done for you by the Ryuk container; you just define which containers you want to start.
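A minimal sketch of that lifecycle using the JUnit 5 extension, assuming the junit-jupiter and postgresql modules are on the test classpath (class and method names are illustrative):

```java
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
class PostgresLifecycleTest {

    @Container // started before the test, reaped by Ryuk afterwards
    private final PostgreSQLContainer<?> postgres =
        new PostgreSQLContainer<>("postgres:16");

    @Test
    void providesConnectionDetailsDynamically() {
        // JDBC URL, username and password come from the running container
        System.out.println(postgres.getJdbcUrl());
    }
}
```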
And like this, you can focus on the inner loop. Many things that are usually tested in the outer loop, at the integration testing phase, can move into the inner loop: roughly 80% of the integration tests usually done in the outer loop can be moved to the inner loop, giving you feedback about your code changes in seconds or a couple of minutes, depending on how heavy your test suite is, rather than pushing and waiting for all the tests to run, which in some cases can take hours.
So, a bit of statistics about Testcontainers usage. We see more than 13 million pulls of the Testcontainers images from Docker Hub right now, more than 2,000 organizations using it, and support for more than 10 languages (we will cover language support later). You can also find a lot of public case studies and blog posts from Spotify, Netflix, and Elastic about their use of Testcontainers for integration testing: how they sped up the development loop and sped up iteration with local integration tests that feel like unit tests.
This is a high-level overview of the languages we support. The best-supported languages right now are Java, .NET, Node.js, Python, and Go. Other languages are mostly community supported, but core maintainers from Docker also look after these language implementations and help with maintenance and with creating new modules. I did not include C++ here, but we also have an implementation of the Testcontainers library for C++.
This is an example of how you can run Redis in Java using a generic container, and a similar example for .NET, also just two lines of code. Here, though, note that I'm highlighting that I'm using the Redis builder, not the generic container. What does that mean? It means that we have modules: right now more than 100 modules across the different languages. A module is a high-level abstraction around a specific technology, for example Chroma or Dapr. You don't need to dig into how to start that service with Testcontainers; the module defines the basic setup required to start that particular service. All you need to do is include the module as a dependency in your project, provide the image name, and say start, and it will be started. If you need to modify how the service starts, some environment variables for instance, you can do that through the same module.
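For illustration, a sketch of module usage in Java with the PostgreSQL module; the database name and credentials are made-up overrides, not from the demo:

```java
import org.testcontainers.containers.PostgreSQLContainer;

public class PostgresModuleExample {
    public static void main(String[] args) {
        try (PostgreSQLContainer<?> postgres =
                 new PostgreSQLContainer<>("postgres:16")
                     .withDatabaseName("catalog")   // optional overrides
                     .withUsername("test")
                     .withPassword("test")) {
            postgres.start();
            // the module already knows Postgres: default port, wait strategy, JDBC URL
            System.out.println(postgres.getJdbcUrl());
        }
    }
}
```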
Demo: Starting (19:20)
At this point, let's go to the demo. We can start with a simple example: a generic container running Redis. Let's first look at the pom.xml file. To highlight it one more time: Testcontainers is an open source library, and all you need to get started is to add the dependency (I'm using Maven, so in my case it goes into pom.xml). You can see testcontainers, junit-jupiter, and postgresql; the last one is a module, so you can combine modules with the technologies you want to use. You take the generic Testcontainers pieces and then add modules such as postgresql, kafka, localstack, and selenium; all of these modules will be used in the demo project. All right, let's go back to the main class. Here I define the Redis version and the port I want to expose (I need to expose this port in order to connect to Redis), then I say start, and I can check the connection URL. I can get the host from the container; I don't need to hard-code it, it is provided to me dynamically. Best of all, I can get the port that was mapped to the exposed Redis port, meaning I don't need to hard-code the port either. If I want to run tests in parallel and have not one instance of Redis but multiple, Testcontainers starts each container on a random port, meaning I don't need to worry about port conflicts or remember which ports are in use on my system. I just tell Testcontainers to start my containers, and everything is managed by the library.
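For reference, a sketch of what those pom.xml entries can look like; the version shown is illustrative, and the remaining modules follow the same pattern:

```xml
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>junit-jupiter</artifactId>
    <version>1.19.8</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>postgresql</artifactId>
    <version>1.19.8</version>
    <scope>test</scope>
</dependency>
<!-- kafka, localstack, and selenium modules are added the same way -->
```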
So let's run it. You can see that we're using Docker Desktop to start the container; I already have the image pre-pulled, so the container is created and started pretty fast, and there is the connection URL I can use to connect to this Redis instance in my integration test. Let's also check one thing: you saw the port was 50103. Let's run it again. Okay, and there is another port; every time you start a new container, it starts on a random port that is available on the host machine. Let me show a slightly more advanced usage; I have the code prepared here to make it a bit faster. You can also use networks with Testcontainers. In fact, you can use Testcontainers the same way you use Docker, or the same way you describe things in Docker Compose, but programmatically: everything available in the Docker API can be used with Testcontainers. Networking is one of those things: just as you can start containers with Docker within one network, you can do the same with Testcontainers. Here you define the network. I have an example of an external ZooKeeper for the Kafka container; you could also connect another ZooKeeper instance of your own to this Kafka, but for the network example let's start both of them. You provide the ZooKeeper version and say that you want to start it within this network, with the network alias and the ZooKeeper client port. Then for Kafka you likewise define the image you want to start, the same network, and the external ZooKeeper (the one running on port 2181), and you start it. To verify that everything started properly, you can call getBootstrapServers(); if Kafka started properly, it returns the connection string. Let's run this example; it will actually start Redis one more time too. We can see in Docker Desktop that the containers are started and Kafka is connected to ZooKeeper, and afterwards the containers are automatically destroyed. I'm not doing anything to manage the containers; this is done for me by Testcontainers.
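A hedged sketch of that network setup in Java; the image tags and network alias are illustrative, not copied from the demo code:

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.containers.Network;
import org.testcontainers.utility.DockerImageName;

public class KafkaNetworkExample {
    public static void main(String[] args) {
        try (Network network = Network.newNetwork();
             GenericContainer<?> zookeeper =
                 new GenericContainer<>(DockerImageName.parse("confluentinc/cp-zookeeper:7.5.0"))
                     .withNetwork(network)
                     .withNetworkAliases("zookeeper")
                     .withEnv("ZOOKEEPER_CLIENT_PORT", "2181");
             KafkaContainer kafka =
                 new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.5.0"))
                     .withNetwork(network)
                     .withExternalZookeeper("zookeeper:2181")) {
            zookeeper.start();
            kafka.start();
            // a bootstrap-servers URL confirms the setup came up correctly
            System.out.println(kafka.getBootstrapServers());
        }
    }
}
```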
And here is the Kafka bootstrap servers URL, meaning that this setup started correctly and the containers were started within one network. If you do the same thing in Docker Compose, it will be harder to maintain. This is just a few lines of configuration, but if you're working on a more comprehensive example, there will be many more lines. This is just a simple demo.
Demo: Integration (25:49)
Let's move forward. These containers just started by themselves; they are not interacting with my application. But what if I want my application to talk to some of these containers? What if I want my application to start together with these containers, so the environment is not a standalone thing but is combined with my development cycle? Here is an example of how you can do it; let me actually first go to the diagram. For the demo application, I created a Spring Boot project focused on developing a catalog service; this is my application under development. It stores product data in a PostgreSQL database, stores product images in AWS S3, and uses Kafka to listen to image-upload events. It also depends on an inventory service: if I want to display information about a product and understand whether I have something in inventory or not, my application talks to the inventory service. In order to run and test it locally, I need all of these services running somewhere. I could define them in a Docker Compose file, I could have a staging environment with shared instances, or I can use Testcontainers to spin up all of the services on demand, as real instances of production-like services running as a local environment.
So how can I do this? This is my container config class, which defines the services I want to start with Testcontainers. An important thing here is the @ServiceConnection annotation above the service definition bean: it tells your Spring Boot application to connect to the service when the application is started in testing mode. This annotation is available starting from Spring Boot 3.1, and it is a special integration of Testcontainers with Spring Boot developed by the Spring Boot team. They recommend running Testcontainers for your integration tests and provide a lot of guidance on how to get started with this way of testing Spring Boot applications. The annotation is not available for every container yet; it covers the most popular ones, and I think it's available for LocalStack as well, but I left the old-fashioned definition here to show the different ways you can start Testcontainers-based containers and connect them to your application. For Postgres I'm using postgres:16, not the latest one, but let's say this is my production version. I also use Kafka; this is how you can define Kafka with Testcontainers. Again, these are Testcontainers modules: you can run everything with the generic container, or you can use a module. If you go to the module implementation, you will see that it extends GenericContainer, meaning that if you're using some in-house Kafka that differs from the Confluent one, you can define your own module: extend GenericContainer, implement the required methods, and use your in-house Kafka module for your integration testing. You can pack it as a library and spread it across your development teams, so everybody can add it as a dependency in pom.xml, build.gradle, or whichever build tool you're using.
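A minimal sketch of such a container config class, assuming Spring Boot 3.1+ with the spring-boot-testcontainers integration; the class name and image tags are illustrative:

```java
import org.springframework.boot.test.context.TestConfiguration;
import org.springframework.boot.testcontainers.service.connection.ServiceConnection;
import org.springframework.context.annotation.Bean;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.utility.DockerImageName;

@TestConfiguration(proxyBeanMethods = false)
class ContainerConfig {

    @Bean
    @ServiceConnection // Spring Boot wires the DataSource to this container
    PostgreSQLContainer<?> postgres() {
        return new PostgreSQLContainer<>("postgres:16");
    }

    @Bean
    @ServiceConnection // same idea for the Kafka connection details
    KafkaContainer kafka() {
        return new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.5.0"));
    }
}
```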
If you scroll further into the implementation, you will see default parameters – the Kafka ports, ZooKeeper, how to start ZooKeeper, how to connect to Kafka – so you don't need to worry about any of it. You just instantiate the Kafka module with the image of the Kafka version you want to use, and everything else is already implemented. Moving on to LocalStack: here I'm using the dynamic property registry to add the LocalStack variables to my application properties. Basically, I need the access key, secret key, region, and endpoint; these are the basic things you add to the application properties to connect to LocalStack. If you want more configuration, you can do that too, so let's check what we have here. You can get the container ID, bounds, name, environment, hosts, and so on. There is a very rich programmatic API, not only for LocalStack but for any container, that you can use to set things on, or get things from, the container.
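A sketch of that dynamic-property wiring for LocalStack; the property keys are assumptions that depend on how the application reads its AWS configuration, and @DynamicPropertySource belongs in a test class:

```java
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.utility.DockerImageName;

class LocalStackProperties {

    static final LocalStackContainer localstack =
        new LocalStackContainer(DockerImageName.parse("localstack/localstack:3.0"))
            .withServices(LocalStackContainer.Service.S3);

    @DynamicPropertySource
    static void awsProperties(DynamicPropertyRegistry registry) {
        localstack.start();
        // connection details are read from the running container
        registry.add("aws.access-key", localstack::getAccessKey);
        registry.add("aws.secret-key", localstack::getSecretKey);
        registry.add("aws.region", localstack::getRegion);
        registry.add("aws.endpoint", () -> localstack.getEndpoint().toString());
    }
}
```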
Okay, so in this implementation I start the LocalStack container and then create an S3 bucket. I'm also using a Microcks container, which serves as a kind of advanced mock for my inventory service. It uses an OpenAPI specification: if your teams use OpenAPI or GraphQL, you can ask for this file from the team responsible for the inventory service implementation (in my case), update the test data in the file, and run a service based on the OpenAPI spec using the Microcks container. This means you can do contract testing as well: you can have a small piece of your infrastructure running locally that includes the services other teams are developing, and check how your particular service talks not only to databases but to the microservices other teams are developing, to make sure your microservice works properly with them.
Demo: local development (33:50)
So what if I want to start the application locally together with the services? I can define a launcher class under the test classpath and say that I want to start my application from main, using the container config, and just start it. While it's starting, we can open Docker Desktop to see the status of the containers. Here you can see a Ryuk container, which is the default container used by every Testcontainers library implementation, not only Java but every language. This is the resource reaper: it monitors containers and removes them after the test cycle is complete. Here you can see Kafka is ready, and LocalStack; this one is used by Microcks; and Postgres; and it looks like my application has started. We can go to localhost:8080. Yes, this is my application. I can upload an image here. Okay, it's uploaded, and you see it changed. If I refresh the page, it's still here, meaning the image is in the database. We can also, for example, go into the Postgres database and debug. Let's connect to the database. I'm inside my Postgres container, connected to the database as the test user I described; that's the default user when you start Postgres containers with Testcontainers. I can select images from my product table, and here you see the image is uploaded, meaning every integration worked properly: Kafka listened to the events, the image was uploaded to S3, all is good. We can check the Kafka logs as well; there will be information about the event.
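For reference, a sketch of such a launcher class, assuming Spring Boot 3.1+; CatalogApplication and ContainerConfig are illustrative names:

```java
import org.springframework.boot.SpringApplication;

// Lives in the test sources; starts the app together with its containers.
public class TestCatalogApplication {
    public static void main(String[] args) {
        SpringApplication.from(CatalogApplication::main) // the real main class
            .with(ContainerConfig.class)                 // containers defined earlier
            .run(args);
    }
}
```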
So in this workflow, you can just run your application, test things manually, and when you're done, stop it. Let's take a look: yes, the containers are destroyed. You don't need to manage them manually; everything is managed for you. The same container configuration can be reused for integration testing: you define a base integration test that imports the Testcontainers config, then your test implementations extend this base class, and again the containers are started for the test cycle and automatically destroyed. For the test data, I'm using the Spring Boot way of adding it: I have a test-data SQL file in my resources and use Spring Boot's @Sql annotation to put data into my database. Thanks to the very tight integration between Spring Boot and Testcontainers, this is easy to do: Spring Boot already knows I'm using Testcontainers and puts this test data into the database started by Testcontainers. In this test I do a couple of simple verifications. I can create a product successfully. I can upload a product image successfully: I take the image, call the API to upload it, then verify that it's present in the product list and that it was really uploaded to S3. I can get a product by code, again by calling the API, and check the OpenAPI confirmation, verifying that my service works properly with the inventory service started from the OpenAPI specification. And there are a couple of negative tests. Let's actually run them. You'll see the tests are initiated, and again the containers appear in Docker Desktop: they start before the test cycle and are automatically destroyed afterwards. A frequent question is how to make sure containers are ready before the tests, so that tests don't run against services that haven't fully started yet. Testcontainers has wait strategies, implemented by default in the modules, and with the generic container you can implement your own: you can wait for log messages and so on. So again, you see the tests are done and the containers destroyed; that's it. I just ran a couple of tests that call the API endpoints of my application against this pretty holistic infrastructure, and I don't need to manage any resources.
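A minimal sketch of that base-class pattern; class names and the SQL file path are illustrative:

```java
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.context.annotation.Import;
import org.springframework.test.context.jdbc.Sql;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Import(ContainerConfig.class) // reuse the same container definitions
abstract class BaseIntegrationTest {
}

@Sql("/test-data.sql") // seed data into the Testcontainers-managed database
class ProductApiTest extends BaseIntegrationTest {
    // tests call the application's API against the containerized services
}
```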
To come back to the wait strategies: there are a lot of them. You can wait for logs, and you can wait for health checks, the same way you define health checks for your containers in Docker Compose files. If you are using modules (and we recommend modules), they already implement wait strategies, so you don't need to think about the possible ways of making sure your containers have started and the tests are ready to be executed against this predefined environment. Another thing I want to show today is this little drop-down: the Testcontainers Desktop application. Here you can first define the runtime: for example, I was just running containers locally with Docker Desktop, or I can run them with Testcontainers Cloud. You can also freeze container shutdown; let's actually select this feature. And another convenient thing: you can define fixed ports for services. As you have seen, Testcontainers starts containers on random ports. If you want a static connection to your database, say, and you don't want to dig through the logs, Docker Desktop, or Testcontainers Cloud to find which port is used and then update the database connection string just to look at what's going on inside the database, you can define a static connection, and your Postgres will run on a static port, meaning you can connect to it the same way every time. So let's run the tests again. This time they will run with Testcontainers Cloud, and we enabled freezing of container shutdown, so the containers will not be destroyed until we unfreeze them. This is a pretty useful feature for debugging: if you see that some of your tests are failing and you want to understand why, you can freeze the containers, then navigate to the container and check the logs to see what's going on. Here you can also see the status of the containers: which containers are started; you can open a terminal or fetch the logs, for example your Kafka container logs. And right now this Kafka container is running in the cloud, so this is a pretty easy way to connect to a container when it's running with Testcontainers Cloud.
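Coming back to the wait strategies mentioned at the start of this section, here is a sketch of a custom wait strategy on a generic container; the image and the log pattern are illustrative:

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.Wait;

public class WaitStrategyExample {
    public static void main(String[] args) {
        try (GenericContainer<?> service =
                 new GenericContainer<>("nginx:1.25") // illustrative image
                     .withExposedPorts(80)
                     // block start() until the container logs a readiness marker
                     .waitingFor(Wait.forLogMessage(".*start worker processes.*\\n", 1))) {
            service.start();
            System.out.println("ready on port " + service.getMappedPort(80));
        }
        // other built-ins include Wait.forHttp("/health") and Wait.forListeningPort()
    }
}
```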
Demo: dashboards (44:03)
Let's go to the dashboard. Another benefit of Testcontainers Cloud is the dashboard, where you can see both local and cloud sessions. This was my session run with Docker Desktop; it's highlighted here. It shows the containers, how long each container took to start, and how long it lived. You can see a small test session I was experimenting with before this one, and you can see the live session with the tests we're running right now. We can connect to the remote worker (this is a cloud VM), run docker ps, and see that all the containers are started. And all the tests are actually done, but if I run docker ps one more time, I see that all the containers are still there. Let's unfreeze them. Okay, the containers are terminated, and if I run docker ps, no containers are left. So the tests are done. And this is another benefit of the cloud solution: if you lack local resources, you can run containers in the cloud. The application runs locally, the tests run locally, but the containers are started in the cloud. This is also pretty convenient if you want to integrate your Testcontainers-based testing into CI pipeline workflows and run tests in CI. Here is an example of how you can run tests with Testcontainers Cloud in CI: it's just one step, where you download the Testcontainers Cloud agent and provide the secret token for the service account; everything else stays the same. This approach is especially convenient for CI tools that run on Kubernetes. For example, if you run Jenkins on Kubernetes and each worker starts as a container, you would need to install Docker inside that container in order to run Testcontainers tests, because they are based on the Docker API and require a Docker context to be available. Rather than doing all of these Docker-in-Docker manipulations, which may not be approved by security teams and can lead to failed security audits, you can use Testcontainers Cloud, with just one line for every CI provider, and get a remote Docker context for your Testcontainers-based tests. They will run in the cloud, everything will be there, and it will be destroyed after the test cycle is complete. It's on-demand resources, managed for the project, which helps scale Testcontainers usage across the organization and across multiple products.
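A sketch of what that single step can look like in GitHub Actions, assuming the official setup action; the secret name is illustrative:

```yaml
- name: Set up Testcontainers Cloud agent
  uses: atomicjar/testcontainers-cloud-setup-action@v1
  with:
    token: ${{ secrets.TC_CLOUD_TOKEN }}
```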
So let me show you the last example piece: Selenium tests. Let's add a comment, commit these changes to trigger CI, and see how the tests run there, including the Selenium tests, while I describe a bit how this configuration is done. If you want to run end-to-end tests against your application with Testcontainers, in this Spring Boot example you don't even need to package it into an image. You can run your application, point your Selenium tests at the local host, and run a couple of scenarios. Here I'm using Firefox to start the Selenium container. You can record your tests: I'm recording all of them, but you can record only the failing ones, for example. You provide the target folder where the recordings are saved (it can be the project folder or any other folder) and the format of the recording. What's important here is to provide the driver with the URL to connect to; this is the Testcontainers URL that represents the local host. And that's pretty much it: you start the Firefox browser container provisioned by Testcontainers, create your RemoteWebDriver with the Selenium address from this Firefox container, and connect to your application running on the local host. This way you can easily run tests either locally or in CI, different types of tests, from unit tests all the way to heavy end-to-end Selenium tests. Let me commit these changes. You can see that my session for this cloud worker has ended. This is also one of the common questions: how long does the session live? If you are not running containers, the session ends after a 30-minute idle period, so resources are managed pretty efficiently. If you are not using the cloud resources, they are simply destroyed.
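A hedged sketch of that Selenium setup in Java; the application port, recording folder, and format are illustrative:

```java
import java.io.File;
import org.openqa.selenium.firefox.FirefoxOptions;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testcontainers.Testcontainers;
import org.testcontainers.containers.BrowserWebDriverContainer;
import org.testcontainers.containers.BrowserWebDriverContainer.VncRecordingMode;
import org.testcontainers.containers.VncRecordingContainer.VncRecordingFormat;

public class SeleniumExample {
    public static void main(String[] args) {
        // make the app on the host's port 8080 reachable from the browser container
        Testcontainers.exposeHostPorts(8080);

        try (BrowserWebDriverContainer<?> firefox =
                 new BrowserWebDriverContainer<>()
                     .withCapabilities(new FirefoxOptions())
                     .withRecordingMode(VncRecordingMode.RECORD_ALL, // or RECORD_FAILING
                                        new File("./target"),
                                        VncRecordingFormat.MP4)) {
            firefox.start();
            RemoteWebDriver driver =
                new RemoteWebDriver(firefox.getSeleniumAddress(), new FirefoxOptions());
            // host.testcontainers.internal resolves to the local host inside the container
            driver.get("http://host.testcontainers.internal:8080");
            driver.quit();
        }
    }
}
```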
Alright, so you can see the CI job that was triggered, and there will be a couple of containers here soon. Let me refresh it; it's still building, so there should be a session here shortly. Here it is. From here you can see which containers are used and which workflow triggered the session, so you can follow its status. You can also see which project triggered it; if you use Testcontainers Cloud with multiple projects, you can tag them and get a clear view of which project is using which container and image versions, to check whether there is consistency across the image versions used for testing, which matters in some cases. All right, here we can see the containers that have started while the tests are still running. And the Selenium tests just started: you can see the Firefox image being pulled and the container starting, and it shows up in the dashboard a couple of seconds later. Once the tests are complete, we will see the status here, and the worker will already be disconnected. In my configuration I used the GitHub Actions integration, but if you're using another CI provider, we have examples for CircleCI, Tekton, and Jenkins. It's just one step you need to add: create the service account, provision credentials for it, and that's pretty much how you can start running Testcontainers tests in your CI. Okay, it looks like the tests are done. Yes, all tests passed, including Selenium. And in the session view you can see that the containers were destroyed; they're not running anymore, so no resources are wasted.
Summary (55:17)
All right, let's wrap it up. The question people who start using Testcontainers still ask most often is "What's the difference from Docker Compose?" or "We're already using Docker Compose; why should we consider Testcontainers as an alternative?" Docker Compose is great, and it provides the most value for local development, when you define static services that you want to start with your application to check things while you develop. Testcontainers, however, is designed for testing. With Docker Compose, it's hard to run tests in parallel and to maintain a stable infrastructure for tests running in parallel. Testcontainers runs containers on random ports, so you don't need to worry about port conflicts anymore, and you can run as many containers as you want. You stay within your IDE; you don't need to learn the YAML or TOML format for describing the containers you want to start, as with Docker Compose. With Testcontainers, you describe your environment in a programming language you know, and you just use the Testcontainers API to run and manage this infrastructure for you.
So to summarize: Testcontainers is great for testing, especially parallel testing and dynamic environment provisioning for integration testing purposes. Docker Compose is great for local development, or when you want to move a static environment from one machine to another (local to staging, say), but again, that environment is static, and to maintain it you need to start and stop it manually or add extra scripts that do it for you, whereas Testcontainers already manages everything automatically. Testcontainers Cloud gives you a consistent environment for your Testcontainers tests: whether you run your tests on a Mac, your colleague is on Windows, or you run Jenkins on Kubernetes in a pipeline, you don't need to chase test flakiness caused by environment differences. You have a single runtime for your containers: no matter what environment you run your tests on, the containers use the same uniform environment, so there are no issues running those tests. Again, for the local use case, you can run Testcontainers tests with Docker Desktop; the main requirement is having a Docker context available for the tests, because they use the Docker API, so they need Docker. There is also an example of how you can run tests with GitLab CI, very similar to GitHub Actions and Jenkins: just one step added to your workflow. And there are a couple of helpful links for getting started with end-to-end testing with Playwright for Node.js or Java, if you want to do end-to-end testing without Selenium. For integration testing with other modules, you can visit the Testcontainers website and check the modules page: modules are sorted by language and type, and they are a pretty straightforward way of getting started. So, this is pretty much it for today. Thank you very much. If you have any questions, please feel free to reach out. Thank you.
Learn more
- New to Docker? Get started.
- Deep dive into Docker products with free learning paths.
- Subscribe to the Docker Newsletter.
- Get the latest release of Docker Desktop.
- Have questions? The Docker community is here to help.