This guide will provide a list of system requirements and links to resources to get you set up and ready for remote development on AWS using Visual Studio Code Dev Containers.
While the remote Ubuntu configuration steps are AWS-specific, the process will be almost identical for other cloud providers.
This guide assumes you’re familiar with the Terminal, SSH, Docker, Linux (Ubuntu more specifically), and AWS, so if you’re not experienced using these technologies, be prepared to spend more time getting things set up.
If you’re entirely new to Dev Containers, I’d recommend watching my webinar on Visual Studio Dev Containers for Node.js apps on AWS to give you a solid overview of what’s involved.
The Visual Studio Code Remote Development in Containers guide and tutorial are also great resources to help you get started.
Let’s get to it!
If you’re looking for an actual Node.js application using Dev Containers, check out the Mandalorion Gifs Node.js repository, including an example devcontainer.json file and launch configuration for step-debugging.
Alternatively, you can use one of Microsoft’s sample repositories with examples in Go, Python, Rust, .NET Core, and other popular languages.
Step 1. AWS Credentials
Your AWS credentials must have permissions required for managing a Lightsail instance and SSH keys, or if using EC2, you’ll need permissions for SSH Key Pairs, Security Groups, and EC2 instances.
If using EC2, I’d recommend checking if your organization has a base AMI you should be using.
Step 2. Install Visual Studio Code Remote Container Extensions
The following extensions are required for remote container-based development with Visual Studio Code:
Step 3. Install Docker Locally
A local installation of Docker is required for Visual Studio Code to manage containers on the AWS instance from your local machine.
Step 4. Install OpenSSH Compatible SSH Client
You will need a Visual Studio Code supported OpenSSH compatible SSH client installed locally to create an SSH tunnel that Visual Studio Code will use to control Docker on the remote host.
For Windows 10 users, if you’re currently using PuTTY, I strongly encourage you to use OpenSSH instead, as it will save you from frustrating troubleshooting experiences now and in the future.
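Once your client is installed, a host alias in your SSH config saves retyping the connection details and works with the SSH-based Docker context you’ll create later. A minimal sketch, assuming placeholder values throughout (the `aws-devcontainer` alias, IP address, and key path are all hypothetical; substitute your own):

```shell
# Append a host alias for the instance to your SSH config
# (HostName, User, and IdentityFile below are placeholders)
cat >> ~/.ssh/config <<'EOF'
Host aws-devcontainer
    HostName 203.0.113.10
    User ubuntu
    IdentityFile ~/.ssh/id_ed25519
EOF
```

With this in place, `ssh aws-devcontainer` is all you need to connect.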
Step 5. Create SSH Key
If you haven’t already, you’ll need to generate an SSH key and add it to the list of known identities for your SSH client.
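If you need to generate one, a typical sequence looks like this (the key path and email comment are placeholders; replace them with your own):

```shell
# Generate an ed25519 key pair (you'll be prompted for an optional passphrase)
ssh-keygen -t ed25519 -C "you@example.com" -f ~/.ssh/id_ed25519

# Start the agent and register the key so your SSH client offers it automatically
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
```

The public half (`~/.ssh/id_ed25519.pub`) is the file you’ll upload to AWS in Step 6.1.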
Step 6. Create Ubuntu 20.04 AWS instance
Configuring your AWS Ubuntu instance consists of two steps: Uploading or importing your SSH key, then creating the instance.
Step 6.1: Upload Local SSH Public Key to AWS
AWS Lightsail is recommended as it’s the easiest option for creating an Ubuntu 20.04 instance running Docker, but an EC2 instance works just as well if that’s your preference.
You can optionally use an SSH key created by AWS, but I recommend using your own if possible.
Step 6.2: Create Ubuntu Instance
The most crucial step is to select your uploaded public key when choosing the SSH key for your instance. If you have an issue connecting via SSH later, it’s almost certainly because you didn’t upload or select your SSH key when creating the instance.
Next, you can use the following code as the launch script that will be run as part of initializing the instance, but you can always run these commands later once logged in via SSH.
export DEBIAN_FRONTEND=noninteractive
export HOME=/root

# System dependencies
apt-get update && apt-get install -y make nano

# Install Docker
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
apt-get update && apt-get install -y docker-ce docker-ce-cli containerd.io

# Install Docker Compose
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

# Enable standard user (ubuntu) to manage containers (required for Remote Containers)
groupadd docker
usermod -aG docker ubuntu
newgrp docker
Step 7. SSH to the Ubuntu Instance
Once the instance has been initialized, get the IP address from the AWS console, then log in using the following SSH command:
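A minimal login command, assuming the key path from Step 5 and that `$INSTANCE_IP_ADDRESS` holds the public IP you copied from the console:

```shell
# Connect to the instance as the default Ubuntu user
ssh -i ~/.ssh/id_ed25519 ubuntu@$INSTANCE_IP_ADDRESS
```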
If you’re having trouble connecting, check that the public key you imported into AWS is attached to your instance, and if not, the easiest path forward is to delete, then recreate the instance with your SSH key selected.
If you’re still having connection issues, re-run the ssh command with the `-v` flag to enable the SSH client’s debug mode.
If the debug output still isn’t helping, check out DigitalOcean’s SSH troubleshooting guide, which will hopefully offer additional solutions to try.
Once logged in, you’ll need to install and configure Docker if you haven’t already.
Start by switching to the root user (`sudo su`), then copy and paste the commands from our Ubuntu installation script.
Step 8. Check Docker Works on AWS
Once the installation commands have been run, log out, then SSH back in and try running a container:
docker run --rm hello-world
If this doesn’t work, try switching to the root user (`sudo su`) and running the container again.
If you can run the container as the root user, it’s likely a permissions issue with the ubuntu user. In this case, I recommend reading Docker’s documentation on managing Docker as a non-root user, which should fix it.
If all else fails, reboot the instance, SSH back in, and try running the container as the ubuntu user once more; if you’re still having trouble, check out Visual Studio Code’s Troubleshooting Guide.
Step 9. Create a Docker Context Locally
It’s time to start putting the pieces together!
Now create a new Docker context locally that will communicate with Docker running remotely via an SSH tunnel.
Open a terminal locally, then run the following command to create a devcontainers context:
docker context create devcontainers --docker host=ssh://ubuntu@$INSTANCE_IP_ADDRESS
Next, you’ll switch from the default context (Docker locally) to devcontainers by running:
docker context use devcontainers
Then in the same terminal window, check that the Docker CLI can communicate with Docker on the remote machine by running:
docker run --rm hello-world
If you can’t run the container, check out Visual Studio Code’s SSH Tunneling for Docker guide to help you troubleshoot connection issues.
If you want to switch back to using the default context, run:
docker context use default
Step 10. Try Running a Dev Container
Phew! If you’ve made it this far, you’re now ready to try running a Dev Container remotely!
Open Visual Studio Code, then run the command:
Remote-Containers: Try a Development Container Sample...
Then select one of the available options.
The first time you run a Dev Container, it can take anywhere from 5 to 10 minutes, as Visual Studio Code first needs to set up the extension host and Visual Studio Code server before building the Dev Container environment.
Presuming all goes well, you’ll eventually see the files in the container in the Explorer panel, and clicking on Terminal will open a command-line prompt inside the container.
You’re now running a Dev Container remotely in the cloud with the experience of coding locally in Visual Studio Code!
Now Bring the Dev Container Workflow to Your Application
This guide has covered the required steps to set up and run Dev Containers remotely on AWS, although the same process applies regardless of which cloud provider you’re using.
Below is a list of additional resources you’ll need when rolling this out at a team level, such as configuring Git credentials so you can push commits from your Dev Container environment.
If you’re new to Dev Containers, I recommend starting with the following videos from Microsoft, then progressing to the documentation; understanding the big picture helps before diving into the details.