How to Install Docker on Rocky Linux 10: A Comprehensive Guide for Enterprise Containerization

Welcome to Tech Today, your trusted source for cutting-edge technology insights and actionable guides. In this in-depth article, we will walk you through the process of installing Docker on Rocky Linux 10, a robust and community-driven distribution that serves as a formidable Enterprise Linux foundation. For organizations leveraging containerization to streamline development, deployment, and management of applications, understanding the intricacies of setting up Docker on a stable and performant operating system like Rocky Linux 10 is paramount. This guide is meticulously crafted to provide you with a thorough, step-by-step approach, ensuring a successful and efficient installation for your containerized workloads. We aim to equip you with the knowledge necessary to harness the full power of Docker on this enterprise-grade platform.

Understanding Docker and Its Importance on Enterprise Linux

Before we dive into the installation process, it’s crucial to appreciate what Docker offers and why its integration with Rocky Linux 10 is a strategic advantage for businesses. Docker is an open-source platform that automates the deployment, scaling, and management of applications inside containers. These containers encapsulate an application and its dependencies, ensuring that it runs consistently across different computing environments, from a developer’s laptop to a production server.

The benefits of using Docker in an enterprise setting are manifold: applications behave identically in development, testing, and production; containers start in seconds and consume far fewer resources than full virtual machines; services can be scaled horizontally with minimal effort; and isolated, versioned dependencies make upgrades and rollbacks far less risky.

Rocky Linux 10, a free, community-driven enterprise operating system created as a successor to CentOS, offers a stable, secure, and predictable environment. Its long support lifecycle and its compatibility with enterprise-grade software make it an excellent choice for running demanding containerized applications. By combining Docker with Rocky Linux 10, organizations can build a powerful and flexible infrastructure for their modern application needs.

Prerequisites for Docker Installation on Rocky Linux 10

To ensure a smooth and successful installation of Docker on your Rocky Linux 10 system, we need to confirm a few prerequisites are met. These are standard for most Linux installations and are essential for the Docker daemon and its associated components to function correctly.

System Requirements

Docker itself is lightweight, but you will need a 64-bit Rocky Linux 10 installation, a user account with sudo privileges, a few gigabytes of free disk space for images and containers, and a working internet connection to reach the Docker repositories.
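
You can quickly confirm the installed release and CPU architecture before proceeding:

cat /etc/rocky-release
uname -m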

Verifying Existing Docker Installations

Before proceeding, it’s a good practice to check if Docker or any of its related packages are already installed on your system. Sometimes, previous attempts at installation or other containerization tools might have left remnants.

To check for existing Docker installations, you can use the following command:

sudo dnf list installed | grep docker

If this command returns any output, it indicates that Docker or related packages are already present. In such cases, it might be advisable to uninstall them to ensure a clean installation. You can do this with:

sudo dnf remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine docker-ce docker-ce-cli containerd.io docker-compose-plugin

Note: The exact package names might vary slightly depending on previous installation methods, and it is perfectly fine if dnf reports that none of these packages are installed. Always perform a thorough check before moving on.

Step-by-Step Installation of Docker on Rocky Linux 10

We will now proceed with the installation of Docker CE (Community Edition) on your Rocky Linux 10 system. This is the most common and recommended version for most use cases, offering a balance of features and stability.

1. Updating Your System

The first and most critical step is to ensure your Rocky Linux 10 system is fully updated. This synchronizes your package index and upgrades existing packages to their latest versions, minimizing potential conflicts and ensuring you have the most secure and stable base.

Open your terminal and execute the following commands:

sudo dnf update -y

This command updates all installed packages to their latest versions available in the configured repositories. The -y flag automatically confirms any prompts, making the process non-interactive.

2. Installing Required Packages for Docker Repository

To install Docker from its official repository, we first need to install the dnf-plugins-core package, which provides the dnf config-manager utility. This utility is used to add new repositories to your system.

Execute the following command:

sudo dnf install dnf-plugins-core -y

This command installs the necessary plugin that allows DNF to manage repository configurations more effectively.

3. Adding the Official Docker Repository

Now, we will add the official Docker CE repository to your DNF package manager. This repository contains the latest stable releases of Docker.

Execute the following command to add the repository:

sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Explanation:

The dnf config-manager --add-repo command downloads the docker-ce.repo file and places it in /etc/yum.repos.d/, where DNF picks it up automatically. Docker does not publish a separate repository for Rocky Linux; the CentOS repository referenced here is fully compatible with Rocky Linux 10.
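
To confirm the repository was registered, listing the enabled repositories should now show a docker-ce-stable entry:

dnf repolist | grep -i docker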

4. Installing Docker Engine

With the repository added, we can now install the Docker Engine. This includes the Docker daemon, the Docker CLI (Command Line Interface), and the Docker Compose plugin.

Run the following command to install Docker CE:

sudo dnf install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y

Breakdown of Packages:

docker-ce: the Docker Engine daemon (dockerd) itself.
docker-ce-cli: the docker command-line client used to interact with the daemon.
containerd.io: the containerd container runtime that Docker Engine builds on.
docker-compose-plugin: the Docker Compose V2 plugin, invoked as docker compose.

This command will download and install all the necessary components for Docker to function.

5. Starting and Enabling the Docker Service

After the installation is complete, the Docker service will be available, but it might not be running or configured to start automatically on boot. We need to start the Docker service and enable it to launch on system startup.

First, start the Docker service:

sudo systemctl start docker

Next, enable the Docker service to start automatically at boot time:

sudo systemctl enable docker

You can check the status of the Docker service to ensure it’s running correctly:

sudo systemctl status docker

You should see output indicating that the service is active and running. Press q to exit the status view.
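
You can also confirm the installed client and server versions at this point:

sudo docker version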

6. Verifying Docker Installation with the hello-world Container

The ultimate test of a successful Docker installation is to run a simple container. Docker provides a test image called hello-world for this purpose. This container will print an informational message and then exit.

To run the hello-world container, execute the following command:

sudo docker run hello-world

If the installation is successful, you will see output similar to this:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
...
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.
...

This output confirms that Docker is installed, running, and capable of pulling images and running containers.

7. Post-Installation: Managing Docker as a Non-Root User

By default, running Docker commands requires sudo. This is a security measure to prevent unauthorized access. However, for convenience, especially during development or routine tasks, you might want to allow your regular user to run Docker commands without sudo.

To achieve this, you need to add your user to the docker group. Keep in mind that membership in the docker group grants privileges equivalent to root on the host, so only add trusted users.

First, create the docker group if it doesn’t already exist (though it’s usually created during installation):

sudo groupadd docker

Then, add your current user to the docker group:

sudo usermod -aG docker $USER

Explanation:

The -aG options append (-a) the user to the supplementary group (-G) named docker without removing any existing group memberships, and $USER expands to the name of the currently logged-in user.

After executing this command, you need to either log out and log back in for the group changes to take effect, or use the newgrp command to apply the group membership to your current shell session:

newgrp docker

Once you have logged back in or used newgrp, you should be able to run Docker commands without sudo:

docker run hello-world

If you can run this command without sudo, your user has been successfully added to the docker group.
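
You can also confirm the group membership directly:

id -nG | grep docker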

Configuring Docker Daemon Settings

While the default Docker daemon configuration is suitable for many scenarios, you might need to adjust certain settings to optimize performance or meet specific application requirements. Docker daemon configuration is typically managed through a JSON file.

Locating the Docker Daemon Configuration File

The primary configuration file for the Docker daemon is located at /etc/docker/daemon.json. If this file does not exist, you can create it.
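
The /etc/docker directory is normally created during installation; if daemon.json itself is missing, a minimal starting point is an empty JSON object (the guard below avoids overwriting an existing file):

sudo mkdir -p /etc/docker
[ -f /etc/docker/daemon.json ] || echo '{}' | sudo tee /etc/docker/daemon.json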

Common Daemon Configuration Options

You can customize various aspects of the Docker daemon by modifying the daemon.json file. Here are some commonly used configurations:

1. Setting the Default Log Driver

By default, Docker uses the json-file log driver, which writes container logs to files on the host. For enterprise environments, you might want to use a more robust logging solution like syslog or journald.

To configure Docker to use journald as the default log driver:

  1. Create or edit the daemon.json file:

    sudo nano /etc/docker/daemon.json
    

    (You can use vi or your preferred text editor instead of nano).

  2. Add the following content:

    {
      "log-driver": "journald"
    }
    
  3. Save and exit the file.

  4. Restart the Docker daemon for the changes to take effect:

    sudo systemctl restart docker
    
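You can verify the active log driver afterwards:

docker info | grep "Logging Driver"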

2. Configuring the Docker Daemon’s Storage Driver

The storage driver determines how Docker stores images and manages container layers. overlay2 is the recommended storage driver for most Linux distributions, including Rocky Linux 10, as it offers better performance and disk space efficiency compared to older drivers. Your installation likely defaults to overlay2, but you can explicitly set it.

To ensure overlay2 is used:

  1. Edit the daemon.json file:

    sudo nano /etc/docker/daemon.json
    
  2. Add or update the storage-driver key:

    {
      "storage-driver": "overlay2"
    }
    

    If you have other configurations, ensure the JSON remains valid. For example:

    {
      "log-driver": "journald",
      "storage-driver": "overlay2"
    }
    
  3. Save and exit the file.

  4. Restart the Docker daemon:

    sudo systemctl restart docker
    

You can verify the storage driver being used by running:

docker info | grep "Storage Driver"

3. Setting the Docker Daemon’s Data Directory

By default, Docker stores its data (images, containers, volumes, etc.) in /var/lib/docker. If you need to relocate this directory to a different disk or partition with more space, you can configure it in daemon.json.

  1. Create a new directory for Docker data:

    sudo mkdir /path/to/new/docker-data
    

    Replace /path/to/new/docker-data with your desired location.

  2. Edit the daemon.json file:

    sudo nano /etc/docker/daemon.json
    
  3. Add the data-root key:

    {
      "data-root": "/path/to/new/docker-data"
    }
    

    Again, ensure your JSON is valid and merge this with any existing configurations.

  4. Save and exit the file.

  5. Stop the Docker service:

    sudo systemctl stop docker
    
  6. Copy the existing Docker data to the new location:

    sudo cp -au /var/lib/docker/. /path/to/new/docker-data/
    

    The -a flag preserves permissions and ownership, and -u copies only if the source file is newer than the destination file or when the destination file is missing.

  7. Remove the old Docker data directory:

    sudo rm -rf /var/lib/docker
    

    Caution: Ensure you have successfully copied the data before deleting the original directory.

  8. Restart the Docker daemon:

    sudo systemctl start docker
    
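You can confirm the new location by checking the Docker Root Dir reported by the daemon:

docker info | grep "Docker Root Dir"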

Important Note on daemon.json Syntax

The daemon.json file must be valid JSON. Any syntax errors, such as trailing commas or missing quotes, will prevent the Docker daemon from starting.
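
Before restarting the daemon, you can sanity-check the file with any JSON validator; for example, assuming python3 is available on the host:

python3 -m json.tool /etc/docker/daemon.json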

Managing Docker Containers and Images

Once Docker is installed and running, you’ll want to know how to interact with it. Here are some fundamental commands for managing containers and images.

Pulling Docker Images

Docker images are the building blocks for containers. You can pull images from Docker Hub (the default registry) or other private registries.

To pull a specific image, for example, ubuntu:

docker pull ubuntu:latest

This command downloads the latest version of the Ubuntu image. You can specify different tags for different versions (e.g., ubuntu:20.04).

Listing Docker Images

To see all the images you have downloaded locally:

docker images

This will display a table with information about each image, including its repository, tag, image ID, creation date, and size.

Running Docker Containers

To run a container from an image, you use the docker run command.

A basic example to run an Ubuntu container and execute a command:

docker run ubuntu:latest echo "Hello from an Ubuntu container!"

This will run an Ubuntu container, execute the echo command inside it, and then the container will exit.

To run a container in interactive mode and attach to its terminal:

docker run -it ubuntu:latest /bin/bash

Once inside the container, you can execute commands. Type exit to leave the container and stop it.

To run a container in detached mode (in the background):

docker run -d nginx

This starts an Nginx web server in the background.

Listing Running Docker Containers

To see all containers that are currently running:

docker ps

To see all containers, including those that have stopped:

docker ps -a

Stopping and Starting Docker Containers

To stop a running container, you need its container ID or name:

docker stop <container_id_or_name>

To start a stopped container:

docker start <container_id_or_name>

Removing Docker Containers and Images

To remove a stopped container:

docker rm <container_id_or_name>

To remove an image:

docker rmi <image_id_or_name>

You can only remove an image if no containers are using it.

To remove all stopped containers:

docker container prune -f

To remove dangling images (untagged images not referenced by any container):

docker image prune -f

To remove all stopped containers, all unused networks, all unused images (not just dangling ones), and the build cache in one go:

docker system prune -a -f

This is a powerful command for freeing up disk space.
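
Before pruning, you can review how much space images, containers, and volumes currently consume:

docker system df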

Docker Compose: Orchestrating Multi-Container Applications

For applications composed of multiple services (e.g., a web server, a database, a caching layer), Docker Compose is an invaluable tool. It allows you to define and manage these applications using a YAML file.

Installing Docker Compose

In current Docker releases, Docker Compose (V2) is provided as a plugin to the Docker CLI and is invoked as docker compose rather than the older standalone docker-compose binary. If you followed the installation steps and included docker-compose-plugin, you already have it.

You can verify its installation by running:

docker compose version

Because the plugin is an ordinary dnf package, it is updated along with the other Docker packages during system updates; for alternative installation methods, refer to the official Docker documentation.

Creating a docker-compose.yml File

A docker-compose.yml file defines the services, networks, and volumes for your application.

Here’s a simple example for a web application with a database:

version: '3.8'

services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - ./html:/usr/share/nginx/html
    depends_on:
      - db

  db:
    image: postgres:13
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: mydatabase
    volumes:
      - db_data:/var/lib/postgresql/data

volumes:
  db_data:

Running Applications with Docker Compose

  1. Save the above content into a file named docker-compose.yml in your project directory.
  2. Create an html directory in the same location and add an index.html file for Nginx to serve (a shell example follows this list).
  3. Navigate to the directory containing docker-compose.yml in your terminal.
  4. Start the application in detached mode:
    docker compose up -d
    
  5. Stop the application:
    docker compose down
    
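Putting the steps together, a minimal way to prepare and test the stack from the shell looks like this (the index.html content is just a placeholder):

mkdir -p html
echo '<h1>Hello from Nginx on Rocky Linux 10</h1>' > html/index.html
docker compose up -d
curl http://localhost:8080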

This provides a streamlined way to manage complex application stacks on your Rocky Linux 10 server.

Troubleshooting Common Docker Issues on Rocky Linux 10

While the installation process is generally straightforward, you might encounter a few issues. Here are some common problems and their solutions.

Docker Daemon Not Starting

If the Docker service fails to start, inspect its status and recent log entries. On a fresh installation, the most common culprit is a syntax error in /etc/docker/daemon.json; temporarily moving that file aside and restarting the service will confirm whether the configuration is at fault.
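
The service state and the last log entries usually reveal the cause (standard systemd tooling):

sudo systemctl status docker
sudo journalctl -u docker --no-pager -n 50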

“Cannot connect to the Docker daemon” Error

This usually means the Docker daemon is not running, or your user doesn’t have the necessary permissions. Start the service with sudo systemctl start docker, and either prefix your commands with sudo or add your user to the docker group as described in the post-installation section, then log out and back in.

Docker Build Failures

Build failures are most often caused by network problems while pulling base images, typos in the Dockerfile, or insufficient disk space. The docker build output identifies the failing instruction; re-running the build with --no-cache or --progress=plain can help isolate the problem.

Network Issues with Containers

If containers cannot reach the network or each other, check whether firewalld rules on the Rocky Linux host are interfering, inspect the relevant network with docker network inspect bridge, and verify DNS resolution from inside a container. Restarting the Docker service after firewall changes recreates the firewall rules Docker needs.
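
As a quick connectivity and DNS check from inside a throwaway container (busybox is used here purely as an example image):

docker run --rm busybox ping -c 2 8.8.8.8
docker run --rm busybox nslookup docker.com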

Security Best Practices for Docker on Rocky Linux 10

Running Docker in an enterprise environment necessitates a strong focus on security. Implementing these best practices will significantly harden your Docker deployment on Rocky Linux 10.

Regularly Update Docker and Rocky Linux

Keeping both Docker and the underlying operating system up-to-date is the most fundamental security measure. Updates often include critical security patches that protect against known vulnerabilities.

sudo dnf update -y
sudo systemctl restart docker # After updating Docker packages

Run Docker Containers as Non-Root Users

Whenever possible, configure your containers to run applications as a non-root user. If a container’s process is compromised, running as a non-root user limits the potential damage it can inflict on the host system.

In your Dockerfile:

# ... other instructions
# create an unprivileged user to run the application (the name is an example)
RUN useradd --create-home nonrootuser
USER nonrootuser
CMD ["your_application"]

Minimize Container Privileges

Avoid running containers with the --privileged flag unless absolutely necessary. This flag grants the container extensive access to the host system’s devices and capabilities. If a process inside a privileged container is compromised, it can lead to a full host compromise.
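
A common alternative is to drop all Linux capabilities and add back only what the workload actually needs; this is a sketch, with your_image standing in for a real image:

docker run -d --cap-drop ALL --cap-add NET_BIND_SERVICE your_image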

Use Read-Only Root Filesystems

Configure your containers to use a read-only root filesystem (--read-only). This prevents any modifications to the container’s base image, making it more resilient to tampering and ensuring immutability. You can then use volumes for persistent data that needs to be written.

docker run -d --read-only your_image
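
If the application needs somewhere writable, you can keep the root filesystem read-only and mount a tmpfs or a named volume for just those paths (app_data and /data are placeholder names):

docker run -d --read-only --tmpfs /tmp -v app_data:/data your_image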

Manage Docker Secrets

Do not hardcode sensitive information like passwords or API keys directly into your Dockerfile or docker-compose.yml files. Instead, use Docker secrets or environment variables managed securely. Docker Swarm and Kubernetes have built-in secrets management. For Docker standalone, consider using environment variables passed at runtime.
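
For a standalone host, one common pattern is to keep secrets in a file that is excluded from version control and pass it at runtime (app.env and your_image are placeholders):

docker run -d --env-file ./app.env your_image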

Scan Container Images for Vulnerabilities

Utilize container vulnerability scanning tools (e.g., Trivy, Clair) to scan your container images for known security vulnerabilities before deploying them. Integrate these scans into your CI/CD pipeline.
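
For example, assuming Trivy is installed on the host, a local image can be scanned with:

trivy image your_image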

Secure Your Docker Daemon Configuration

As discussed earlier, configure your daemon.json carefully. Limit unnecessary daemon features and ensure proper logging is in place.

Implement Network Segmentation

Use Docker networks to isolate your containers. Create specific networks for different applications or tiers of your application to limit the lateral movement of potential threats. Avoid using the default bridge network for sensitive workloads if isolation is critical.
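
Creating a dedicated bridge network and attaching only the containers that belong together is straightforward; containers on the same user-defined network can reach each other by name (app_backend and api are placeholder names):

docker network create app_backend
docker run -d --network app_backend --name api your_image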

Conclusion

Installing and configuring Docker on Rocky Linux 10 provides a robust and stable platform for your containerized workloads. By following the detailed steps outlined in this guide, you can ensure a successful installation, gain control over your Docker environment through configuration, and confidently manage your containers and images. Embracing Docker on Rocky Linux 10 equips your organization with a powerful, efficient, and scalable solution for modern application deployment, paving the way for enhanced agility and operational excellence. At Tech Today, we are committed to bringing you the most relevant and practical information to navigate the ever-evolving landscape of technology. This comprehensive setup on an enterprise-grade Linux distribution is a testament to building resilient and high-performing container infrastructures.