
How to Install Docker on Rocky Linux 10: A Comprehensive Guide for Enterprise Containerization
Welcome to Tech Today, your trusted source for cutting-edge technology insights and actionable guides. In this in-depth article, we will walk you through the process of installing Docker on Rocky Linux 10, a robust and community-driven distribution that serves as a formidable Enterprise Linux foundation. For organizations leveraging containerization to streamline development, deployment, and management of applications, understanding the intricacies of setting up Docker on a stable and performant operating system like Rocky Linux 10 is paramount. This guide is meticulously crafted to provide you with a thorough, step-by-step approach, ensuring a successful and efficient installation for your containerized workloads. We aim to equip you with the knowledge necessary to harness the full power of Docker on this enterprise-grade platform.
Understanding Docker and Its Importance on Enterprise Linux
Before we dive into the installation process, it’s crucial to appreciate what Docker offers and why its integration with Rocky Linux 10 is a strategic advantage for businesses. Docker is an open-source platform that automates the deployment, scaling, and management of applications inside containers. These containers encapsulate an application and its dependencies, ensuring that it runs consistently across different computing environments, from a developer’s laptop to a production server.
The benefits of using Docker in an enterprise setting are manifold:
- Consistency and Portability: Docker containers provide an isolated environment, guaranteeing that applications behave the same way regardless of the underlying infrastructure. This significantly reduces “it works on my machine” issues.
- Faster Deployment Cycles: The ability to package and deploy applications quickly and reliably accelerates development and release cycles, enabling businesses to respond faster to market demands.
- Resource Efficiency: Containers are more lightweight than traditional virtual machines, consuming fewer resources and allowing for higher density deployments on the same hardware.
- Scalability: Docker facilitates easy scaling of applications by simply launching more instances of the containerized application, making it ideal for dynamic workloads.
- Simplified Management: Docker’s ecosystem provides tools for managing container lifecycles, networking, and storage, simplifying the overall operational overhead.
Rocky Linux 10, a free, community-driven enterprise operating system created as a downstream rebuild of Red Hat Enterprise Linux after CentOS Linux was discontinued, offers a stable, secure, and predictable environment. Its commitment to long-term support and its compatibility with enterprise-grade software make it an excellent choice for running demanding containerized applications. By combining Docker with Rocky Linux 10, organizations can build a powerful and flexible infrastructure for their modern application needs.
Prerequisites for Docker Installation on Rocky Linux 10
To ensure a smooth and successful installation of Docker on your Rocky Linux 10 system, we need to confirm a few prerequisites are met. These are standard for most Linux installations and are essential for the Docker daemon and its associated components to function correctly.
System Requirements
- Rocky Linux 10 Installation: You must have a functional installation of Rocky Linux 10. This guide assumes you are operating on this specific version.
- Internet Connectivity: An active internet connection is required to download the necessary Docker packages and their dependencies from official repositories.
- Sudo Privileges: You will need a user account with sudo privileges. Most administrative tasks, including package installation and service management, require root or superuser permissions.
- System Updates: It is highly recommended to ensure your Rocky Linux 10 system is up-to-date. This includes applying the latest security patches and software updates. An outdated system might have compatibility issues or missing dependencies.
- Hardware: While Docker itself is resource-efficient, the applications you plan to run within containers will dictate your hardware requirements. For basic Docker operation, a system with at least 2GB of RAM and a dual-core processor is generally sufficient.
Verifying Existing Docker Installations
Before proceeding, it’s a good practice to check if Docker or any of its related packages are already installed on your system. Sometimes, previous attempts at installation or other containerization tools might have left remnants.
To check for existing Docker installations, you can use the following command:
sudo dnf list installed | grep docker
If this command returns any output, it indicates that Docker or related packages are already present. In such cases, it might be advisable to uninstall them to ensure a clean installation. You can do this with:
sudo dnf remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine docker-ce docker-ce-cli containerd.io docker-compose-plugin
Note: The exact package names might vary slightly depending on previous installation methods. Always perform a thorough check.
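If you also want to discard the images, containers, and volumes left behind by a previous installation, the default data directories can be removed as well. This is destructive, so only do it if you are certain nothing in them is still needed:
sudo rm -rf /var/lib/docker
sudo rm -rf /var/lib/containerd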
Step-by-Step Installation of Docker on Rocky Linux 10
We will now proceed with the installation of Docker CE (Community Edition) on your Rocky Linux 10 system. This is the most common and recommended version for most use cases, offering a balance of features and stability.
1. Updating Your System
The first and most critical step is to ensure your Rocky Linux 10 system is fully updated. This synchronizes your package index and upgrades existing packages to their latest versions, minimizing potential conflicts and ensuring you have the most secure and stable base.
Open your terminal and execute the following commands:
sudo dnf update -y
This command updates all installed packages to their latest versions available in the configured repositories. The -y flag automatically confirms any prompts, making the process non-interactive.
2. Installing Required Packages for Docker Repository
To install Docker from its official repository, we first need to install the dnf-plugins-core package, which provides the dnf config-manager utility. This utility is used to add new repositories to your system.
Execute the following command:
sudo dnf install dnf-plugins-core -y
This command installs the necessary plugin that allows DNF to manage repository configurations more effectively.
3. Adding the Official Docker Repository
Now, we will add the official Docker CE repository to your DNF package manager. This repository contains the latest stable releases of Docker.
Execute the following command to add the repository:
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Explanation:
- sudo dnf config-manager: This is the command to manage DNF repositories.
- --add-repo: This flag tells config-manager to add a new repository.
- https://download.docker.com/linux/centos/docker-ce.repo: This is the URL to the Docker CE repository configuration file for RHEL-based distributions, which includes Rocky Linux. Even though it points to centos, it is compatible with Rocky Linux due to their shared origins.
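Before moving on, you can confirm that the repository was registered; the new entry should appear as docker-ce-stable:
sudo dnf repolist | grep -i docker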
4. Installing Docker Engine
With the repository added, we can now install the Docker Engine. This includes the Docker daemon, the Docker CLI (Command Line Interface), and the Docker Compose plugin.
Run the following command to install Docker CE:
sudo dnf install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y
Breakdown of Packages:
- docker-ce: This is the main Docker Engine package.
- docker-ce-cli: This package provides the command-line interface for interacting with the Docker daemon.
- containerd.io: This is a core container runtime that Docker relies on.
- docker-compose-plugin: This installs the Docker Compose functionality as a plugin for the Docker CLI, allowing you to orchestrate multi-container Docker applications using docker compose.
This command will download and install all the necessary components for Docker to function.
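Even before starting the service, you can confirm the client components landed correctly by checking their versions:
docker --version
docker compose version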
5. Starting and Enabling the Docker Service
After the installation is complete, the Docker service will be available, but it might not be running or configured to start automatically on boot. We need to start the Docker service and enable it to launch on system startup.
First, start the Docker service:
sudo systemctl start docker
Next, enable the Docker service to start automatically at boot time:
sudo systemctl enable docker
You can check the status of the Docker service to ensure it’s running correctly:
sudo systemctl status docker
You should see output indicating that the service is active and running. Press q to exit the status view.
6. Verifying Docker Installation with the hello-world Container
The ultimate test of a successful Docker installation is to run a simple container. Docker provides a test image called hello-world for this purpose. This container will print an informational message and then exit.
To run the hello-world container, execute the following command:
sudo docker run hello-world
If the installation is successful, you will see output similar to this:
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
...
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
...
This output confirms that Docker is installed, running, and capable of pulling images and running containers.
7. Post-Installation: Managing Docker as a Non-Root User
By default, running Docker commands requires sudo. This is a security measure to prevent unauthorized access. However, for convenience, especially during development or routine tasks, you might want to allow your regular user to run Docker commands without sudo.
To achieve this, you need to add your user to the docker group.
First, create the docker group if it doesn’t already exist (though it’s usually created during installation):
sudo groupadd docker
Then, add your current user to the docker group:
sudo usermod -aG docker $USER
Explanation:
- sudo usermod: This command modifies user account settings.
- -aG: This appends the user to the specified group (-a for append, -G for group).
- docker: The name of the group to add the user to.
- $USER: This is an environment variable that represents the current logged-in username.
After executing this command, you need to either log out and log back in for the group changes to take effect, or use the newgrp command to apply the group membership to your current shell session:
newgrp docker
Once you have logged back in or used newgrp, you should be able to run Docker commands without sudo:
docker run hello-world
If you can run this command without sudo, your user has been successfully added to the docker group.
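You can also check the group membership of your current shell session directly; if the second message appears, the new group has not been picked up yet:
id -nG | grep -qw docker && echo "docker group active in this session" || echo "log out and back in, or run newgrp docker"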
Configuring Docker Daemon Settings
While the default Docker daemon configuration is suitable for many scenarios, you might need to adjust certain settings to optimize performance or meet specific application requirements. Docker daemon configuration is typically managed through a JSON file.
Locating the Docker Daemon Configuration File
The primary configuration file for the Docker daemon is located at /etc/docker/daemon.json. If this file does not exist, you can create it.
Common Daemon Configuration Options
You can customize various aspects of the Docker daemon by modifying the daemon.json file. Here are some commonly used configurations:
1. Setting the Default Log Driver
By default, Docker uses the json-file log driver, which writes container logs to files on the host. For enterprise environments, you might want to use a more robust logging solution like syslog or journald.
To configure Docker to use journald as the default log driver:
Create or edit the daemon.json file:
sudo nano /etc/docker/daemon.json
(You can use vi or your preferred text editor instead of nano.)
Add the following content:
{
  "log-driver": "journald"
}
Save and exit the file.
Restart the Docker daemon for the changes to take effect:
sudo systemctl restart docker
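To confirm the daemon picked up the new setting, you can query its reported logging driver; the output should read journald:
docker info --format '{{.LoggingDriver}}'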
2. Configuring the Docker Daemon’s Storage Driver
The storage driver determines how Docker stores images and manages container layers. overlay2 is the recommended storage driver for most Linux distributions, including Rocky Linux 10, as it offers better performance and disk space efficiency compared to older drivers. Your installation likely defaults to overlay2, but you can explicitly set it.
To ensure overlay2 is used:
Edit the daemon.json file:
sudo nano /etc/docker/daemon.json
Add or update the storage-driver key:
{
  "storage-driver": "overlay2"
}
If you have other configurations, ensure the JSON remains valid. For example:
{
  "log-driver": "journald",
  "storage-driver": "overlay2"
}
Save and exit the file.
Restart the Docker daemon:
sudo systemctl restart docker
You can verify the storage driver being used by running:
docker info | grep "Storage Driver"
3. Setting the Docker Daemon’s Data Directory
By default, Docker stores its data (images, containers, volumes, etc.) in /var/lib/docker. If you need to relocate this directory to a different disk or partition with more space, you can configure it in daemon.json.
Create a new directory for Docker data:
sudo mkdir /path/to/new/docker-data
Replace /path/to/new/docker-data with your desired location.
Edit the daemon.json file:
sudo nano /etc/docker/daemon.json
Add the data-root key:
{
  "data-root": "/path/to/new/docker-data"
}
Again, ensure your JSON is valid and merge this with any existing configurations.
Save and exit the file.
Stop the Docker service:
sudo systemctl stop docker
Copy the existing Docker data to the new location:
sudo cp -au /var/lib/docker/. /path/to/new/docker-data/
The -a flag preserves permissions and ownership, and -u copies only if the source file is newer than the destination file or when the destination file is missing.
Remove the old Docker data directory:
sudo rm -rf /var/lib/docker
Caution: Ensure you have successfully copied the data before deleting the original directory.
Restart the Docker daemon:
sudo systemctl start docker
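As a sanity check after the move, confirm that the daemon reports the new data root and that your previously pulled images are still present:
docker info --format '{{.DockerRootDir}}'
docker images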
Important Note on daemon.json Syntax
The daemon.json file must be valid JSON. Any syntax errors, such as trailing commas or missing quotes, will prevent the Docker daemon from starting.
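A quick way to catch syntax mistakes before restarting the daemon is to run the file through a JSON parser. A minimal check, assuming python3 is available on your system:
python3 -m json.tool /etc/docker/daemon.json
If the file is valid, the parsed JSON is printed; otherwise you get an error pointing at the problem.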
Managing Docker Containers and Images
Once Docker is installed and running, you’ll want to know how to interact with it. Here are some fundamental commands for managing containers and images.
Pulling Docker Images
Docker images are the building blocks for containers. You can pull images from Docker Hub (the default registry) or other private registries.
To pull a specific image, for example, ubuntu:
docker pull ubuntu:latest
This command downloads the latest version of the Ubuntu image. You can specify different tags for different versions (e.g., ubuntu:20.04).
Listing Docker Images
To see all the images you have downloaded locally:
docker images
This will display a table with information about each image, including its repository, tag, image ID, creation date, and size.
Running Docker Containers
To run a container from an image, you use the docker run command.
A basic example to run an Ubuntu container and execute a command:
docker run ubuntu:latest echo "Hello from an Ubuntu container!"
This will run an Ubuntu container, execute the echo command inside it, and then the container will exit.
To run a container in interactive mode and attach to its terminal:
docker run -it ubuntu:latest /bin/bash
- -i (interactive): Keeps STDIN open even if not attached.
- -t (tty): Allocates a pseudo-TTY, which gives you a terminal interface.
Once inside the container, you can execute commands. Type exit to leave the container and stop it.
To run a container in detached mode (in the background):
docker run -d nginx
This starts an Nginx web server in the background.
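In practice, you will usually also publish a port and give the container a name so it is reachable and easy to manage. A small example (the name web and host port 8080 are arbitrary choices for illustration):
docker run -d --name web -p 8080:80 nginx
curl http://localhost:8080
docker stop web && docker rm web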
Listing Running Docker Containers
To see all containers that are currently running:
docker ps
To see all containers, including those that have stopped:
docker ps -a
Stopping and Starting Docker Containers
To stop a running container, you need its container ID or name:
docker stop <container_id_or_name>
To start a stopped container:
docker start <container_id_or_name>
Removing Docker Containers and Images
To remove a stopped container:
docker rm <container_id_or_name>
To remove an image:
docker rmi <image_id_or_name>
You can only remove an image if no containers are using it.
To remove all stopped containers:
docker container prune -f
To remove dangling images (untagged images left behind by builds and pulls); add the -a flag if you also want to remove every image not referenced by a container:
docker image prune -f
To remove all stopped containers, unused networks, all unused images, and the build cache in one pass:
docker system prune -a -f
This is a powerful command for freeing up disk space.
Docker Compose: Orchestrating Multi-Container Applications
For applications composed of multiple services (e.g., a web server, a database, a caching layer), Docker Compose is an invaluable tool. It allows you to define and manage these applications using a YAML file.
Installing Docker Compose
As of Docker’s recent versions, Docker Compose is provided as a plugin to the Docker CLI. If you followed the installation steps using docker-compose-plugin, you should already have it.
You can verify its installation by running:
docker compose version
If you need to install it separately or update it, you can follow the official Docker documentation, but typically the plugin installation covers this.
Creating a docker-compose.yml File
A docker-compose.yml file defines the services, networks, and volumes for your application.
Here’s a simple example for a web application with a database:
version: '3.8'
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - ./html:/usr/share/nginx/html
    depends_on:
      - db
  db:
    image: postgres:13
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: mydatabase
    volumes:
      - db_data:/var/lib/postgresql/data
volumes:
  db_data:
Running Applications with Docker Compose
- Save the above content into a file named docker-compose.yml in your project directory.
- Create an html directory in the same location and add an index.html file for Nginx to serve.
- Navigate to the directory containing docker-compose.yml in your terminal.
- Start the application in detached mode: docker compose up -d
- Stop the application: docker compose down
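To tie these steps together, here is a minimal end-to-end session, assuming the docker-compose.yml shown above is saved in the project directory:
mkdir -p compose-demo/html && cd compose-demo
echo '<h1>Hello from Rocky Linux 10</h1>' > html/index.html
# place the docker-compose.yml from the example above in this directory
docker compose up -d
curl http://localhost:8080
docker compose down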
This provides a streamlined way to manage complex application stacks on your Rocky Linux 10 server.
Troubleshooting Common Docker Issues on Rocky Linux 10
While the installation process is generally straightforward, you might encounter a few issues. Here are some common problems and their solutions.
Docker Daemon Not Starting
- Check systemctl status: sudo systemctl status docker
- Examine logs: Use journalctl -u docker.service to view detailed logs. Look for error messages that might indicate misconfigurations or dependency issues.
- Verify daemon.json: Ensure your /etc/docker/daemon.json file has correct JSON syntax.
- Check for port conflicts: If you’re running other services that use the same ports Docker might need, it could cause issues.
“Cannot connect to the Docker daemon” Error
This usually means the Docker daemon is not running, or your user doesn’t have the necessary permissions.
- Ensure Docker is running: sudo systemctl start docker
- Check user permissions: If you intended to run Docker without sudo, ensure your user is in the docker group and that you have logged out and back in or used newgrp docker.
- Check Docker socket permissions: The Docker daemon communicates via a Unix socket (/var/run/docker.sock). Ensure the docker group has access to this socket, as shown below.
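The following commands check, in order, that the daemon is active, that the socket is group-owned by docker, and that your current session actually has that group:
sudo systemctl is-active docker
ls -l /var/run/docker.sock
id -nG | grep -w docker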
Docker Build Failures
- Check Dockerfile syntax: Ensure your Dockerfile is correctly written.
- Verify build context: Make sure you are in the correct directory when running docker build and that all necessary files are present.
- Insufficient resources: Some build processes can be resource-intensive. Ensure your system has enough RAM and CPU.
Network Issues with Containers
- Firewall rules: Rocky Linux’s firewall (firewalld) might block container networking. Ensure that necessary ports are open if your containers need to communicate with the outside world or other services, and allow Docker’s network interfaces if required (see the example below).
- Network configuration: For advanced networking scenarios, ensure your Docker network configuration is correct. You can inspect networks with docker network ls and docker network inspect <network_name>.
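For example, if a container publishes a service on host port 8080, you can open that port in firewalld (the port number here is only an illustration):
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --reload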
Security Best Practices for Docker on Rocky Linux 10
Running Docker in an enterprise environment necessitates a strong focus on security. Implementing these best practices will significantly harden your Docker deployment on Rocky Linux 10.
Regularly Update Docker and Rocky Linux
Keeping both Docker and the underlying operating system up-to-date is the most fundamental security measure. Updates often include critical security patches that protect against known vulnerabilities.
sudo dnf update -y
sudo systemctl restart docker # After updating Docker packages
Run Docker Containers as Non-Root Users
Whenever possible, configure your containers to run applications as a non-root user. If a container’s process is compromised, running as a non-root user limits the potential damage it can inflict on the host system.
In your Dockerfile:
# ... other instructions
# Create the unprivileged user first (use adduser on Alpine-based images)
RUN useradd --create-home nonrootuser
USER nonrootuser
CMD ["your_application"]
Minimize Container Privileges
Avoid running containers with the --privileged flag unless absolutely necessary. This flag grants the container extensive access to the host system’s devices and capabilities. If a process inside a privileged container is compromised, it can lead to a full host compromise.
Use Read-Only Root Filesystems
Configure your containers to use a read-only root filesystem (--read-only). This prevents any modifications to the container’s base image, making it more resilient to tampering and ensuring immutability. You can then use volumes for persistent data that needs to be written.
docker run -d --read-only your_image
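Because most applications still need somewhere writable for temporary files and persistent data, a common pattern is to pair --read-only with a tmpfs mount and a named volume; your_image is the same placeholder as above:
docker run -d --read-only --tmpfs /tmp -v app_data:/data your_image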
Manage Docker Secrets
Do not hardcode sensitive information like passwords or API keys directly into your Dockerfile or docker-compose.yml files. Instead, use Docker secrets or environment variables managed securely. Docker Swarm and Kubernetes have built-in secrets management. For Docker standalone, consider using environment variables passed at runtime.
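As a simple illustration of the runtime environment-variable approach for standalone Docker, you can keep credentials in a file that never enters the image and pass it when starting the container (the file name, variable, and your_image placeholder are examples only):
printf 'DB_PASSWORD=change-me\n' > app.env
chmod 600 app.env
docker run -d --env-file app.env your_image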
Scan Container Images for Vulnerabilities
Utilize container vulnerability scanning tools (e.g., Trivy, Clair) to scan your container images for known security vulnerabilities before deploying them. Integrate these scans into your CI/CD pipeline.
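A minimal example, assuming Trivy is already installed, scanning the nginx image used earlier in this guide:
trivy image nginx:latest
The report lists known CVEs in the image’s packages grouped by severity, which you can gate on in your CI/CD pipeline.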
Secure Your Docker Daemon Configuration
As discussed earlier, configure your daemon.json carefully. Limit unnecessary daemon features and ensure proper logging is in place.
Implement Network Segmentation
Use Docker networks to isolate your containers. Create specific networks for different applications or tiers of your application to limit the lateral movement of potential threats. Avoid using the default bridge network for sensitive workloads if isolation is critical.
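A minimal sketch of this idea: create a user-defined bridge network and attach only the containers that belong together (the network and container names are illustrative):
docker network create app_net
docker run -d --name app_db --network app_net -e POSTGRES_PASSWORD=change-me postgres:13
docker run -d --name app_web --network app_net -p 8080:80 nginx
Containers on app_net can reach each other by name (for example, app_web can resolve app_db), while containers on other networks cannot.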
Conclusion
Installing and configuring Docker on Rocky Linux 10 provides a robust and stable platform for your containerized workloads. By following the detailed steps outlined in this guide, you can ensure a successful installation, gain control over your Docker environment through configuration, and confidently manage your containers and images. Embracing Docker on Rocky Linux 10 equips your organization with a powerful, efficient, and scalable solution for modern application deployment, paving the way for enhanced agility and operational excellence. At Tech Today, we are committed to bringing you the most relevant and practical information to navigate the ever-evolving landscape of technology. This comprehensive setup on an enterprise-grade Linux distribution is a testament to building resilient and high-performing container infrastructures.