Working with the Container Storage library and tools in Red Hat Enterprise Linux (2024)

How containers are stored on disk is often a mystery to users working with containers. In this post, we're going to look at how container images are stored and at some of the tools you can use to work with those images directly: Podman, Skopeo, and Buildah.

Evolution of Container Image Storage

When I first started working with containers, one of the things I did not like about Docker's architecture was that the daemon hid the information about the image store within itself. The only realistic way someone could use the images was through the daemon. We were working on the atomic tool and wanted a way to mount the container images so that we could scan them. After all, a container image is just a mount point under devicemapper or overlay.

The container runtime team at Red Hat created the atomic mount command to mount images under Docker, and this was used within atomic scan. The issue was that the daemon did not know about this, so if someone attempted to remove the image while we had it mounted, the daemon would get confused. All locking and manipulation had to be done within the daemon.

When we began to create new container engines, the first thing we needed was a new library, containers/storage, that did not require a daemon to control it. We wanted to allow multiple tools to use the storage at the same time, without needing to know about each other.

We use file system locking to control access to the storage data. The first step was to separate out the storage code from the Docker project, called the graphdriver. These graphdrivers implement different copy-on-write (COW) storage backends, including overlay, devicemapper, btrfs, zfs, vfs, aufs, and others. If you want to use the library in a Go project, you just create a store and use it.

Note that the containers/storage library is not related to the Container Storage Interface (CSI). Containers/storage is about storing container images on COW file systems, while CSI is for the volumes that containers write to. For example, you might use a CSI volume to store the database used by the MariaDB container, while the MariaDB container image itself is stored in containers/storage. I hope this clears up any confusion.

Storage Configuration

Container storage configuration is defined in the storage.conf file. For container engines that run as root, the storage.conf file is stored in /etc/containers/storage.conf. If you are running rootless with a tool like Podman, then the storage.conf file is stored in $HOME/.config/containers/storage.conf.

Now let’s look at storage.conf.

$ cat /etc/containers/storage.conf
# This file is the configuration file for all tools
# that use the containers/storage library.
# See man 5 containers-storage.conf for more information
# The "container storage" table contains all of the server options.
[storage]
# Default Storage Driver
driver = "overlay"

The driver field is critical. In containers/storage we default to the overlay driver. In the Docker world there are two overlay drivers, overlay and overlay2; today most users use the overlay2 driver, so we just use that one and called it overlay. If you accidentally specify overlay2 in the config, containers/storage is smart enough to alias it to overlay.

# Temporary storage location
runroot = "/var/run/containers/storage"
# Primary Read/Write location of container storage
graphroot = "/var/lib/containers/storage"

The graphroot defines the location where the actual images will be stored. We recommend that you set up a lot of space at this location, since people tend to store lots of images over time. No special tools are required to set up storage; set it up in any manner that best fits your needs using standard Linux commands, but we recommend that you mount a large device on /var/lib/containers.
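For example, a minimal sketch of dedicating a filesystem to container storage via /etc/fstab (the device name /dev/sdb1 and the xfs filesystem type are hypothetical placeholders; use whatever fits your environment):

```
# /etc/fstab entry mounting a dedicated large disk over the graphroot
/dev/sdb1  /var/lib/containers  xfs  defaults  0 0
```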

[storage.options]
# Storage options to be passed to underlying storage drivers

There are a lot of per-graphdriver storage options. Some of these allow you to do interesting things with container storage; I will talk about some of them below.

# AdditionalImageStores is used to pass paths to additional Read/Only image stores
# Must be comma separated list.
additionalimagestores = []

Additional image stores is a cool feature that allows you to set up additional read-only stores of images. For example, you could set up an NFS share with many overlay container images and share them with all of your container engines via NFS. Then, rather than requiring each node running a container engine to pull down huge images, the engines can use the images on the NFS store directly and start their containers.
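A sketch of what that might look like in storage.conf (the NFS mount point /mnt/nfs/containers/storage is a hypothetical placeholder):

```toml
[storage.options]
# Read-only image store shared across hosts, e.g. an NFS mount
additionalimagestores = [
  "/mnt/nfs/containers/storage",
]
```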

# Size is used to set a maximum size of the container image. Only supported by
# certain container storage drivers.
size = ""

Size controls the maximum size of a container image. If you are running a system where lots of users are going to be pulling images, you might want to set a quota to make sure that no user is able to pull in huge images. OpenShift, for example, uses this feature to control its users, especially in OpenShift Online.
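For instance, a sketch that caps images at 10 GB (the value is illustrative, and as the comment above notes, only certain storage drivers honor it):

```toml
[storage.options]
# Reject container images larger than 10GB
size = "10G"
```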

# Path to a helper program to use for mounting the file system instead of mounting it
# directly.
# mount_program = "/usr/bin/fuse-overlayfs"
# mountopt specifies comma separated list of extra mount options
mountopt = "nodev"

This flag allows you to pass special mount options to the drivers. For example, setting the nodev option prevents users from using device nodes that show up in a container image. Container engines provide devices on a tmpfs mounted at /dev, so there is no reason to have devices embedded in images, especially when they could be used to circumvent security.
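As a sketch, extra options go in one comma-separated string; the nosuid option here is an additional hardening choice I am adding purely for illustration:

```toml
[storage.options]
# Extra mount options applied when mounting image layers
mountopt = "nodev,nosuid"
```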

# Remap-UIDs/GIDs is the mapping from UIDs/GIDs as they should appear inside of
# a container, to UIDs/GIDs as they should appear outside of the container, and
# the length of the range of UIDs/GIDs. Additional mapped sets can be listed
# and will be heeded by libraries, but there are limits to the number of
# mappings which the kernel will allow when you later attempt to run a
# container.
#
# remap-uids = 0:1668442479:65536
# remap-gids = 0:1668442479:65536

The remap-uids and remap-gids flags tell containers/storage to store images in a remapped format, for use within the specified user namespace. If you set remap-uids to 0:100000:65536, this tells containers/storage, when storing images, to remap files owned by UID 0 to UID 100,000, UID 1 to 100,001, UID 2 to 100,002, and so on through the 65,536 UIDs in the range. Now if a container engine runs a container within that mapping, it will run more securely using the UIDs associated with the user rather than root.
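The arithmetic behind a 0:100000:65536 mapping is simple; this little shell sketch (illustrative only — the real remapping is done by containers/storage and the kernel's user-namespace code) computes the host UID for a given container UID:

```shell
# remap-uids = 0:100000:65536 maps container UID u to host UID 100000 + u,
# for u in the range 0..65535
host_base=100000
container_uid=2
host_uid=$((host_base + container_uid))
echo "$host_uid"    # prints 100002
```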

# Remap-User/Group is a name which can be used to look up one or more UID/GID
# ranges in the /etc/subuid or /etc/subgid file. Mappings are set up starting
# with an in-container ID of 0 and then a host-level ID taken from the lowest
# range that matches the specified name, and using the length of that range.
# Additional ranges are then assigned, using the ranges which specify the
# lowest host-level IDs first, to the lowest not-yet-mapped container-level ID,
# until all of the entries have been used for maps.
#
# remap-user = "storage"
# remap-group = "storage"

[storage.options.thinpool]
# Storage Options for thinpool
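For example, a sketch of the matching entries in /etc/subuid and /etc/subgid for a user named storage (the range start and length are illustrative):

```
# /etc/subuid (and the same line in /etc/subgid)
storage:100000:65536
```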

The rest of the options are used for the creation of a thinpool with drivers like devicemapper, along with a few other options. You can refer to the /etc/containers/storage.conf file on disk for descriptions, as well as the storage.conf(5) man page, for further information.

Using container storage

Container engines and tools like Podman, Buildah, CRI-O, and Skopeo share container storage at the same time. They can all see each other's images and can be used in conjunction with each other or totally separately, coordinated by file locks. This means, for example, that podman can mount and examine a container. While they share the actual image storage, they do not necessarily share container information. Some tools have different use cases for containers and will not display other tools' containers. For example, buildah creates build containers just for the process of building container images; since these do not require all of the content of a podman container, it keeps a separate database. Both tools can remove each other's container images, but they treat their containers separately.

# podman create -ti --name fedora-ctr fedora sh
ed4b68304e9fbbbc527593c28c917535e1d67d7d5c3f25edc568b71275ab69fc
# podman mount fedora-ctr
/var/lib/containers/storage/overlay/b16991596db22b90b78ef10276e2ae73a1c2ca9605014cad95aac00bff6608bc/merged
# ls /var/lib/containers/storage/overlay/b16991596db22b90b78ef10276e2ae73a1c2ca9605014cad95aac00bff6608bc/merged
bin boot dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var

Meanwhile, buildah and CRI-O can use the same Fedora image:

# buildah from fedora
fedora-working-container

You can even use Skopeo to preload containers/storage at boot time, to have the container images ready for use by any of the container tools. And remember, there is no container daemon controlling this access, just standard file system tools.
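One way to sketch such a preload is a oneshot systemd unit that runs skopeo copy into the containers-storage transport (the unit file, service name, and image name here are hypothetical examples, not from the original post):

```
# /etc/systemd/system/preload-fedora-image.service
[Unit]
Description=Preload the Fedora image into containers/storage
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/bin/skopeo copy docker://registry.fedoraproject.org/fedora:latest containers-storage:registry.fedoraproject.org/fedora:latest

[Install]
WantedBy=multi-user.target
```

Once enabled, any tool that uses containers/storage (podman, buildah, CRI-O) sees the image locally without pulling it.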

In the next post, I will show you more things you can do with containers/storage.

