Kubernetes Basics 2025: Complete Beginner’s Guide from an Expert

Kubernetes is a comprehensive platform designed to simplify and automate the deployment, scaling, and management of containerized applications. In modern application development, containers have become an essential building block because they bundle the application code along with all its dependencies and configuration into portable units. These containers ensure consistency across different computing environments, allowing developers and operators to move applications seamlessly from development to production.

Containers provide many benefits including automation of deployment processes, easier scaling to meet demand, and flexibility to run applications on various infrastructures. Kubernetes enhances these benefits by orchestrating containers across clusters of machines, handling tasks such as scheduling, load balancing, and self-healing. This orchestration makes it easier for organizations to maintain highly available, fault-tolerant applications while minimizing manual intervention.

The popularity of Kubernetes stems largely from its ability to automate complex operational tasks, its flexibility to work with different container runtimes and infrastructures, and its scalability to manage thousands of containers simultaneously. These features make Kubernetes an essential tool for modern cloud-native applications.

Choosing a Kubernetes Distribution

Before diving into the technical aspects of Kubernetes, it’s important to understand that there are multiple Kubernetes distributions available, each tailored to different environments and use cases. The best choice depends on your specific needs, such as the scale of deployment, hardware resources, and intended use case.

Some commonly used lightweight distributions include Minikube, Kind, MicroK8s, and K3s. These options are popular for development, testing, or smaller production environments because they require fewer resources and are simpler to install and manage.

Among these, K3s stands out as a minimalistic Kubernetes distribution that packages all essential components into a single binary. This lightweight setup makes it an excellent choice for edge computing, IoT, and resource-constrained environments while still providing a full Kubernetes experience. Additionally, K3s comes bundled with Kubectl, a command-line tool used to interact with the cluster and manage resources.

Installing Kubernetes with K3s

The installation of Kubernetes via K3s is designed to be straightforward and automated. K3s simplifies the complex installation process by providing a single executable that handles the setup of the Kubernetes control plane and worker nodes.

Once the installation command is executed, K3s downloads the necessary components and configures the Kubernetes system service. This approach allows users to quickly get a Kubernetes cluster up and running with minimal manual setup.
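As a point of reference, the standard K3s installation published by the K3s project is a single command:

```bash
# Download and install K3s; this also sets up the k3s systemd service
curl -sfL https://get.k3s.io | sh -

# Confirm the service is running
sudo systemctl status k3s
```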

After installation, users need to configure Kubectl to communicate with the newly created cluster. This involves placing the Kubernetes configuration file, which contains credentials and cluster details, in the user’s home directory. Adjusting the file permissions ensures that Kubectl has access to this configuration to manage cluster resources securely.

By exporting the path to this configuration file as an environment variable, Kubectl commands will automatically use the correct cluster context. For convenience, users often add this environment variable to their shell configuration files so it is applied automatically on login.
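A typical sequence for this looks like the following (paths assume a default, local K3s installation):

```bash
# Copy the K3s-generated kubeconfig into your home directory
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config

# Restrict access to the file so only your user can read it
sudo chown "$(id -u):$(id -g)" ~/.kube/config
chmod 600 ~/.kube/config

# Point kubectl at this configuration and persist the setting across logins
export KUBECONFIG=$HOME/.kube/config
echo 'export KUBECONFIG=$HOME/.kube/config' >> ~/.bashrc
```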

Verifying Your Kubernetes Cluster

Once the setup and configuration are complete, verifying the cluster status is essential. Using Kubectl, users can query the cluster to list all nodes and check their health and readiness.

The output of this verification command confirms that the master node (the control plane) and any worker nodes are active and ready to run workloads. This step ensures that the Kubernetes cluster is operational and ready for deploying applications.
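For example:

```bash
# List all nodes in the cluster along with their roles and readiness
kubectl get nodes -o wide
```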

Understanding the Architecture of Kubernetes

To effectively manage and deploy applications on Kubernetes, it is crucial to understand its architecture. Kubernetes follows a master-worker model consisting mainly of two types of nodes: the master node and the worker nodes. Each plays a distinct role in the cluster and works together to maintain the desired state of the applications running inside containers.

The Master Node

The master node acts as the control plane of the Kubernetes cluster. It is responsible for managing the cluster’s overall state, scheduling workloads, and making decisions to keep the system healthy and responsive.

Key components of the master node include the API server, which serves as the front end for the Kubernetes control plane. It exposes the Kubernetes API, allowing users and other components to interact with the cluster. All cluster operations such as deploying applications, scaling pods, or monitoring cluster status are performed through this API.

Another critical component is etcd, a distributed key-value store that maintains the configuration data and state of the entire cluster. It stores information such as cluster metadata, pod locations, and secrets in a consistent and highly available manner.

The controller manager is responsible for ensuring that the cluster’s desired state matches the actual state. It runs various controllers that monitor the cluster and take corrective actions if there are discrepancies, such as replacing failed pods or nodes.

The scheduler assigns newly created pods to worker nodes based on resource availability and workload requirements. It ensures an even distribution of tasks across the cluster and optimizes resource utilization.

The Worker Nodes

Worker nodes are the machines where the actual workloads run. These nodes host the containers within pods and execute the applications.

Each worker node runs several components critical for communication and operation. The kubelet is an agent that runs on every worker node and ensures that containers described in the pod specifications are running and healthy. It constantly communicates with the master node to report node and pod status.

The container runtime is the software responsible for running containers. Kubernetes supports multiple container runtimes such as Docker or containerd, which pull container images and manage container lifecycle on worker nodes.

Kube-proxy runs on each worker node and handles network routing within the cluster. It manages the communication between pods and services, including load balancing traffic to the appropriate containers.

Basic Kubernetes Concepts and Terminology

Understanding key Kubernetes terms helps in navigating and using the platform effectively. Some fundamental concepts include nodes, pods, replica sets, services, and jobs.

Nodes are the physical or virtual machines in the Kubernetes cluster that provide the resources needed to run containers.

Pods are the smallest deployable units in Kubernetes. A pod can contain one or more containers that share networking and storage resources. Pods are always scheduled on specific nodes.

ReplicaSets ensure that a specified number of pod replicas are running at any given time. They monitor pod health and replace any pods that fail or become unresponsive, maintaining availability.

Services provide stable endpoints to access pods, abstracting the dynamic nature of pods that may be created or destroyed. They enable communication within the cluster or expose applications externally, using different types of service configurations.

Jobs are used to run finite tasks that terminate after completion. They manage one or more pods to ensure a specified number of successful completions, retrying pods if they fail.
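To make these terms concrete, here is a minimal Pod manifest; the name, labels, and image are purely illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello            # a label that a ReplicaSet or Service selector could match
spec:
  containers:
    - name: hello
      image: nginx:1.27   # illustrative image; any container image works here
      ports:
        - containerPort: 80
```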

Interacting with Kubernetes Using Kubectl

Kubectl is the command-line interface used to manage Kubernetes clusters. It allows users to deploy applications, inspect cluster resources, and troubleshoot issues.

Some common kubectl commands include listing pods and deployments, creating or applying configurations, viewing detailed resource descriptions, and retrieving logs from running pods.
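A few of the most frequently used commands, for illustration:

```bash
kubectl get pods                      # list pods in the current namespace
kubectl get deployments               # list deployments
kubectl apply -f manifest.yaml        # create or update resources from a file
kubectl describe pod <pod-name>       # detailed configuration and recent events
kubectl logs <pod-name>               # logs from a running pod
kubectl delete -f manifest.yaml       # remove the resources defined in a file
```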

These commands enable users to control almost every aspect of the cluster lifecycle, from deploying new applications to scaling and updating existing ones.

Deploying Your First Application on Kubernetes

Deploying your first application on Kubernetes is a key milestone that demonstrates how container orchestration works in a real-world scenario. This process involves multiple steps, from setting up the Kubernetes environment to building and deploying container images, and finally exposing your application so it can be accessed both inside and outside the cluster. Understanding this workflow not only helps beginners grasp Kubernetes fundamentals but also lays a solid foundation for managing more complex applications in the future.

Preparing the Environment: Installing Kubernetes

Before you can deploy any application, you need a running Kubernetes cluster. Depending on your goals, the cluster could be local, cloud-based, or managed by a third party. For beginners, lightweight distributions like K3s, Minikube, or Kind are popular because they are easy to set up and require minimal resources.

Once the cluster is operational, the next step is to configure your environment so that you can interact with Kubernetes. This involves setting up kubectl, the command-line tool used for managing Kubernetes clusters. You must configure kubectl to communicate with your cluster by copying the kubeconfig file to your user’s configuration directory and setting the appropriate environment variables. This ensures all subsequent commands are directed at your Kubernetes environment.

Cloning the Application Repository

Most applications you deploy will have their source code stored in a version control system like Git. For this tutorial, you’ll clone a sample repository containing a simple Django-based todo application.

Cloning the repository is straightforward. Using the Git command-line tool, you download the entire codebase and associated files to your local machine. This repository typically contains the application source code, a Dockerfile for containerization, and sometimes Kubernetes manifests or scripts for deployment.
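The clone itself is a single command; the repository URL below is a placeholder for whichever sample project you are following:

```bash
# Placeholder URL: substitute the actual repository you are working with
git clone https://github.com/example/django-todo.git
cd django-todo
```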

Working with a ready-made repository allows you to focus on Kubernetes concepts without worrying about application logic. It also demonstrates how containerized applications are packaged and deployed in real environments.

Containerizing the Application with Docker

Before Kubernetes can run your application, it needs to be packaged into a container image. This image contains the application code, runtime, dependencies, and environment required to run the application consistently across environments.

The Dockerfile included in the repository defines how to build this image. It specifies the base image, working directory, dependencies to install, application files to copy, database migrations to run, ports to expose, and the command to start the application.
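A Dockerfile for a Django application along these lines might look like the sketch below; the base image, file names, and port are assumptions, and the repository's own Dockerfile is authoritative:

```dockerfile
# Sketch of a Dockerfile for a simple Django app; adjust to match the repository
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application source and apply database migrations
COPY . .
RUN python manage.py migrate

EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```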

Building the Docker image involves running the Docker CLI with the build command, which reads the Dockerfile and executes its instructions step-by-step. This process results in a self-contained image that can be stored in a container registry or used directly on your Kubernetes cluster.
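For example, with a placeholder registry user and tag:

```bash
# Build the image from the Dockerfile in the current directory
docker build -t <your-registry-user>/django-todo:v1 .
```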

This step illustrates the key advantage of containers: the ability to package and run applications identically regardless of the underlying infrastructure.

Pushing the Docker Image to a Registry

For Kubernetes to deploy your application, the container images must be accessible to all nodes in the cluster. This typically means storing your images in a container registry like Docker Hub, Google Container Registry, or a private registry.

After building the image locally, you push it to the registry: first authenticate with your credentials, then push the image layers to the remote server, where they are stored and made available to your Kubernetes nodes.
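With Docker Hub as an example registry, the sequence is:

```bash
# Authenticate against the registry (Docker Hub in this example)
docker login

# Push the previously built image; the name and tag are placeholders
docker push <your-registry-user>/django-todo:v1
```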

This step decouples image creation from deployment and allows multiple Kubernetes nodes to pull the same image independently, ensuring consistency and reliability.

Crafting Kubernetes Deployment Manifests

With the image available in a registry, you next define how Kubernetes should deploy and run the application using YAML manifests. These declarative files describe Kubernetes resources like Deployments and Services.

The Deployment manifest specifies important details, including the number of pod replicas to run, the container image to use, ports to expose, labels for selection, and update strategies. Kubernetes uses this information to manage pod lifecycle, ensure the desired number of replicas are running, and perform rolling updates without downtime.
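A minimal Deployment manifest along these lines might look as follows; the names, labels, image, and port are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-todo
  labels:
    app: django-todo
spec:
  replicas: 3                      # desired number of pod replicas
  selector:
    matchLabels:
      app: django-todo             # must match the pod template labels below
  template:
    metadata:
      labels:
        app: django-todo
    spec:
      containers:
        - name: django-todo
          image: <your-registry-user>/django-todo:v1   # image pushed to the registry earlier
          ports:
            - containerPort: 8000
```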

The Service manifest defines how the application is exposed to network traffic. This could be internally within the cluster (ClusterIP), on a node port, or externally via a LoadBalancer. Services provide stable IP addresses and DNS names for pods, allowing reliable communication between components and with external clients.
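A matching Service manifest could look like this; a NodePort is used here purely for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: django-todo
spec:
  type: NodePort                 # ClusterIP or LoadBalancer are equally valid choices
  selector:
    app: django-todo             # routes traffic to pods carrying this label
  ports:
    - port: 80                   # port exposed by the Service
      targetPort: 8000           # port the container listens on
```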

Writing these manifests carefully is critical as they control the behavior, availability, and accessibility of your application.

Applying Manifests to Deploy the Application

Once the manifests are ready, you use the kubectl apply command to send them to the Kubernetes API server. Kubernetes processes these manifests and starts creating the described resources.
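Assuming the manifests above are saved as deployment.yaml and service.yaml:

```bash
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
```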

The deployment controller begins scheduling pods on available nodes. Each pod pulls the container image from the registry and starts running the application. Kubernetes continuously monitors pod health, ensuring that if a pod crashes or a node fails, new pods are created to maintain the desired state.

The Service resource configures network routing and load-balances incoming requests across the healthy pods. This mechanism ensures high availability and scalability of your application.

Verifying Deployment Success

After applying the manifests, it is essential to verify that the application is deployed correctly and running smoothly. This involves inspecting the status of deployments, pods, and services using kubectl commands.

  • Checking deployment status reveals whether Kubernetes has created all requested replicas and if the pods are up to date.
  • Listing pods shows individual pod states, restarts, and readiness.
  • Describing pods and deployments gives insight into configuration details and events that may indicate problems.
  • Examining service endpoints and IP addresses confirms that the application is accessible as expected.

Together, these commands form a powerful toolkit to monitor deployment progress and diagnose issues quickly.
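In practice, the checks above map to commands such as (the deployment name is illustrative):

```bash
kubectl get deployments                    # desired vs. current vs. available replicas
kubectl get pods                           # per-pod status, restarts, and readiness
kubectl describe deployment django-todo    # configuration details and recent events
kubectl get services                       # service type, cluster IP, and ports
```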

Accessing the Application

Once the deployment and service are running, you can access your application. How you do this depends on the service type used.

  • For LoadBalancer services, your cloud provider typically assigns an external IP address. You can access the application via this IP on the specified port.
  • For NodePort services, Kubernetes exposes the service on a port on each node, which you can use to reach the application externally.
  • For local clusters, you might use kubectl port-forward to map a pod port to your local machine, enabling testing without exposing the app publicly.

Testing the application in this way confirms that your deployment is fully operational.
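For example, on a local cluster:

```bash
# Find the external IP or node port assigned to the Service
kubectl get service django-todo

# Or forward a local port directly to the Service for quick testing
kubectl port-forward service/django-todo 8080:80
# The application is then reachable at http://localhost:8080
```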

Understanding Rolling Updates During Deployment

When updating your application, Kubernetes uses rolling updates to minimize downtime. Instead of stopping all old pods at once, Kubernetes gradually replaces them with new pods running the updated container image.

This strategy ensures continuous availability. You can monitor the rollout process using kubectl rollout status deployment/<deployment-name>. If anything goes wrong, you can pause or roll back the update to the previous stable version.
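For example, to roll out a new image version and watch or revert the rollout (deployment and image names are illustrative):

```bash
# Update the container image used by the deployment
kubectl set image deployment/django-todo django-todo=<your-registry-user>/django-todo:v2

# Watch the rollout progress
kubectl rollout status deployment/django-todo

# Roll back to the previous revision if something goes wrong
kubectl rollout undo deployment/django-todo
```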

Rolling updates embody Kubernetes’ design philosophy of declarative, self-healing infrastructure.

Importance of Labels and Selectors

Labels are key-value pairs attached to Kubernetes objects. They are essential for grouping and selecting resources. In deployment manifests, labels identify pods belonging to a deployment, and selectors in services use these labels to route traffic properly.

Understanding how to design and use labels effectively helps maintain a clean and scalable Kubernetes environment. It also allows you to organize resources by environment, application version, or team.
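A few commands illustrate how labels are inspected and used for selection (the label keys and values are examples):

```bash
# Show the labels attached to each pod
kubectl get pods --show-labels

# Select only the pods that carry a given label
kubectl get pods -l app=django-todo

# Add or change a label on a running pod
kubectl label pod <pod-name> environment=staging
```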

Leveraging Namespaces for Isolation

Namespaces provide logical separation within a Kubernetes cluster. Deploying your application in a dedicated namespace helps isolate it from other workloads, improving security and resource management.

Namespaces enable running multiple instances of the same application without conflict, useful in multi-tenant or multi-environment clusters.
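For example, deploying the same manifests into a dedicated namespace:

```bash
# Create a dedicated namespace and deploy the application into it
kubectl create namespace todo-staging
kubectl apply -f deployment.yaml -f service.yaml -n todo-staging

# List resources scoped to that namespace
kubectl get pods -n todo-staging
```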

Automating Deployment with CI/CD Pipelines

For production environments, manual deployment is often replaced by continuous integration and continuous deployment (CI/CD) pipelines. These pipelines automate building container images, running tests, pushing images to registries, and applying Kubernetes manifests.

CI/CD automation reduces errors, accelerates release cycles, and improves consistency, helping teams deliver features and fixes faster.
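As a rough sketch of such a pipeline, a GitHub Actions workflow (one CI/CD option among many; the secrets, names, and registry are assumptions, and cluster authentication is omitted for brevity) might chain these steps:

```yaml
# .github/workflows/deploy.yaml -- illustrative sketch, not a production pipeline
name: build-and-deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build and push the container image
        run: |
          docker login -u "${{ secrets.REGISTRY_USER }}" -p "${{ secrets.REGISTRY_PASSWORD }}"
          docker build -t ${{ secrets.REGISTRY_USER }}/django-todo:${{ github.sha }} .
          docker push ${{ secrets.REGISTRY_USER }}/django-todo:${{ github.sha }}

      - name: Update the Kubernetes deployment
        # Assumes kubectl is already configured with access to the target cluster
        run: |
          kubectl set image deployment/django-todo \
            django-todo=${{ secrets.REGISTRY_USER }}/django-todo:${{ github.sha }}
```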

Deploying your first application on Kubernetes is a comprehensive process that covers cluster setup, containerization, image registry management, manifest creation, deployment, and verification. Each step builds your understanding of how Kubernetes orchestrates containers and manages application lifecycles.

Mastering this process empowers you to deploy scalable, resilient, and manageable applications that leverage Kubernetes’ powerful capabilities. As you gain experience, you can explore advanced topics like Helm charts, persistent storage, secrets management, and multi-cluster deployments.

Preparing the Application Container Image

The first step in deploying an application on Kubernetes is to create a container image that packages the application code along with its runtime environment and dependencies. This image acts as a portable unit that Kubernetes can run consistently on any node in the cluster.

Developers typically write a build definition such as a Dockerfile that sets up the environment, installs required software libraries, copies the application files, and specifies how to start the application. This ensures the application runs the same way regardless of where it is deployed.

Once the container image is prepared, it is built and stored in a container registry. This registry serves as a repository from which Kubernetes nodes can pull the image to run containers. Public registries like Docker Hub or private registries hosted by organizations are commonly used for this purpose.

Defining the Deployment Configuration

With the container image ready, the next step is to instruct Kubernetes how to run the application through a deployment configuration.

This configuration specifies important details such as the number of replicas of the application to run, the container image to use, and the ports the application listens on. It also includes labels to organize and identify pods, selectors that determine which pods belong to the deployment, and update strategies for rolling out new versions with minimal downtime.

The deployment configuration ensures that Kubernetes manages the application’s lifecycle, including starting, stopping, and scaling pods according to the defined desired state.

Exposing the Application to the Outside World

Once the application is deployed, it needs to be accessible either internally within the cluster or externally to users. Kubernetes achieves this by defining services that act as stable endpoints.

A service selects pods based on labels and provides a single IP address or DNS name to access them. This abstraction allows pods to be created or destroyed dynamically without affecting how users or other services reach the application.

Services can be configured to expose the application via different methods, including internal cluster networking, node ports, or cloud provider load balancers that distribute incoming traffic across multiple pod instances.

Deploying the Application and Verifying Status

Deploying an application on Kubernetes involves applying the configurations that define how the application should run within the cluster. This step transforms your manifest files and container images into live running workloads managed by Kubernetes. Successful deployment requires careful planning, precise configuration, and continuous monitoring to ensure the application runs reliably and efficiently.

Applying Deployment and Service Configurations

The primary method of deploying your application is by using the kubectl apply command, which applies the declarative configuration files you created. These YAML files specify the desired state for your application, including the number of replicas, container images, resource limits, and networking rules.

When you run the command to apply your deployment configuration, Kubernetes starts the process of creating the specified number of pod replicas. It schedules these pods on the available nodes in the cluster based on resource availability and constraints. Each pod runs one or more containers based on the container image defined in your manifest.

Similarly, applying the service configuration creates a stable network endpoint for your application. The service ensures that traffic is properly routed to the running pods, handling load balancing and service discovery within the cluster.

This approach—using declarative configuration files—enables Kubernetes to manage your application lifecycle declaratively. If any pods fail or nodes become unhealthy, Kubernetes automatically attempts to recreate the pods to maintain the desired state you specified.

Understanding the Deployment Process Internally

Behind the scenes, when you deploy an application, the Kubernetes control plane takes several steps to bring your application to life:

  • API Server Receives Request: The API Server accepts your deployment manifest and validates it. It stores the desired state in the cluster’s key-value store (etcd).
  • Scheduler Assigns Pods: The Scheduler looks at the new pods needing to run and selects appropriate worker nodes where these pods will be placed. The decision is based on resource availability, node taints and tolerations, affinity rules, and other scheduling policies.
  • Kubelet Starts Containers: Once a node is assigned, the Kubelet agent running on that node pulls the container image from the registry and starts the containers defined in the pod specification.
  • Networking Setup: The node’s network components and kube-proxy ensure that the pod is connected to the cluster network and that the pod can communicate with other services.
  • Health Checks: Kubernetes monitors pod health using liveness and readiness probes, restarting containers or marking pods as unavailable if health checks fail.

Understanding these internal steps helps troubleshoot deployment issues and optimize cluster performance.

Monitoring Deployment Status with Kubectl Commands

To verify the status of your deployment, Kubernetes offers a suite of kubectl commands that provide detailed information about the state of your application.

  • Checking Deployments: Running kubectl get deployments lists all deployments in the current namespace, showing how many replicas are desired, current, updated, and available. This helps confirm whether Kubernetes has successfully created the requested number of pods.
  • Listing Pods: The command kubectl get pods displays all pods running within the namespace, showing their current status, restarts, and age. Pod statuses like Running, Pending, or CrashLoopBackOff provide insights into pod health and potential problems.
  • Describing Resources: Using kubectl describe deployment <deployment-name> or kubectl describe pod <pod-name> provides detailed information about the resource’s configuration, events, and current state. This is useful to diagnose failures or misconfigurations.
  • Viewing Logs: Accessing pod logs with kubectl logs <pod-name> is critical for understanding application behavior and troubleshooting errors. You can also stream logs in real-time to monitor application activity.
  • Checking Services: The command kubectl get services shows all services running in the namespace, their types (ClusterIP, NodePort, LoadBalancer), and associated IP addresses or ports. This helps verify that your application is accessible through the intended endpoints.

Handling Deployment Failures and Troubleshooting

Deployments don’t always go as planned, especially when deploying complex applications or working in resource-constrained environments. Common issues include pods failing to start, containers crashing, or services not exposing the application correctly.

When you encounter problems, start troubleshooting by examining pod statuses and events with the describe command. Kubernetes events often indicate issues such as image pull errors, insufficient resources, or configuration problems.

Logs are invaluable for diagnosing runtime errors within your application. If a container repeatedly crashes (CrashLoopBackOff), logs will often reveal the root cause, whether it’s a missing dependency, configuration error, or runtime exception.

Resource limits and quotas can also affect deployments. Pods may fail to schedule if the cluster lacks sufficient CPU or memory. Reviewing resource usage and adjusting limits or scaling the cluster can resolve such issues.

If a service does not expose your application properly, verify the service selector matches pod labels, and confirm that networking policies or firewall rules are not blocking traffic.
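Typical troubleshooting commands for these situations include:

```bash
kubectl describe pod <pod-name>        # events: image pull errors, scheduling failures, probe failures
kubectl logs <pod-name>                # application output from the current container
kubectl logs <pod-name> --previous     # logs from the last crashed container (useful for CrashLoopBackOff)
kubectl get endpoints <service-name>   # empty endpoints usually mean the selector matches no pod labels
kubectl top pods                       # resource usage (requires the metrics-server add-on)
```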

Rolling Updates and Rollbacks

One of Kubernetes’ most powerful features is its ability to perform rolling updates, which allow you to update applications with zero downtime. When you change the container image version or deployment configuration, Kubernetes gradually replaces old pods with new ones while maintaining service availability.

By default, Kubernetes updates pods in a controlled manner based on the rollout strategy specified in the deployment manifest. You can configure parameters such as maxSurge (how many extra pods may be created above the desired count during an update) and maxUnavailable (how many pods may be taken down during an update) to fine-tune the process.
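In the Deployment manifest, these parameters sit under the rolling update strategy; a snippet of that section might look like this:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most one extra pod above the desired replica count during the update
      maxUnavailable: 0      # never take a pod down before its replacement is ready
```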

If an update introduces problems, Kubernetes allows you to roll back to a previous stable version quickly. The kubectl rollout undo deployment/<deployment-name> command reverts the deployment to its last known good state, minimizing disruptions.

Rolling updates and rollbacks help maintain application uptime and reliability, especially in production environments.

Scaling the Application

Scaling is a core capability in Kubernetes that allows you to adjust the number of pod replicas running your application to match demand.

You can manually scale a deployment using the kubectl scale deployment/<deployment-name> --replicas=<number> command, increasing or decreasing the number of pods instantly. Kubernetes ensures that the desired number of replicas is maintained by creating or terminating pods as needed.

For automated scaling, Kubernetes supports the Horizontal Pod Autoscaler (HPA), which adjusts the number of replicas dynamically based on observed metrics such as CPU usage or custom metrics. This allows your application to respond to fluctuating traffic patterns efficiently.
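For example, the deployment name and thresholds below are illustrative, and the HPA needs the metrics-server add-on to read CPU usage:

```bash
# Scale automatically between 2 and 10 replicas, targeting 70% average CPU utilization
kubectl autoscale deployment/django-todo --min=2 --max=10 --cpu-percent=70

# Inspect the autoscaler's current state
kubectl get hpa
```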

Proper scaling helps optimize resource usage and ensures your application remains responsive during traffic spikes.

Continuous Monitoring and Health Checks

Continuous monitoring of your deployment is essential for maintaining application performance and availability. Kubernetes uses health probes defined in the pod specification to monitor container health:

  • Liveness Probes: These checks determine if a container is alive or dead. If a liveness probe fails, Kubernetes restarts the container to recover from failures.
  • Readiness Probes: These probes check if a container is ready to serve traffic. Pods failing readiness probes are temporarily removed from service endpoints until they become healthy again.

Setting up these probes correctly is crucial for smooth deployment operations and avoiding downtime caused by unhealthy containers.
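Within a pod specification, probes are defined per container; a minimal snippet (the endpoint paths, port, and timings are illustrative) looks like this:

```yaml
containers:
  - name: django-todo
    image: <your-registry-user>/django-todo:v1
    livenessProbe:
      httpGet:
        path: /healthz           # illustrative health endpoint
        port: 8000
      initialDelaySeconds: 10
      periodSeconds: 15
    readinessProbe:
      httpGet:
        path: /ready             # illustrative readiness endpoint
        port: 8000
      initialDelaySeconds: 5
      periodSeconds: 10
```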

Beyond health probes, integrating monitoring tools like Prometheus, Grafana, or cloud provider solutions provides deeper insights into application performance, resource usage, and alerting for potential issues.

Best Practices for Deployment and Verification

To ensure successful deployment and ongoing reliability, consider these best practices:

  • Always test your deployment manifests in a staging or test environment before production.
  • Use version control for configuration files to track changes and enable rollback.
  • Automate deployment and verification processes using CI/CD pipelines.
  • Define clear resource requests and limits to avoid scheduling and performance issues.
  • Implement thorough logging and monitoring to detect and resolve issues early.
  • Use namespaces to isolate environments and organize resources logically.
  • Regularly update your Kubernetes cluster and tools to benefit from security patches and new features.

Following these practices helps maintain robust, scalable, and secure applications on Kubernetes.

This detailed overview of deploying applications and verifying their status provides a comprehensive understanding of what happens when your application moves from code and configuration to running workloads in Kubernetes. It emphasizes the importance of monitoring, troubleshooting, and managing deployments to maintain healthy and scalable applications.

Managing Applications with Kubectl and Dashboard

Managing running applications is a critical part of operating Kubernetes. There are two primary interfaces for management: the command-line tool Kubectl and the Kubernetes Dashboard, a web-based user interface.

Kubectl offers powerful commands to update application configurations, scale replicas, inspect logs, and troubleshoot running pods. It is favored by experienced users who prefer fine-grained control over their clusters.

The Kubernetes Dashboard provides a user-friendly way to deploy, monitor, and manage applications without using the command line. It offers visual insights into workloads, logs, and cluster resources, making it easier for administrators to operate Kubernetes.

Both methods allow for scaling applications up or down, rolling out updates, and maintaining the desired state with minimal downtime.

Service Discovery and Load Balancing in Kubernetes

Service discovery is a vital feature in Kubernetes that allows pods to find and communicate with each other efficiently. Since pods can be created and destroyed dynamically, their IP addresses change frequently. Kubernetes solves this problem by providing stable network identities through Services.

A Service acts as an abstraction that defines a logical set of pods and a policy to access them. When a Service is created, Kubernetes assigns it a unique IP address and a DNS name within the cluster. This stable endpoint allows other pods and components to connect without needing to track individual pod IPs.

Load balancing is integrated with Services to distribute network traffic evenly across the pods backing the Service. This helps to optimize resource usage, improve fault tolerance, and ensure high availability of applications.
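Inside the cluster, a Service named django-todo in the default namespace is reachable at a predictable DNS name, which you can check from a temporary pod (the names here are illustrative):

```bash
# Resolve the Service's cluster DNS name from a throwaway busybox pod
kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
  nslookup django-todo.default.svc.cluster.local
```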

Configuration Management in Kubernetes

Configuration management in Kubernetes involves defining and maintaining the desired state of resources such as deployments, services, and pods. This is typically done using declarative configuration files written in YAML format.

These configuration files specify details like the number of replicas, container images, resource limits, and networking rules. By applying these configurations to the cluster, Kubernetes continuously works to ensure that the actual state matches the desired state.

Several approaches exist for managing configurations, each suited to different use cases:

  • Replicate and Customize: This method involves copying existing configuration files and modifying them for different environments. While simple, it can be hard to maintain due to duplication.
  • Parameterized Templating: This approach uses reusable templates with parameters that can be customized. Tools like the Helm package manager automate the generation of deployment files for different scenarios, reducing repetition.
  • Overlay Configuration: Overlay techniques allow users to modify base configuration files by applying overlays for environment-specific settings. Kustomize is a tool that supports this layered approach to configuration management (see the sketch after this list).
  • Programmatic Configuration: This involves generating configurations dynamically through code using dedicated languages or general-purpose programming languages. It offers flexibility and automation for complex or large-scale deployments.
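As a small illustration of the overlay approach, a Kustomize layout might look like the following; the directory names and patch file are assumptions:

```yaml
# base/kustomization.yaml -- resources shared by every environment
resources:
  - deployment.yaml
  - service.yaml
```

```yaml
# overlays/production/kustomization.yaml -- environment-specific overrides
resources:
  - ../../base
patches:
  - path: replicas-patch.yaml   # e.g. raises spec.replicas for production
```

Running kubectl apply -k overlays/production then renders the base plus the overlay and applies the result to the cluster.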

Comparing Kubernetes with Other Container Orchestrators

Kubernetes stands out among container orchestration platforms for its scalability, rich ecosystem, and flexibility. Unlike simpler tools, Kubernetes provides advanced features like automated scaling, self-healing, rolling updates, and extensive network policies.

Other container orchestrators may focus on specific use cases or offer less complex management but might lack the broad community support and integrations that Kubernetes enjoys. Its open-source nature and cloud provider support make it a preferred choice for organizations managing modern containerized applications.

Understanding these differences helps teams select the right tool based on their requirements, balancing complexity, scalability, and ease of use.

Final Thoughts

Kubernetes has become the standard for container orchestration, offering powerful tools to deploy, scale, and manage containerized applications reliably. This tutorial covered its installation, architecture, key concepts, deployment processes, and management strategies.

Mastering Kubernetes opens up opportunities for developing and operating cloud-native applications with agility and efficiency. Continued learning through hands-on practice, exploring advanced features, and following emerging best practices will deepen your expertise.

Additional resources such as online courses, community forums, and official documentation can guide you toward becoming a Kubernetes expert, capable of leveraging this technology to meet modern application demands.