---
copyright:
lastupdated: "2024-09-19"
keywords: openshift, ocp, compliance, security standards, faq, openshift pricing, ocp pricing, openshift charges, ocp charges, openshift price, ocp price, openshift billing, ocp billing, openshift costs, ocp costs
subcollection: openshift
content-type: faq
---
{{site.data.keyword.attribute-definition-list}}
FAQs
{: #faqs}
Review frequently asked questions (FAQs) for using {{site.data.keyword.openshiftlong}}. {: shortdesc}
What is Kubernetes?
{: #kubernetes} {: faq}
Kubernetes is an open source platform for managing containerized workloads and services across multiple hosts. It offers management tools for deploying, automating, monitoring, and scaling containerized apps with minimal to no manual intervention. All containers that make up your microservice are grouped into pods, a logical unit for easy management and discovery. These pods run on compute hosts that are managed in a Kubernetes cluster that is portable, extensible, and self-healing in case of failures. {: shortdesc}
For more information about Kubernetes, see the Kubernetes documentation{: external}.
How do I create an {{site.data.keyword.openshiftlong_notm}} cluster?
{: #faq-cluster-create} {: faq} {: support}
To create an {{site.data.keyword.openshiftlong_notm}} cluster, first decide whether you want to follow a tutorial for a basic cluster setup or design your own cluster environment.
I want to follow a tutorial : Begin by reviewing the Getting started doc, then choose one of the available tutorials.
I want to design my own cluster environment : Begin by reviewing the Getting started doc, then create your cluster environment strategy.
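If you prefer the CLI, the flow for a basic classic cluster looks like the following sketch. All values (resource group, zone, flavor, worker count, and version) are example placeholders; check `ibmcloud oc versions` and `ibmcloud oc flavors --zone <zone>` for what is currently available in your account.

```sh
# Log in and target a resource group (example values throughout).
ibmcloud login
ibmcloud target -g default

# Create a classic cluster with 3 worker nodes in one zone.
ibmcloud oc cluster create classic \
  --name my-openshift-cluster \
  --zone dal10 \
  --flavor b3c.4x16 \
  --workers 3 \
  --version 4.14_openshift

# Watch the provisioning state until the cluster is ready.
ibmcloud oc cluster get --cluster my-openshift-cluster
```

Provisioning takes some time; rerun the `cluster get` command until the state shows that the cluster is ready.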
What is {{site.data.keyword.openshiftlong_notm}}?
{: #kubernetes_service} {: faq} {: support}
With {{site.data.keyword.openshiftlong_notm}}, you can create your own {{site.data.keyword.redhat_openshift_notm}} cluster to deploy and manage containerized apps on {{site.data.keyword.cloud_notm}}. Your containerized apps are hosted on IBM Cloud infrastructure compute hosts that are called worker nodes. You can choose to provision your compute hosts as virtual machines with shared or dedicated resources, or as bare metal machines that can be optimized for GPU and software-defined storage (SDS) usage. Your worker nodes are controlled by a highly available {{site.data.keyword.redhat_openshift_notm}} master that is configured, monitored, and managed by IBM. You can use the {{site.data.keyword.containerlong_notm}} API or CLI to work with your cluster infrastructure resources and the Kubernetes API or CLI to manage your deployments and services.
For more information about how your cluster resources are set up, see the Service architecture. To find a list of capabilities and benefits, see Benefits and service offerings.
What are the benefits of using {{site.data.keyword.openshiftlong_notm}}?
{: #faq_benefits} {: faq}
{{site.data.keyword.openshiftlong_notm}} is a managed {{site.data.keyword.redhat_openshift_notm}} offering that delivers powerful tools, an intuitive user experience, and built-in security for rapid delivery of apps that you can bind to cloud services that are related to {{site.data.keyword.ibmwatson}}, AI, IoT, DevOps, security, and data analytics. As a certified Kubernetes provider, {{site.data.keyword.openshiftlong_notm}} provides intelligent scheduling, self-healing, horizontal scaling, service discovery and load balancing, automated rollouts and rollbacks, and secret and configuration management. The service also has advanced capabilities around simplified cluster management, container security and isolation policies, the ability to design your own cluster, and integrated operational tools for consistency in deployment.
For a detailed overview of capabilities and benefits, see Benefits of using the service.
Which container platforms are available in {{site.data.keyword.cloud_notm}}?
{: #container_platforms} {: faq} {: support}
With {{site.data.keyword.cloud_notm}}, you can create clusters for your containerized workloads from two different container management platforms: the IBM version of community Kubernetes and {{site.data.keyword.openshiftlong_notm}}. The container platform that you select is installed on your cluster master and worker nodes. Later, you can update the version but can't roll back to a previous version or switch to a different container platform. If you want to use multiple container platforms, create a separate cluster for each.
For more information, see Comparison between {{site.data.keyword.redhat_openshift_notm}} and community Kubernetes clusters.
Kubernetes : Kubernetes{: external} is a production-grade, open source container orchestration platform that you can use to automate, scale, and manage your containerized apps that run on an Ubuntu operating system. With the {{site.data.keyword.containerlong_notm}} version, you get access to community Kubernetes API features that are considered beta or higher by the community. Kubernetes alpha features, which are subject to change, are generally not enabled by default. With Kubernetes, you can combine various resources such as secrets, deployments, and services to securely create and manage highly available, containerized apps.
{{site.data.keyword.redhat_openshift_notm}} : {{site.data.keyword.openshiftlong_notm}} is a Kubernetes-based platform that is designed especially to accelerate your containerized app delivery processes that run on a Red Hat Enterprise Linux operating system. You can orchestrate and scale your existing {{site.data.keyword.redhat_openshift_notm}} workloads across on-prem and off-prem clouds for a portable, hybrid solution that works the same in multicloud scenarios. To get started, try out the {{site.data.keyword.openshiftlong_notm}} tutorial.
Does the service come with a managed {{site.data.keyword.redhat_openshift_notm}} master and worker nodes?
{: #managed_master_worker} {: faq}
Every cluster in {{site.data.keyword.openshiftlong_notm}} is controlled by a dedicated {{site.data.keyword.redhat_openshift_notm}} master that is managed by IBM in an IBM-owned {{site.data.keyword.cloud_notm}} infrastructure account. The {{site.data.keyword.redhat_openshift_notm}} master, including all the master components, compute, networking, and storage resources, is continuously monitored by IBM Site Reliability Engineers (SREs). The SREs apply the latest security standards, detect and remediate malicious activities, and work to ensure reliability and availability of {{site.data.keyword.openshiftlong_notm}}.
Periodically, {{site.data.keyword.redhat_openshift_notm}} releases major, minor, or patch updates. These updates can affect the {{site.data.keyword.redhat_openshift_notm}} API server version or other components in your {{site.data.keyword.redhat_openshift_notm}} master. IBM automatically updates the patch version, but you must update the master major and minor versions. For more information, see Updating the master.
Worker nodes in standard clusters are provisioned into your {{site.data.keyword.cloud_notm}} infrastructure account. The worker nodes are dedicated to your account, and you are responsible for requesting timely updates so that the worker node OS and {{site.data.keyword.openshiftlong_notm}} components apply the latest security updates and patches. Security updates and patches are made available by IBM Site Reliability Engineers (SREs), who continuously monitor the Linux image that is installed on your worker nodes to detect vulnerabilities and security compliance issues. For more information, see Updating worker nodes.
Why do my worker nodes have both master and worker roles?
{: #flavor-master-role}
When you run `oc get nodes` or `oc describe node <worker_node>`, you might see that the worker nodes have `master,worker` roles. In OpenShift Container Platform clusters, operators use the master role as a `nodeSelector` so that OCP can deploy default components that are controlled by operators, such as the internal registry, in your cluster. No master node processes, such as the API server or Kubernetes scheduler, run on your worker nodes. For more information about master and worker node components, see {{site.data.keyword.redhat_openshift_notm}} architecture.
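For example, listing the nodes of a cluster shows both roles on each worker node. The output below is illustrative only; node names, ages, and versions vary by cluster.

```sh
oc get nodes
# Illustrative output:
# NAME           STATUS   ROLES           AGE   VERSION
# 10.176.48.67   Ready    master,worker   12d   v1.27.6
# 10.176.48.79   Ready    master,worker   12d   v1.27.6
```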
Which workloads can I move to the cloud?
{: #move_to_cloud}
The following table provides some examples of the types of workloads that users typically move to the various types of clouds. You might also choose a hybrid approach where you have clusters that run in both environments. {: shortdesc}
Workload | {{site.data.keyword.containershort_notm}} off-prem | on-prem |
---|---|---|
DevOps enablement tools | Yes | |
Developing and testing apps | Yes | |
Apps have major shifts in demand and need to scale rapidly | Yes | |
Business apps such as CRM, HCM, ERP, and E-commerce | Yes | |
Collaboration and social tools such as email | Yes | |
RHEL workloads | Yes | |
Bare metal | Yes | Yes |
GPU compute resources | Yes | Yes |
PCI and HIPAA-compliant workloads | Yes | Yes |
Legacy apps with platform and infrastructure constraints and dependencies | Yes | |
Proprietary apps with strict designs, licensing, or heavy regulations | Yes | |
{: caption="{{site.data.keyword.cloud_notm}} implementations support your workloads" caption-side="bottom"}
Ready to run workloads off-premises in {{site.data.keyword.openshiftlong_notm}}? : Great! You're already in the public cloud documentation. Keep reading for more strategy ideas, or hit the ground running by creating a cluster now.
Want to run workloads in both on-premises and off-premises clouds? : Explore {{site.data.keyword.satellitelong_notm}} to extend the flexibility and scalability of {{site.data.keyword.cloud_notm}} into your on-premises, edge, or other cloud provider environments.
How can I package my infrastructure for consistent deployments across environments?
{: #infra_packaging}
If you want to run your app in multiple clusters, public and private environments, or even multiple cloud providers, you might wonder how you can make your deployment strategy work across these environments.
You can use the open source Terraform tool to automate the provisioning of {{site.data.keyword.cloud_notm}} infrastructure, including Kubernetes clusters. Follow along with this tutorial to create single and multizone Kubernetes and OpenShift clusters. After you create a cluster, you can also set up the {{site.data.keyword.openshiftlong_notm}} cluster autoscaler so that your worker pool scales up and down worker nodes in response to your workload's resource requests.
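As a minimal sketch, a single-zone {{site.data.keyword.redhat_openshift_notm}} cluster on VPC infrastructure might be declared as follows with the IBM Cloud Terraform provider. The resource and argument names follow the provider documentation, but all IDs, names, and versions here are placeholders that you supply from your own environment; check the provider reference before relying on this shape.

```hcl
# Illustrative only: an OpenShift cluster on VPC Gen 2 infrastructure.
resource "ibm_container_vpc_cluster" "cluster" {
  name             = "my-openshift-cluster"
  vpc_id           = var.vpc_id
  kube_version     = "4.14_openshift"  # check supported versions first
  flavor           = "bx2.4x16"
  worker_count     = 3
  cos_instance_crn = var.cos_crn       # object storage for the internal registry

  zones {
    subnet_id = var.subnet_id
    name      = "us-south-1"
  }
}
```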
What kinds of apps can I run in {{site.data.keyword.openshiftlong_notm}}?
{: #app_kinds_dev}
Your containerized app must be able to run on one of the supported operating systems for your cluster version. You also want to consider the statefulness of your app. For more information about the kinds of apps that can run in {{site.data.keyword.openshiftlong_notm}}, see Planning app deployments.
If you already have an app, you can migrate it to {{site.data.keyword.openshiftlong_notm}}. If you want to develop a new app, check out the guidelines for developing stateless, cloud-native apps.
How can I run serverless apps and jobs?
{: #apps_serverless-strategy}
You can run serverless apps and jobs through the {{site.data.keyword.codeenginefull_notm}} service. {{site.data.keyword.codeengineshort}} can also build your images for you. {: shortdesc}
What knowledge and technical skills do I need to run clusters and apps?
{: #knowledge_skills}
{{site.data.keyword.redhat_openshift_notm}} is designed to provide capabilities to two main personas, the cluster admin and the app developer. Each persona uses different technical skills to successfully run and deploy apps to a cluster. {: shortdesc}
What are a cluster admin's main tasks and technical knowledge? : As a cluster admin, you are responsible to set up, operate, secure, and manage the {{site.data.keyword.cloud_notm}} infrastructure of your cluster. Typical tasks include:
- Size the cluster to provide enough capacity for your workloads.
- Design a cluster to meet the high availability, disaster recovery, and compliance standards of your company.
- Secure the cluster by setting up user permissions and limiting actions within the cluster to protect your compute resources, your network, and data.
- Plan and manage network communication between infrastructure components to ensure network security, segmentation, and compliance.
- Plan persistent storage options to meet data residency and data protection requirements.
The cluster admin persona must have a broad knowledge that includes compute, network, storage, security, and compliance. In a typical company, this knowledge is spread across multiple specialists, such as System Engineers, System Administrators, Network Engineers, Network Architects, IT Managers, or Security and Compliance Specialists. Consider assigning the cluster admin role to multiple people in your company so that you have the required knowledge to successfully operate your cluster.
What are an app developer's main tasks and technical skills? : As a developer, you design, create, secure, deploy, test, run, and monitor cloud-native, containerized apps in an {{site.data.keyword.redhat_openshift_notm}} cluster. To create and run these apps, you must be familiar with the concept of microservices, the 12-factor app guidelines, Docker and containerization principles{: external}, and available {{site.data.keyword.redhat_openshift_notm}} deployment options.
{{site.data.keyword.redhat_openshift_notm}} and {{site.data.keyword.openshiftlong_notm}} provide multiple options for how to expose an app and keep an app private, add persistent storage, integrate other services, and how you can secure your workloads and protect sensitive data. Before you move your app to a cluster in {{site.data.keyword.openshiftlong_notm}}, verify that you can run your app as a containerized app on the supported operating system and that {{site.data.keyword.redhat_openshift_notm}} and {{site.data.keyword.openshiftlong_notm}} provide the capabilities that your workload needs.
Do cluster administrators and developers interact with each other? : Yes. Cluster administrators and developers must interact frequently so that cluster administrators understand workload requirements to provide this capability in the cluster, and so that developers know about available limitations, integrations, and security principles that they must consider in their app development process.
How can I secure my cluster?
{: #secure_cluster} {: faq} {: support}
You can use built-in security features in {{site.data.keyword.openshiftlong_notm}} to protect the components in your cluster, your data, and app deployments to ensure security compliance and data integrity. Use these features to secure your {{site.data.keyword.redhat_openshift_notm}} API server, etcd data store, worker node, network, storage, images, and deployments against malicious attacks. You can also leverage built-in logging and monitoring tools to detect malicious attacks and suspicious usage patterns.
For more information about the components of your cluster and how you can meet security standards for each component, see Security for {{site.data.keyword.openshiftlong_notm}}.
How can I manage user access to my cluster?
{: #faq_access} {: faq} {: support}
{{site.data.keyword.openshiftlong_notm}} uses {{site.data.keyword.iamshort}} (IAM) to grant access to cluster resources through IAM platform access roles and Kubernetes role-based access control (RBAC) policies through IAM service access roles. For more information about types of access policies, see Pick the correct access policy and role for your users.
What permissions do I need to create a cluster?
{: #what-perms-api-key}
At a minimum, the Administrators or Compliance Management roles have permissions to create a cluster. However, you might need additional permissions for other services and integrations that you use in your cluster. For more information, see Permissions to create a cluster.
To check a user's permissions, review the access policies and access groups of the user in the {{site.data.keyword.cloud_notm}} console{: external}, or use the `ibmcloud iam user-policies <user>` command.
If the API key is based on one user, how are other cluster users in the region and resource group affected? {: #apikey-users}
Other users within the region and resource group of the account share the API key for accessing the infrastructure and other services with {{site.data.keyword.openshiftlong_notm}} clusters. When users log in to the {{site.data.keyword.cloud_notm}} account, an {{site.data.keyword.cloud_notm}} IAM token that is based on the API key is generated for the CLI session and enables infrastructure-related commands to be run in a cluster.
What happens if the user who set the API key leaves the organization?
{: #apikey-user-leaves}
If the user is leaving your organization, the {{site.data.keyword.cloud_notm}} account owner can remove that user's permissions. However, before you remove a user's specific access permissions or remove a user from your account completely, you must reset the API key with another user's infrastructure credentials. Otherwise, the other users in the account might lose access to the IBM Cloud infrastructure portal and infrastructure-related commands might fail. For more information, see Removing user permissions.
What do I do if the API key is compromised?
{: #apikey-lockdown}
If an API key that is set for a region and resource group in your cluster is compromised, delete it so that no further calls can be made by using the API key as authentication. For more information about securing access to the Kubernetes API server, see the Kubernetes API server and etcd security topic.
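For example, you might delete the compromised key and then reset the cluster API key for the region and resource group while logged in with an authorized user's credentials. The key name below is a placeholder.

```sh
# List the API keys in the account and delete the compromised one.
ibmcloud iam api-keys
ibmcloud iam api-key-delete my-compromised-key

# Reset the API key that clusters in this region and resource group use.
ibmcloud oc api-key reset --region us-south
```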
How do I rotate the cluster API key?
{: #faq_api_key_leak} {: faq} {: support}
For instructions on how to rotate your API key, see How do I rotate the cluster API key in the event of a leak?.
Where can I find information about security vulnerabilities that affect my cluster?
{: #faq_security_bulletins} {: faq} {: support}
If vulnerabilities are found in {{site.data.keyword.redhat_openshift_notm}}, {{site.data.keyword.redhat_openshift_notm}} releases CVEs in security bulletins to inform users and to describe the actions that users must take to remediate the vulnerability. {{site.data.keyword.redhat_openshift_notm}} security bulletins that affect {{site.data.keyword.openshiftlong_notm}} users or the {{site.data.keyword.cloud_notm}} platform are published in the {{site.data.keyword.cloud_notm}} security bulletin.
Some CVEs require the latest patch update for a version that you can install as part of the regular cluster update process in {{site.data.keyword.openshiftlong_notm}}. Make sure to apply security patches in time to protect your cluster from malicious attacks. For more information about what is included in a security patch, refer to the version change log.
Can I use bare metal or GPU worker nodes?
{: #bare_metal_gpu} {: faq}
Certain VPC worker node flavors offer GPU support. For more information, see the VPC flavors. {: tip}
Yes, you can provision your worker node as a single-tenant physical bare metal server. Bare metal servers come with high-performance benefits for workloads such as data, GPU, and AI. Additionally, all the hardware resources are dedicated to your workloads, so you don't have to worry about "noisy neighbors".
For more information about available bare metal flavors and how bare metal is different from virtual machines, see the planning guidance.
What is the smallest cluster that I can create?
{: #smallest_cluster} {: faq}
Note that running the smallest possible cluster does not meet the service level agreement (SLA) to receive support. Also, keep in mind that some services, such as Ingress, require highly available worker node setups. You might not be able to run these services or your apps in clusters with only two nodes in a worker pool. For more information, see Planning your cluster for high availability. {: important}
Classic or VPC clusters : Clusters must always have at least 2 worker nodes. Note that you can't have a cluster with 0 worker nodes, and you can't power off or suspend billing for your worker nodes.
{{site.data.keyword.satelliteshort}} clusters : Clusters can be created using the single-replica topology, which means only 1 worker node. Note that if you create a {{site.data.keyword.satelliteshort}} cluster using single-replica topology, you can't add worker nodes later.
Which {{site.data.keyword.redhat_openshift_notm}} versions does the service support?
{: #supported_kube_versions} {: faq} {: support}
{{site.data.keyword.openshiftlong_notm}} concurrently supports multiple versions of {{site.data.keyword.redhat_openshift_notm}}. When a new version (n) is released, versions up to 2 behind (n-2) are supported. Versions more than 2 behind the latest (n-3) are first deprecated and then unsupported.
For more information about supported versions and update actions that you must take to move from one version to another, see the {{site.data.keyword.openshiftshort}} version information.
Which worker node operating systems are supported?
{: #supported_os_versions} {: faq} {: support}
For a list of supported worker node operating systems by cluster version, see the {{site.data.keyword.openshiftshort}} version information.
Where is the service available?
{: #supported_regions} {: faq}
{{site.data.keyword.openshiftlong_notm}} is available worldwide. You can create clusters in every supported {{site.data.keyword.openshiftlong_notm}} region.
For more information about supported regions, see Locations.
Is the service highly available, and does an SLA apply?
{: #ha_sla} {: faq}
Yes. By default, {{site.data.keyword.openshiftlong_notm}} sets up many components such as the cluster master with replicas, anti-affinity, and other options to increase the high availability (HA) of the service. You can increase the redundancy and failure toleration of your cluster worker nodes, storage, networking, and workloads by configuring them in a highly available architecture. For an overview of the default setup and your options to increase HA, see Creating a highly available cluster strategy.
For the latest HA service level agreement terms, refer to the {{site.data.keyword.cloud_notm}} terms of service. Generally, the SLA availability terms require that when you configure your infrastructure resources in an HA architecture, you must distribute them evenly across three different availability zones. For example, to receive full HA coverage under the SLA terms, you must set up a multizone cluster with a total of at least 6 worker nodes, two worker nodes per zone that are evenly spread across three zones.
{: #mz-cluster-faq}
How is the master set up in a multizone cluster?
{: #mz-master-setup}
When you create a cluster in a multizone location, a highly available master is automatically deployed and three replicas are spread across the zones of the metro. For example, if the cluster is in the `dal10`, `dal12`, or `dal13` zones, the replicas of the master are spread across each zone in the Dallas multizone metro.
How do the master and the worker nodes communicate across zones?
{: #mz-master-communication}
If you created a VPC multizone cluster, the subnets in each zone are automatically set up with Access Control Lists (ACLs) that allow communication between the master and the worker nodes across zones. In classic clusters, if you have multiple VLANs for your cluster, multiple subnets on the same VLAN, or a multizone classic cluster, you must enable a Virtual Router Function (VRF) for your IBM Cloud infrastructure account so that your worker nodes can communicate with each other on the private network. To enable VRF, see Enabling VRF. To check whether a VRF is already enabled, use the `ibmcloud account show` command. If you can't or don't want to enable VRF, enable VLAN spanning. To perform this action, you need the Network > Manage Network VLAN Spanning infrastructure permission, or you can request the account owner to enable it. To check whether VLAN spanning is already enabled, use the `ibmcloud oc vlan spanning get --region <region>` command.
Can I convert my single zone cluster to a multizone cluster?
{: #convert-sz-to-mz}
To convert a single zone cluster to a multizone cluster, your cluster must be set up in a location that has more than one availability zone.
- VPC clusters can be set up only in multizone regions, and as such can always be converted from a single zone cluster to a multizone cluster. For more information, see Adding worker nodes to VPC clusters.
- Classic clusters that are set up in data centers with only one zone can't be converted to a multizone cluster. For more information, see Adding worker nodes to Classic clusters.
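For a classic cluster in a multizone metro, converting to multizone amounts to adding zones to the worker pool, as in the following sketch with example cluster and zone names. Depending on your account setup, you might also need to pass VLAN options for the new zones.

```sh
# Spread the default worker pool across two more zones of the Dallas metro.
ibmcloud oc zone add classic --cluster my-cluster \
  --worker-pool default --zone dal12
ibmcloud oc zone add classic --cluster my-cluster \
  --worker-pool default --zone dal13
```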
Can I set up clusters in multiple regions?
{: #multiple-regions-setup}
You can set up multiple clusters in different regions of one geolocation (such as US South and US East) or across geolocations (such as US South and EU Central). Both setups offer the same level of availability for your app, but also add complexity when it comes to data sharing and data replication. For most cases, staying within the same geolocation is sufficient. But if you have users across the world, it might be better to set up a cluster where your users are so that your users don't experience long waiting times when they send a request to your app.
How can I load balance workloads across multiple clusters?
{: #multiple-cluster-lb-options}
To load balance workloads across multiple clusters, you must make your apps available on the public network by using Ingress, routers, or Network Load Balancers (NLBs). The router services and NLBs are assigned a public IP address that you can use to access your apps.
To load balance workloads across your apps, add the public IP addresses of your router services and NLBs to a CIS global load balancer or your own global load balancer.
Can I use a global load balancer on the private network?
{: #glb-private}
{{site.data.keyword.cloud_notm}} does not offer a global load balancer service on the private network. However, you can connect your cluster to a private load balancer that you host in your on-prem network by using one of the supported VPN options. Make sure to expose your apps on the private network by using Ingress, routers, or Network Load Balancers (NLBs), and use the private IP address in your VPN settings to connect your app to your on-prem network.
How can I make sure that my cluster and apps are highly available?
{: #faq_ha} {: faq}
The {{site.data.keyword.openshiftlong_notm}} architecture and infrastructure are designed to ensure reliability, low processing latency, and maximum uptime of the service. By default, every cluster in {{site.data.keyword.openshiftlong_notm}} is set up with multiple {{site.data.keyword.redhat_openshift_notm}} master instances to ensure availability and accessibility of your cluster resources, even if one or more instances of your {{site.data.keyword.redhat_openshift_notm}} master are unavailable.
You can make your cluster even more highly available and protect your app from a downtime by spreading your workloads across multiple worker nodes in multiple zones of a region. This setup is called a multizone cluster and ensures that your app is accessible, even if a worker node or an entire zone is not available.
To protect against an entire region failure, create multiple clusters and spread them across {{site.data.keyword.cloud_notm}} regions. By setting up a network load balancer (NLB) for your clusters, you can achieve cross-region load balancing and cross-region networking for your clusters.
If you have data that must be available, even if an outage occurs, make sure to store your data on persistent storage.
For more information about how to achieve high availability for your cluster, see High availability for {{site.data.keyword.openshiftlong_notm}}.
If I have a multizone cluster, are my apps automatically highly available?
{: #multizone-apps-faq}
It depends on how you set up the app. See Planning highly available deployments and Planning highly available persistent storage.
Are my worker nodes encrypted?
{: #encrypted-flavors}
The secondary disk of the worker node is encrypted. For more information, see Overview of cluster encryption. After you create a worker pool, you might notice that the worker node flavor has `.encrypted` in the name, such as `b3c.4x16.encrypted`.
What compliance standards does the service meet?
{: #standards} {: faq}
{{site.data.keyword.cloud_notm}} is built by following many data, finance, health, insurance, privacy, security, technology, and other international compliance standards. For more information, see {{site.data.keyword.cloud_notm}} compliance.
To view detailed system requirements, you can run a software product compatibility report for {{site.data.keyword.openshiftlong_notm}}{: external}. Note that compliance depends on the underlying infrastructure provider for the cluster worker nodes, networking, and storage resources.
Classic infrastructure: {{site.data.keyword.openshiftlong_notm}} implements controls commensurate with the following security standards:
- EU-US Privacy Shield and Swiss-US Privacy Shield Framework
- Health Insurance Portability and Accountability Act (HIPAA)
- Service Organization Control standards (SOC 1 Type 2, SOC 2 Type 2)
- International Standard on Assurance Engagements 3402 (ISAE 3402), Assurance Reports on Controls at a Service Organization
- International Organization for Standardization (ISO 27001, ISO 27017, ISO 27018)
- Payment Card Industry Data Security Standard (PCI DSS)
VPC infrastructure: {{site.data.keyword.openshiftlong_notm}} implements controls commensurate with the following security standards:
- EU-US Privacy Shield and Swiss-US Privacy Shield Framework
- Health Insurance Portability and Accountability Act (HIPAA)
- International Standard on Assurance Engagements 3402 (ISAE 3402), Assurance Reports on Controls at a Service Organization
{{site.data.keyword.satelliteshort}}: See the {{site.data.keyword.satellitelong_notm}} documentation.
What services can I integrate into my cluster?
{: #faq_integrations} {: faq}
You can add {{site.data.keyword.cloud_notm}} platform and infrastructure services as well as services from third-party vendors to your {{site.data.keyword.openshiftlong_notm}} cluster to enable automation, improve security, or enhance your monitoring and logging capabilities in the cluster.
For a list of supported services, see Integrating services.
How do I manage Cloud Paks in my cluster?
{: #cloud_pak_manage}
Cloud Paks are integrated with the {{site.data.keyword.cloud_notm}} catalog so that you can quickly configure and install all the Cloud Pak components into an existing or new {{site.data.keyword.redhat_openshift_notm}} cluster. When you install the Cloud Pak, the Cloud Pak is provisioned with {{site.data.keyword.bpshort}} and a {{site.data.keyword.bpshort}} workspace is created for you. You can use the workspace later to access information about your Cloud Pak installation. You access your Cloud Pak services from the Cloud Pak URL. For more information, consult the Cloud Pak documentation{: external}.
Can I use the {{site.data.keyword.redhat_openshift_notm}} entitlement that comes with my Cloud Pak for my cluster?
{: #cloud_pak_byo_entitlement}
Yes, if your Cloud Pak includes an entitlement to run certain worker node flavors that are installed with OpenShift Container Platform. To view your entitlements, check in IBM Passport Advantage{: external}. Note that your {{site.data.keyword.cloud_notm}} ID must match your IBM Passport Advantage ID.
You can create the cluster or the worker pool within an existing cluster with the Cloud Pak entitlement in the console or by using the `--entitlement ocp_entitled` option in the `ibmcloud oc cluster create classic` or `ibmcloud oc worker-pool create classic` CLI commands. Make sure to specify the correct number and flavor of worker nodes that you are entitled to use.
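As a sketch with example values, an entitled worker pool might be created like this; size the pool to exactly match your entitlement.

```sh
# Create a worker pool that applies the Cloud Pak OCP entitlement.
ibmcloud oc worker-pool create classic \
  --name entitled-pool \
  --cluster my-cluster \
  --flavor b3c.4x16 \
  --size-per-zone 2 \
  --entitlement ocp_entitled
```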
Do not exceed your entitlement. Keep in mind that your OpenShift Container Platform entitlements can be used with other cloud providers or in other environments. To avoid billing issues later, make sure that you use only what you are entitled to use. For example, you might have an entitlement for the OCP licenses for two worker nodes of 4 CPU and 16 GB memory, and you create this worker pool with two worker nodes of 4 CPU and 16 GB memory. You used your entire entitlement, and you can't use the same entitlement for other worker pools, cloud providers, or environments. {: important}
Can I install multiple Cloud Paks in the same cluster?
{: #cloud_pak_multiple}
Yes, but you might need to add more worker nodes so that each Cloud Pak has enough compute resources to run. Additionally, some Cloud Paks, such as Cloud Pak for Data, can be installed only once per cluster, while others, such as Cloud Pak for Automation, support multiple instances in different projects in the same cluster. For sizing information, consult the Cloud Pak documentation{: external}.
What is included in a Cloud Pak?
{: #cloud_pak_included}
Cloud Paks are bundled, licensed, containerized software that is optimized to work together for enterprise use cases, including consistent deployment, access control, and billing. You can flexibly use parts of the Cloud Paks when you need them by choosing the correct mix of virtual processor cores of the software to suit your workloads. You can also change the mix of virtual processor cores as your workloads evolve.
Depending on the Cloud Pak, you get licensed IBM and open source software bundled together in a unified management experience with logging, monitoring, security, and access capabilities.
- IBM products: Cloud Paks extend licensed IBM software and middleware from IBM Marketplace{: external}, and integrate these products with your cluster to modernize, optimize, and run hybrid cloud workloads.
- Open-source software: Cloud Paks might also include open source components for cloud-native and portable hybrid cloud solutions. Typically, open source software is unmanaged, and you are responsible for keeping your components up-to-date and secure. However, Cloud Paks help you consistently manage the entire lifecycle of the Cloud Pak components and the workloads that you run with them. Because the open source software is bundled together with the Cloud Pak, you get the benefits of IBM support and integration with select {{site.data.keyword.cloud_notm}} features such as access control and billing.
To see the components of each Cloud Pak, consult the Cloud Pak documentation{: external}.
Are there other considerations when I work with Cloud Paks?
{: #cloud_paks_other}
When you set up your Cloud Pak, you might need to work with {{site.data.keyword.redhat_openshift_notm}}-specific resources, such as security context constraints. Make sure that you use the `oc` CLI or the `kubectl` CLI version 1.12 to interact with these resources, such as `oc get scc`. The `kubectl` CLI version 1.11 has a bug that yields an error when you run commands against {{site.data.keyword.redhat_openshift_notm}}-specific resources, such as `kubectl get scc`.
What is the {{site.data.keyword.cloud_notm}} policy for third-party and open source software?
{: #faq_thirdparty_oss} {: faq}
See the IBM Open Source and Third Party policy{: external}.
How am I charged for using the service?
{: #charges} {: faq}
See Managing costs for your clusters.
Can I downgrade my cluster to a previous version?
{: #downgrade} {: faq}
No, you cannot downgrade your cluster to a previous version.
Can I move my cluster to a different account?
{: #migrate-cluster-account} {: faq}
No, you cannot move a cluster to a different account from the one it was created in.
Why do I need to update my cluster?
{: #updating_kube}
- Make sure that your cluster always runs a supported {{site.data.keyword.redhat_openshift_notm}} version.
- When a new {{site.data.keyword.redhat_openshift_notm}} minor version is released, an older version is deprecated shortly after and then becomes unsupported.
For more information, see Updating the master and worker nodes.
What happens if my worker node operating system is unsupported?
{: #unsupported_os}
The following operations are blocked when an operating system is unsupported:
- worker reload
- worker replace without update
- worker replace with update
- worker update
- worker pool create (with an unsupported OS)
- worker pool rebalance
- worker pool resize (scale up)
- worker pool zone add
- instance group resize (patch)
- autoscaler remove worker (v2/autoscalerRemoveWorker)