ceph.rook.io / v1 / CephCluster
- string
.apiVersion
APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
- string
.kind
Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
- object required
.metadata
- object required
.spec
ClusterSpec represents the specification of Ceph Cluster
- object | null
.spec .annotations
The annotations-related configuration to add/set on each Pod related object.
- object | null
.spec .cephConfig
Ceph Config options
- object | null
.spec .cephConfigFromSecret
CephConfigFromSecret works exactly like CephConfig but takes config values from a Secret key reference.
- object | null
.spec .cephVersion
The version information that instructs Rook to orchestrate a particular version of Ceph.
- boolean
.spec .cephVersion .allowUnsupported
Whether to allow unsupported versions (do not set to true in production)
- string
.spec .cephVersion .image
Image is the container image used to launch the ceph daemons, such as quay.io/ceph/ceph:&lt;tag&gt;. The full list of images can be found at https://quay.io/repository/ceph/ceph?tab=tags
- string
.spec .cephVersion .imagePullPolicy
ImagePullPolicy describes a policy for if/when to pull a container image. One of Always, Never, IfNotPresent.
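Taken together, the cephVersion fields above map to a manifest snippet like the following; the image tag and pull policy shown are illustrative values, not defaults:

```yaml
spec:
  cephVersion:
    # Illustrative tag; choose a Ceph release supported by your Rook version
    image: quay.io/ceph/ceph:v18.2.2
    allowUnsupported: false
    imagePullPolicy: IfNotPresent
```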
- object | null
.spec .cleanupPolicy
Indicates user intent when deleting a cluster; blocks orchestration and should not be set if cluster deletion is not imminent.
- boolean
.spec .cleanupPolicy .allowUninstallWithVolumes
AllowUninstallWithVolumes defines whether we can proceed with the uninstall if there are RBD images still present
- string | null
.spec .cleanupPolicy .confirmation
Confirmation represents the cleanup confirmation
- object | null
.spec .cleanupPolicy .sanitizeDisks
SanitizeDisks represents the way we sanitize disks
- string
.spec .cleanupPolicy .sanitizeDisks .dataSource
DataSource is the data source to use to sanitize the disk
- integer
.spec .cleanupPolicy .sanitizeDisks .iteration
Iteration is the number of passes to apply when sanitizing
- string
.spec .cleanupPolicy .sanitizeDisks .method
Method is the method we use to sanitize disks
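A sketch of the cleanupPolicy fields above as they would appear in a manifest; the confirmation string is the value Rook documents for this field, and the sanitizeDisks values are illustrative:

```yaml
spec:
  cleanupPolicy:
    # Only set this when cluster deletion is imminent; it blocks orchestration
    confirmation: yes-really-destroy-data
    allowUninstallWithVolumes: false
    sanitizeDisks:
      method: quick        # quick or complete
      dataSource: zero     # zero or random
      iteration: 1         # number of sanitizing passes
```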
- boolean
.spec .continueUpgradeAfterChecksEvenIfNotHealthy
ContinueUpgradeAfterChecksEvenIfNotHealthy defines if an upgrade should continue even if PGs are not clean
- object | null
.spec .crashCollector
A spec for the crash controller
- integer
.spec .crashCollector .daysToRetain
DaysToRetain represents the number of days to retain crash reports before they get pruned
- boolean
.spec .crashCollector .disable
Disable determines whether the crash collector should be disabled
- object
.spec .csi
CSI Driver Options applied per cluster.
- object
.spec .csi .cephfs
CephFS defines CSI Driver settings for CephFS driver.
- string
.spec .csi .cephfs .fuseMountOptions
FuseMountOptions defines the mount options for ceph fuse mounter.
- string
.spec .csi .cephfs .kernelMountOptions
KernelMountOptions defines the mount options for kernel mounter.
- object
.spec .csi .readAffinity
ReadAffinity defines the read affinity settings for CSI driver.
- array
.spec .csi .readAffinity .crushLocationLabels
CrushLocationLabels defines which node labels to use as CRUSH location. This should correspond to the values set in the CRUSH map.
- boolean
.spec .csi .readAffinity .enabled
Enables read affinity for CSI driver.
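The csi settings above can be combined as below; the topology label keys and the CephFS mount option are illustrative, and crushLocationLabels must correspond to values actually present in the CRUSH map:

```yaml
spec:
  csi:
    readAffinity:
      enabled: true
      # Node labels used as CRUSH location; must match the CRUSH map values
      crushLocationLabels:
        - topology.kubernetes.io/zone
        - topology.kubernetes.io/region
    cephfs:
      # Illustrative kernel mount option
      kernelMountOptions: ms_mode=secure
```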
- object | null
.spec .dashboard
Dashboard settings
- boolean
.spec .dashboard .enabled
Enabled determines whether to enable the dashboard
- integer
.spec .dashboard .port
Port is the dashboard webserver port
- string
.spec .dashboard .prometheusEndpoint
Endpoint for the Prometheus host
- boolean
.spec .dashboard .prometheusEndpointSSLVerify
Whether to verify the ssl endpoint for prometheus. Set to false for a self-signed cert.
- boolean
.spec .dashboard .ssl
SSL determines whether SSL should be used
- string
.spec .dashboard .urlPrefix
URLPrefix is a prefix for all URLs to use the dashboard with a reverse proxy
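A minimal sketch of the dashboard settings above; the port and URL prefix are illustrative choices, not defaults:

```yaml
spec:
  dashboard:
    enabled: true
    ssl: true
    port: 8443                    # illustrative; the default differs with/without SSL
    urlPrefix: /ceph-dashboard    # only needed behind a reverse proxy
```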
- string
.spec .dataDirHostPath
The path on the host where config and data can be persisted
- object | null
.spec .disruptionManagement
A spec for configuring disruption management.
- string
.spec .disruptionManagement .machineDisruptionBudgetNamespace
Deprecated. Namespace to look for MDBs by the machineDisruptionBudgetController
- boolean
.spec .disruptionManagement .manageMachineDisruptionBudgets
Deprecated. This enables management of machinedisruptionbudgets.
- boolean
.spec .disruptionManagement .managePodBudgets
This enables management of poddisruptionbudgets
- integer
.spec .disruptionManagement .osdMaintenanceTimeout
OSDMaintenanceTimeout sets how many additional minutes the DOWN/OUT interval is for drained failure domains. It only works if managePodBudgets is true. The default is 30 minutes.
- integer
.spec .disruptionManagement .pgHealthCheckTimeout
DEPRECATED: PGHealthCheckTimeout is no longer implemented
- string
.spec .disruptionManagement .pgHealthyRegex
PgHealthyRegex is the regular expression that is used to determine which PG states should be considered healthy. The default is
^(active\+clean|active\+clean\+scrubbing|active\+clean\+scrubbing\+deep)$
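The non-deprecated disruptionManagement fields above can be sketched as follows, using the documented defaults for the timeout and the healthy-PG regex:

```yaml
spec:
  disruptionManagement:
    managePodBudgets: true
    osdMaintenanceTimeout: 30   # minutes; the documented default
    pgHealthyRegex: ^(active\+clean|active\+clean\+scrubbing|active\+clean\+scrubbing\+deep)$
```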
- object | null
.spec .external
Whether the Ceph Cluster is running external to this Kubernetes cluster. The mon, mgr, osd, mds, and discover daemons will not be created for external clusters.
- boolean
.spec .external .enable
Enable determines whether external mode is enabled or not
- object | null
.spec .healthCheck
Internal daemon healthchecks and liveness probe
- object | null
.spec .healthCheck .daemonHealth
DaemonHealth is the health check for a given daemon
- object | null
.spec .healthCheck .daemonHealth .mon
Monitor represents the health check settings for the Ceph monitor
- boolean
.spec .healthCheck .daemonHealth .mon .disabled
- string
.spec .healthCheck .daemonHealth .mon .interval
Interval is the interval, in seconds or minutes, at which the health check runs, e.g. 60s for 60 seconds
- string
.spec .healthCheck .daemonHealth .mon .timeout
- object | null
.spec .healthCheck .daemonHealth .osd
ObjectStorageDaemon represents the health check settings for the Ceph OSDs
- boolean
.spec .healthCheck .daemonHealth .osd .disabled
- string
.spec .healthCheck .daemonHealth .osd .interval
Interval is the interval, in seconds or minutes, at which the health check runs, e.g. 60s for 60 seconds
- string
.spec .healthCheck .daemonHealth .osd .timeout
- object | null
.spec .healthCheck .daemonHealth .status
Status represents the health check settings for the Ceph health
- boolean
.spec .healthCheck .daemonHealth .status .disabled
- string
.spec .healthCheck .daemonHealth .status .interval
Interval is the interval, in seconds or minutes, at which the health check runs, e.g. 60s for 60 seconds
- string
.spec .healthCheck .daemonHealth .status .timeout
- object
.spec .healthCheck .livenessProbe
LivenessProbe allows changing the livenessProbe configuration for a given daemon
- object
.spec .healthCheck .startupProbe
StartupProbe allows changing the startupProbe configuration for a given daemon
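The healthCheck fields above fit together as shown; the interval values are illustrative overrides, not defaults:

```yaml
spec:
  healthCheck:
    daemonHealth:
      mon:
        disabled: false
        interval: 45s    # illustrative override of the check interval
      osd:
        disabled: false
        interval: 60s
      status:
        disabled: false
        interval: 60s
    livenessProbe:
      mon:
        disabled: false
```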
- object | null
.spec .labels
The labels-related configuration to add/set on each Pod related object.
- object | null
.spec .logCollector
Logging represents logging settings
- boolean
.spec .logCollector .enabled
Enabled represents whether the log collector is enabled
- integer | string
.spec .logCollector .maxLogSize
MaxLogSize is the maximum size of the log per Ceph daemon. Must be at least 1M.
- string
.spec .logCollector .periodicity
Periodicity is the periodicity of the log rotation.
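A short sketch of the logCollector settings above; the rotation period and size shown are illustrative:

```yaml
spec:
  logCollector:
    enabled: true
    periodicity: daily   # illustrative rotation period
    maxLogSize: 500M     # must be at least 1M
```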
- object | null
.spec .mgr
A spec for mgr related options
- boolean
.spec .mgr .allowMultiplePerNode
AllowMultiplePerNode allows running multiple managers on the same node (not recommended)
- integer
.spec .mgr .count
Count is the number of manager daemons to run
- array | null
.spec .mgr .modules
Modules is the list of ceph manager modules to enable/disable
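The mgr fields above can be sketched as below; the module names listed are illustrative examples of mgr modules, not a required set:

```yaml
spec:
  mgr:
    count: 2
    allowMultiplePerNode: false
    modules:
      # Illustrative module entries; any Ceph mgr module can be listed here
      - name: pg_autoscaler
        enabled: true
      - name: rook
        enabled: true
```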
- object | null
.spec .mon
A spec for mon related options
- boolean
.spec .mon .allowMultiplePerNode
AllowMultiplePerNode determines if we can run multiple monitors on the same node (not recommended)
- integer
.spec .mon .count
Count is the number of Ceph monitors
- array
.spec .mon .externalMonIDs
ExternalMonIDs - optional list of monitor IDs which are deployed externally and not managed by Rook. If set, Rook will not remove mons with given IDs from quorum. This parameter is used only for a local Rook cluster running in normal mode and will be ignored if external or stretched mode is used.
- string
.spec .mon .failureDomainLabel
- object
.spec .mon .stretchCluster
StretchCluster is the stretch cluster specification
- string
.spec .mon .stretchCluster .failureDomainLabel
FailureDomainLabel is the failure domain label name (e.g. zone)
- string
.spec .mon .stretchCluster .subFailureDomain
SubFailureDomain is the failure domain within a zone
- array | null
.spec .mon .stretchCluster .zones
Zones is the list of zones
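The mon count and stretch-cluster fields above combine as in this sketch; the zone names and failure domain label are illustrative assumptions:

```yaml
spec:
  mon:
    count: 5
    allowMultiplePerNode: false
    stretchCluster:
      failureDomainLabel: topology.kubernetes.io/zone   # illustrative label key
      subFailureDomain: host
      zones:
        - name: zone-a    # hypothetical zone names
          arbiter: true   # the zone holding the arbiter mon
        - name: zone-b
        - name: zone-c
```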
- object
.spec .mon .volumeClaimTemplate
VolumeClaimTemplate is the PVC definition
- object
.spec .mon .volumeClaimTemplate .metadata
Standard object’s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
- object
.spec .mon .volumeClaimTemplate .metadata .annotations
- array
.spec .mon .volumeClaimTemplate .metadata .finalizers
- object
.spec .mon .volumeClaimTemplate .metadata .labels
- string
.spec .mon .volumeClaimTemplate .metadata .name
- string
.spec .mon .volumeClaimTemplate .metadata .namespace
- object
.spec .mon .volumeClaimTemplate .spec
spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
- array
.spec .mon .volumeClaimTemplate .spec .accessModes
accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
- object
.spec .mon .volumeClaimTemplate .spec .dataSource
dataSource field can be used to specify either:
- An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot)
- An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.
- string
.spec .mon .volumeClaimTemplate .spec .dataSource .apiGroup
APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
- string required
.spec .mon .volumeClaimTemplate .spec .dataSource .kind
Kind is the type of resource being referenced
- string required
.spec .mon .volumeClaimTemplate .spec .dataSource .name
Name is the name of resource being referenced
- object
.spec .mon .volumeClaimTemplate .spec .dataSourceRef
dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn’t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn’t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef:
- While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects.
- While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified.
- While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
- string
.spec .mon .volumeClaimTemplate .spec .dataSourceRef .apiGroup
APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
- string required
.spec .mon .volumeClaimTemplate .spec .dataSourceRef .kind
Kind is the type of resource being referenced
- string required
.spec .mon .volumeClaimTemplate .spec .dataSourceRef .name
Name is the name of resource being referenced
- string
.spec .mon .volumeClaimTemplate .spec .dataSourceRef .namespace
Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace’s owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
- object
.spec .mon .volumeClaimTemplate .spec .resources
resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
- object
.spec .mon .volumeClaimTemplate .spec .resources .limits
Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
- object
.spec .mon .volumeClaimTemplate .spec .resources .requests
Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
- object
.spec .mon .volumeClaimTemplate .spec .selector
selector is a label query over volumes to consider for binding.
- array
.spec .mon .volumeClaimTemplate .spec .selector .matchExpressions
matchExpressions is a list of label selector requirements. The requirements are ANDed.
- string required
.spec .mon .volumeClaimTemplate .spec .selector .matchExpressions[] .key
key is the label key that the selector applies to.
- string required
.spec .mon .volumeClaimTemplate .spec .selector .matchExpressions[] .operator
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
- array
.spec .mon .volumeClaimTemplate .spec .selector .matchExpressions[] .values
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
- object
.spec .mon .volumeClaimTemplate .spec .selector .matchLabels
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
- string
.spec .mon .volumeClaimTemplate .spec .storageClassName
storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
- string
.spec .mon .volumeClaimTemplate .spec .volumeAttributesClassName
volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName: it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim, but it’s not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default).
- string
.spec .mon .volumeClaimTemplate .spec .volumeMode
volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec.
- string
.spec .mon .volumeClaimTemplate .spec .volumeName
volumeName is the binding reference to the PersistentVolume backing this claim.
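A common use of the volumeClaimTemplate fields above is to back each mon with a PVC; in this sketch the StorageClass name and label are hypothetical:

```yaml
spec:
  mon:
    volumeClaimTemplate:
      metadata:
        labels:
          app: rook-ceph-mon        # hypothetical label
      spec:
        storageClassName: fast-ssd  # hypothetical StorageClass name
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
```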
- array
.spec .mon .zones
Zones are specified when we want to provide zonal awareness to mons
- boolean
.spec .mon .zones[] .arbiter
Arbiter determines if the zone contains the arbiter used for stretch cluster mode
- string
.spec .mon .zones[] .name
Name is the name of the zone
- object
.spec .mon .zones[] .volumeClaimTemplate
VolumeClaimTemplate is the PVC template
- object
.spec .mon .zones[] .volumeClaimTemplate .metadata
Standard object’s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
- object
.spec .mon .zones[] .volumeClaimTemplate .metadata .annotations
- array
.spec .mon .zones[] .volumeClaimTemplate .metadata .finalizers
- object
.spec .mon .zones[] .volumeClaimTemplate .metadata .labels
- string
.spec .mon .zones[] .volumeClaimTemplate .metadata .name
- string
.spec .mon .zones[] .volumeClaimTemplate .metadata .namespace
- object
.spec .mon .zones[] .volumeClaimTemplate .spec
spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
- array
.spec .mon .zones[] .volumeClaimTemplate .spec .accessModes
accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
- object
.spec .mon .zones[] .volumeClaimTemplate .spec .dataSource
dataSource field can be used to specify either:
- An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot)
- An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.
- string
.spec .mon .zones[] .volumeClaimTemplate .spec .dataSource .apiGroup
APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
- string required
.spec .mon .zones[] .volumeClaimTemplate .spec .dataSource .kind
Kind is the type of resource being referenced
- string required
.spec .mon .zones[] .volumeClaimTemplate .spec .dataSource .name
Name is the name of resource being referenced
- object
.spec .mon .zones[] .volumeClaimTemplate .spec .dataSourceRef
dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn’t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn’t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef:
- While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects.
- While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified.
- While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
- string
.spec .mon .zones[] .volumeClaimTemplate .spec .dataSourceRef .apiGroup
APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
- string required
.spec .mon .zones[] .volumeClaimTemplate .spec .dataSourceRef .kind
Kind is the type of resource being referenced
- string required
.spec .mon .zones[] .volumeClaimTemplate .spec .dataSourceRef .name
Name is the name of resource being referenced
- string
.spec .mon .zones[] .volumeClaimTemplate .spec .dataSourceRef .namespace
Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace’s owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
- object
.spec .mon .zones[] .volumeClaimTemplate .spec .resources
resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
- object
.spec .mon .zones[] .volumeClaimTemplate .spec .resources .limits
Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
- object
.spec .mon .zones[] .volumeClaimTemplate .spec .resources .requests
Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
- object
.spec .mon .zones[] .volumeClaimTemplate .spec .selector
selector is a label query over volumes to consider for binding.
- array
.spec .mon .zones[] .volumeClaimTemplate .spec .selector .matchExpressions
matchExpressions is a list of label selector requirements. The requirements are ANDed.
- string required
.spec .mon .zones[] .volumeClaimTemplate .spec .selector .matchExpressions[] .key
key is the label key that the selector applies to.
- string required
.spec .mon .zones[] .volumeClaimTemplate .spec .selector .matchExpressions[] .operator
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
- array
.spec .mon .zones[] .volumeClaimTemplate .spec .selector .matchExpressions[] .values
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
- object
.spec .mon .zones[] .volumeClaimTemplate .spec .selector .matchLabels
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
- string
.spec .mon .zones[] .volumeClaimTemplate .spec .storageClassName
storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
- string
.spec .mon .zones[] .volumeClaimTemplate .spec .volumeAttributesClassName
volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName: it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim, but it’s not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default).
- string
.spec .mon .zones[] .volumeClaimTemplate .spec .volumeMode
volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec.
- string
.spec .mon .zones[] .volumeClaimTemplate .spec .volumeName
volumeName is the binding reference to the PersistentVolume backing this claim.
- object | null
.spec .monitoring
Prometheus based Monitoring settings
- boolean
.spec .monitoring .enabled
Enabled determines whether to create the prometheus rules for the ceph cluster. If true, the prometheus types must exist or the creation will fail. Default is false.
- object
.spec .monitoring .exporter
Ceph exporter configuration
- integer
.spec .monitoring .exporter .perfCountersPrioLimit
Only performance counters greater than or equal to this option are fetched
- integer
.spec .monitoring .exporter .statsPeriodSeconds
Time to wait before sending requests again to exporter server (seconds)
- array | null
.spec .monitoring .externalMgrEndpoints
ExternalMgrEndpoints points to an existing Ceph prometheus exporter endpoint
- integer
.spec .monitoring .externalMgrPrometheusPort
ExternalMgrPrometheusPort is the Prometheus exporter port
- string
.spec .monitoring .interval
Interval determines prometheus scrape interval
- boolean
.spec .monitoring .metricsDisabled
Whether to disable the metrics reported by Ceph. If false, the prometheus mgr module and Ceph exporter are enabled. If true, the prometheus mgr module and Ceph exporter are both disabled. Default is false.
- integer
.spec .monitoring .port
Port is the prometheus server port
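The monitoring fields above fit together as in this sketch; the interval and exporter values are illustrative, and enabling monitoring requires the Prometheus CRDs to be installed first:

```yaml
spec:
  monitoring:
    enabled: true        # the prometheus types must exist or creation will fail
    interval: 30s        # illustrative scrape interval
    metricsDisabled: false
    exporter:
      perfCountersPrioLimit: 5
      statsPeriodSeconds: 5
```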
- object | null
.spec .network
Network related configuration
- object | null
.spec .network .addressRanges
AddressRanges specify a list of CIDRs that Rook will apply to Ceph’s ‘public_network’ and/or ‘cluster_network’ configurations. This config section may be used for the “host” or “multus” network providers.
- array
.spec .network .addressRanges .cluster
Cluster defines a list of CIDRs to use for Ceph cluster network communication.
- array
.spec .network .addressRanges .public
Public defines a list of CIDRs to use for Ceph public network communication.
- object | null
.spec .network .connections
Settings for network connections such as compression and encryption across the wire.
- object | null
.spec .network .connections .compression
Compression settings for the network connections.
- boolean
.spec .network .connections .compression .enabled
Whether to compress the data in transit across the wire. The default is not set.
- object | null
.spec .network .connections .encryption
Encryption settings for the network connections.
- boolean
.spec .network .connections .encryption .enabled
Whether to encrypt the data in transit across the wire to prevent eavesdropping the data on the network. The default is not set. Even if encryption is not enabled, clients still establish a strong initial authentication for the connection and data integrity is still validated with a crc check. When encryption is enabled, all communication between clients and Ceph daemons, or between Ceph daemons will be encrypted.
- boolean
.spec .network .connections .requireMsgr2
Whether to require msgr2 (port 3300) even if compression or encryption are not enabled. If true, the msgr1 port (6789) will be disabled. Requires a kernel that supports msgr2 (kernel 5.11 or CentOS 8.4 or newer).
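The connection settings above can be sketched as follows; whether to enable each option depends on your performance and security requirements, and requireMsgr2 carries the kernel-version constraint noted above:

```yaml
spec:
  network:
    connections:
      encryption:
        enabled: true    # encrypt all traffic between clients and daemons
      compression:
        enabled: false   # compression of in-transit data
      requireMsgr2: true # disables the msgr1 port (6789)
```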
- boolean
.spec .network .dualStack
DualStack determines whether Ceph daemons should listen on both IPv4 and IPv6
- boolean
.spec .network .hostNetwork
HostNetwork to enable host network. If host networking is enabled or disabled on a running cluster, then the operator will automatically fail over all the mons to apply the new network settings.
- string | null
.spec .network .ipFamily
IPFamily is the single stack IPv6 or IPv4 protocol
- object
.spec .network .multiClusterService
Enable multiClusterService to export the Services between peer clusters
- string
.spec .network .multiClusterService .clusterID
ClusterID uniquely identifies a cluster. It is used as a prefix to nslookup exported services. For example: &lt;clusterid&gt;.&lt;svc&gt;.&lt;ns&gt;.svc.clusterset.local
- boolean
.spec .network .multiClusterService .enabled
Enable multiClusterService to export the mon and OSD services to peer cluster. Ensure that peer clusters are connected using an MCS API compatible application, like Globalnet Submariner.
- string | null
.spec .network .provider
Provider is what provides network connectivity to the cluster e.g. “host” or “multus”. If the Provider is updated from being empty to “host” on a running cluster, then the operator will automatically fail over all the mons to apply the “host” network settings.
- object | null
.spec .network .selectors
Selectors define NetworkAttachmentDefinitions to be used for Ceph public and/or cluster networks when the “multus” network provider is used. This config section is not used for other network providers.
Valid keys are “public” and “cluster”. Refer to Ceph networking documentation for more: https://docs.ceph.com/en/latest/rados/configuration/network-config-ref/
Refer to Multus network annotation documentation for help selecting values: https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/how-to-use.md#run-pod-with-network-annotation
Rook will make a best-effort attempt to automatically detect CIDR address ranges for given network attachment definitions. Rook’s methods are robust but may be imprecise for sufficiently complicated networks. Rook’s auto-detection process obtains a new IP address lease for each CephCluster reconcile. If Rook fails to detect, incorrectly detects, only partially detects, or if underlying networks do not support reusing old IP addresses, it is best to use the ‘addressRanges’ config section to specify CIDR ranges for the Ceph cluster.
As a contrived example, one can use a theoretical Kubernetes-wide network for Ceph client traffic and a theoretical Rook-only network for Ceph replication traffic as shown:
selectors:
  public: "default/cluster-fast-net"
  cluster: "rook-ceph/ceph-backend-net"
- object | null
.spec .placement
- object | null
.spec .priorityClassNames
PriorityClassNames sets priority classes on components
- boolean
.spec .removeOSDsIfOutAndSafeToRemove
Remove the OSD that is out and safe to remove only if this option is true
- object | null
.spec .resources
Resources set resource requests and limits
- object | null
.spec .security
Security represents security settings
- object | null
.spec .security .keyRotation
KeyRotation defines options for Key Rotation.
- boolean
.spec .security .keyRotation .enabled
Enabled represents whether the key rotation is enabled.
- string
.spec .security .keyRotation .schedule
Schedule represents the cron schedule for key rotation.
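The two key-rotation fields combine as follows; the cron expression is an illustrative weekly schedule, not a recommended value:

```yaml
# Fragment of a CephCluster spec; the schedule is illustrative.
spec:
  security:
    keyRotation:
      enabled: true
      schedule: "@weekly"   # standard cron syntax
```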
- object | null
.spec .security .kms
KeyManagementService is the main Key Management option
- object | null
.spec .security .kms .connectionDetails
ConnectionDetails contains the KMS connection details (address, port etc)
- string
.spec .security .kms .tokenSecretName
TokenSecretName is the kubernetes secret containing the KMS token
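A sketch of a KMS configuration assuming a Vault backend; the address and secret name are hypothetical, and the exact connectionDetails keys depend on the KMS provider in use:

```yaml
# Fragment of a CephCluster spec; address and secret name are hypothetical.
spec:
  security:
    kms:
      connectionDetails:
        KMS_PROVIDER: vault                          # provider selector
        VAULT_ADDR: https://vault.default.svc:8200   # hypothetical Vault address
      tokenSecretName: rook-vault-token              # hypothetical Secret holding the KMS token
```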
- boolean
.spec .skipUpgradeChecks
SkipUpgradeChecks defines if an upgrade should be forced even if one of the checks fails
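The upgrade safety switches described in this document fit together as shown below; these are illustrative conservative settings, not defaults:

```yaml
# Fragment of a CephCluster spec; illustrative conservative upgrade settings.
spec:
  skipUpgradeChecks: false               # keep the pre-upgrade health checks
  upgradeOSDRequiresHealthyPGs: true     # wait for clean PGs before upgrading OSDs
  waitTimeoutForHealthyOSDInMinutes: 10  # how long to wait before skipping an OSD
```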
- object | null
.spec .storage
A spec for available storage in the cluster and how it should be used
- boolean
.spec .storage .allowDeviceClassUpdate
Whether to allow updating the device class after the OSD is initially provisioned
- boolean
.spec .storage .allowOsdCrushWeightUpdate
Whether Rook will resize the OSD CRUSH weight when the OSD PVC size is increased. This allows cluster data to be rebalanced to make most effective use of new OSD space. The default is false since data rebalancing can cause temporary cluster slowdown.
- number | null
.spec .storage .backfillFullRatio
BackfillFullRatio is the ratio at which the cluster is too full for backfill. Backfill will be disabled if above this threshold. Default is 0.90.
- object | null
.spec .storage .config
- string
.spec .storage .deviceFilter
A regular expression to allow more fine-grained selection of devices on nodes across the cluster
- string
.spec .storage .devicePathFilter
A regular expression to allow more fine-grained selection of devices with path names
- array | null
.spec .storage .devices
List of devices to use as storage devices
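A sketch of cluster-wide device selection combining the filter with an explicit device list; the device names and regex are hypothetical:

```yaml
# Fragment of a CephCluster spec; device names and regex are hypothetical.
spec:
  storage:
    useAllNodes: true
    useAllDevices: false
    deviceFilter: "^sd[b-d]"   # regex over device names
    devices:
      - name: sde              # an explicitly listed device
        config:
          deviceClass: ssd
```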
- integer
.spec .storage .flappingRestartIntervalHours
FlappingRestartIntervalHours defines the time for which OSD pods that failed with a zero exit code will sleep before restarting. This is needed for OSD flapping, where OSD daemons are marked down more than 5 times in 600 seconds by Ceph. Preventing the OSD pods from restarting immediately in such scenarios prevents Rook from marking the OSD as up and thus peering of the PGs mapped to the OSD. The user needs to manually restart the OSD pod if they manage to fix the underlying OSD flapping issue before the restart interval. The sleep is disabled if this interval is set to 0.
- number | null
.spec .storage .fullRatio
FullRatio is the ratio at which the cluster is considered full and ceph will stop accepting writes. Default is 0.95.
- object
.spec .storage .migration
Migration handles the OSD migration
- string
.spec .storage .migration .confirmation
A user confirmation to migrate the OSDs. It destroys each OSD one at a time, cleans up the backing disk, and prepares an OSD with the same ID on that disk
- number | null
.spec .storage .nearFullRatio
NearFullRatio is the ratio at which the cluster is considered nearly full and will raise a ceph health warning. Default is 0.85.
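The three fullness ratios shown together with their documented defaults:

```yaml
# Fragment of a CephCluster spec; these are the documented default ratios.
spec:
  storage:
    fullRatio: 0.95          # stop accepting writes above this
    nearFullRatio: 0.85      # raise a ceph health warning above this
    backfillFullRatio: 0.90  # disable backfill above this
```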
- array | null
.spec .storage .nodes
- boolean
.spec .storage .onlyApplyOSDPlacement
- boolean
.spec .storage .scheduleAlways
Whether to always schedule OSDs on a node even if the node is not currently schedulable or ready
- array | null
.spec .storage .storageClassDeviceSets
- object
.spec .storage .store
OSDStore is the backend storage type used for creating the OSDs
- string
.spec .storage .store .type
Type of backend storage to be used while creating OSDs. If empty, then bluestore will be used
- string
.spec .storage .store .updateStore
UpdateStore updates the backend store for existing OSDs. It destroys each OSD one at a time, cleans up the backing disk, and prepares the same OSD on that disk
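A sketch of the OSD store settings. Per the descriptions above, updateStore is a destructive confirmation; the exact confirmation string is an assumption here:

```yaml
# Fragment of a CephCluster spec; the updateStore value is assumed.
spec:
  storage:
    store:
      type: bluestore
      updateStore: yes-really-update-store  # destroys and re-prepares each OSD in turn
```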
- boolean
.spec .storage .useAllDevices
Whether to consume all the storage devices found on a machine
- boolean
.spec .storage .useAllNodes
- array
.spec .storage .volumeClaimTemplates
PersistentVolumeClaims to use as storage
- object
.spec .storage .volumeClaimTemplates[] .metadata
Standard object’s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
- object
.spec .storage .volumeClaimTemplates[] .metadata .annotations
- array
.spec .storage .volumeClaimTemplates[] .metadata .finalizers
- object
.spec .storage .volumeClaimTemplates[] .metadata .labels
- string
.spec .storage .volumeClaimTemplates[] .metadata .name
- string
.spec .storage .volumeClaimTemplates[] .metadata .namespace
- object
.spec .storage .volumeClaimTemplates[] .spec
spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
- array
.spec .storage .volumeClaimTemplates[] .spec .accessModes
accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1
- object
.spec .storage .volumeClaimTemplates[] .spec .dataSource
dataSource field can be used to specify either:
- An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot)
- An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.
- string
.spec .storage .volumeClaimTemplates[] .spec .dataSource .apiGroup
APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
- string required
.spec .storage .volumeClaimTemplates[] .spec .dataSource .kind
Kind is the type of resource being referenced
- string required
.spec .storage .volumeClaimTemplates[] .spec .dataSource .name
Name is the name of resource being referenced
- object
.spec .storage .volumeClaimTemplates[] .spec .dataSourceRef
dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn’t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn’t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef:
- While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects.
- While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified.
- While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
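A sketch of populating a claim from a volume snapshot via dataSourceRef; the snapshot name is hypothetical:

```yaml
# Fragment of a PVC spec; the snapshot name is hypothetical.
spec:
  dataSourceRef:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: osd-snapshot-1
```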
- string
.spec .storage .volumeClaimTemplates[] .spec .dataSourceRef .apiGroup
APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
- string required
.spec .storage .volumeClaimTemplates[] .spec .dataSourceRef .kind
Kind is the type of resource being referenced
- string required
.spec .storage .volumeClaimTemplates[] .spec .dataSourceRef .name
Name is the name of resource being referenced
- string
.spec .storage .volumeClaimTemplates[] .spec .dataSourceRef .namespace
Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace’s owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled.
- object
.spec .storage .volumeClaimTemplates[] .spec .resources
resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources
- object
.spec .storage .volumeClaimTemplates[] .spec .resources .limits
Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
- object
.spec .storage .volumeClaimTemplates[] .spec .resources .requests
Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
- object
.spec .storage .volumeClaimTemplates[] .spec .selector
selector is a label query over volumes to consider for binding.
- array
.spec .storage .volumeClaimTemplates[] .spec .selector .matchExpressions
matchExpressions is a list of label selector requirements. The requirements are ANDed.
- string required
.spec .storage .volumeClaimTemplates[] .spec .selector .matchExpressions[] .key
key is the label key that the selector applies to.
- string required
.spec .storage .volumeClaimTemplates[] .spec .selector .matchExpressions[] .operator
operator represents a key’s relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
- array
.spec .storage .volumeClaimTemplates[] .spec .selector .matchExpressions[] .values
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
- object
.spec .storage .volumeClaimTemplates[] .spec .selector .matchLabels
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is “key”, the operator is “In”, and the values array contains only “value”. The requirements are ANDed.
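matchLabels and matchExpressions combine (all requirements ANDed) as in this sketch; the label keys and values are hypothetical:

```yaml
# Fragment of a PVC selector; labels and values are hypothetical.
selector:
  matchLabels:
    disk-tier: fast                 # shorthand for: key In [value]
  matchExpressions:
    - key: topology.kubernetes.io/zone
      operator: In
      values: ["us-east-1a"]
```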
- string
.spec .storage .volumeClaimTemplates[] .spec .storageClassName
storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1
- string
.spec .storage .volumeClaimTemplates[] .spec .volumeAttributesClassName
volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it's not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default).
- string
.spec .storage .volumeClaimTemplates[] .spec .volumeMode
volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec.
- string
.spec .storage .volumeClaimTemplates[] .spec .volumeName
volumeName is the binding reference to the PersistentVolume backing this claim.
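Putting the PVC template fields together, a minimal volumeClaimTemplates entry for OSDs backed by PVCs might look like this; the storage class and size are hypothetical:

```yaml
# Fragment of a CephCluster spec; storage class and size are hypothetical.
spec:
  storage:
    volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          volumeMode: Block       # raw block device for the OSD
          storageClassName: gp3   # hypothetical storage class
          resources:
            requests:
              storage: 100Gi
```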
- boolean
.spec .upgradeOSDRequiresHealthyPGs
UpgradeOSDRequiresHealthyPGs defines whether the OSD upgrade requires PGs to be clean. If set to true, the OSD upgrade process won't start until PGs are healthy. This configuration is ignored if skipUpgradeChecks is true. Default is false.
- integer
.spec .waitTimeoutForHealthyOSDInMinutes
WaitTimeoutForHealthyOSDInMinutes defines the time the operator waits before an OSD can be stopped for upgrade or restart. If the timeout is exceeded and the OSD is not ok to stop, then the operator skips the upgrade for the current OSD and proceeds with the next one if continueUpgradeAfterChecksEvenIfNotHealthy is false. If continueUpgradeAfterChecksEvenIfNotHealthy is true, then the operator continues with the upgrade of an OSD even if it is not ok to stop after the timeout. This timeout is not applied if skipUpgradeChecks is true. The default wait timeout is 10 minutes.
- object | null
.status
ClusterStatus represents the status of a Ceph cluster
- object
.status .ceph
CephStatus is the detailed health of a Ceph Cluster
- object
.status .ceph .capacity
Capacity is the capacity information of a Ceph Cluster
- integer
.status .ceph .capacity .bytesAvailable
- integer
.status .ceph .capacity .bytesTotal
- integer
.status .ceph .capacity .bytesUsed
- string
.status .ceph .capacity .lastUpdated
- object
.status .ceph .details
- string
.status .ceph .fsid
- string
.status .ceph .health
- string
.status .ceph .lastChanged
- string
.status .ceph .lastChecked
- string
.status .ceph .previousHealth
- object
.status .ceph .versions
CephDaemonsVersions shows the current Ceph version for the different Ceph daemons
- object
.status .ceph .versions .cephfs-mirror
CephFSMirror shows CephFSMirror Ceph version
- object
.status .ceph .versions .mds
Mds shows Mds Ceph version
- object
.status .ceph .versions .mgr
Mgr shows Mgr Ceph version
- object
.status .ceph .versions .mon
Mon shows Mon Ceph version
- object
.status .ceph .versions .osd
Osd shows Osd Ceph version
- object
.status .ceph .versions .overall
Overall shows overall Ceph version
- object
.status .ceph .versions .rbd-mirror
RbdMirror shows RbdMirror Ceph version
- object
.status .ceph .versions .rgw
Rgw shows Rgw Ceph version
- array
.status .conditions
- string
.status .conditions[] .lastHeartbeatTime
- string
.status .conditions[] .lastTransitionTime
- string
.status .conditions[] .message
- string
.status .conditions[] .reason
ConditionReason is a reason for a condition
- string
.status .conditions[] .status
- string
.status .conditions[] .type
ConditionType represents a resource's status
- string
.status .message
- integer
.status .observedGeneration
ObservedGeneration is the latest generation observed by the controller.
- string
.status .phase
ConditionType represents a resource's status
- string
.status .state
ClusterState represents the state of a Ceph Cluster
- object
.status .storage
CephStorage represents flavors of Ceph Cluster Storage
- object
.status .storage .deprecatedOSDs
- array
.status .storage .deviceClasses
- string
.status .storage .deviceClasses[] .name
- object
.status .storage .osd
OSDStatus represents the OSD status of the Ceph cluster
- object
.status .storage .osd .migrationStatus
MigrationStatus represents the current status of any OSD migration.
- integer
.status .storage .osd .migrationStatus .pending
- object
.status .storage .osd .storeType
StoreType is a mapping between the OSD backend stores and number of OSDs using these stores
- object
.status .version
ClusterVersion represents the version of a Ceph Cluster
- string
.status .version .image
- string
.status .version .version