cortexproject/cortex-helm-chart: Helm chart for Cortex
alertmanager.affinity
object
{}
alertmanager.annotations
object
{}
alertmanager.containerSecurityContext.enabled
bool
true
alertmanager.containerSecurityContext.readOnlyRootFilesystem
bool
true
alertmanager.enabled
bool
true
alertmanager.env
list
[]
Extra env variables to pass to the cortex container
alertmanager.extraArgs
object
{}
Additional Cortex container arguments, e.g. log.level (debug, info, warn, error)
alertmanager.extraContainers
list
[]
Additional containers to be added to the cortex pod.
alertmanager.extraPorts
list
[]
Additional ports to the cortex services. Useful to expose extra container ports.
alertmanager.extraVolumeMounts
list
[]
Extra volume mounts that will be added to the cortex container
alertmanager.extraVolumes
list
[]
Additional volumes to the cortex pod.
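extraVolumes and extraVolumeMounts are typically set together: the volume is declared at the pod level and then mounted into the cortex container. A minimal sketch, assuming a hypothetical ConfigMap named my-alertmanager-templates:

```yaml
alertmanager:
  extraVolumes:
    - name: extra-templates            # hypothetical volume name
      configMap:
        name: my-alertmanager-templates
  extraVolumeMounts:
    - name: extra-templates            # must match the volume name above
      mountPath: /etc/alertmanager/templates
      readOnly: true
```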
alertmanager.initContainers
list
[]
Init containers to be added to the cortex pod.
alertmanager.livenessProbe.httpGet.path
string
"/ready"
alertmanager.livenessProbe.httpGet.port
string
"http-metrics"
alertmanager.nodeSelector
object
{}
alertmanager.persistentVolume.accessModes
list
["ReadWriteOnce"]
Alertmanager data Persistent Volume access modes. Must match those of the existing PV or dynamic provisioner. Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
alertmanager.persistentVolume.annotations
object
{}
Alertmanager data Persistent Volume Claim annotations
alertmanager.persistentVolume.enabled
bool
true
If true and alertmanager.statefulSet.enabled is true, Alertmanager will create/use a Persistent Volume Claim. If false, an emptyDir is used.
alertmanager.persistentVolume.retentionPolicy
object
{}
StatefulSetAutoDeletePVC feature https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-retention
alertmanager.persistentVolume.size
string
"2Gi"
Alertmanager data Persistent Volume size
alertmanager.persistentVolume.storageClass
string
nil
Alertmanager data Persistent Volume Storage Class. If defined, the given storageClassName is used. If set to "-", storageClassName: "" is set, which disables dynamic provisioning. If undefined (the default) or set to null, no storageClassName spec is set and the default provisioner is chosen.
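The three storageClass behaviours can be sketched in values.yaml as follows (the class name fast-ssd is a hypothetical example):

```yaml
alertmanager:
  persistentVolume:
    # storageClass: null      # default: no storageClassName is set; the default provisioner is used
    # storageClass: "-"       # renders storageClassName: "", disabling dynamic provisioning
    storageClass: "fast-ssd"  # hypothetical class name; renders storageClassName: fast-ssd
```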
alertmanager.persistentVolume.subPath
string
""
Subdirectory of the Alertmanager data Persistent Volume to mount. Useful if the volume's root directory is not empty.
alertmanager.podAnnotations
object
{"prometheus.io/port":"8080","prometheus.io/scrape":"true"}
Pod Annotations
alertmanager.podDisruptionBudget
object
{"maxUnavailable":1}
If not set then a PodDisruptionBudget will not be created
alertmanager.podLabels
object
{}
Pod Labels
alertmanager.readinessProbe.httpGet.path
string
"/ready"
alertmanager.readinessProbe.httpGet.port
string
"http-metrics"
alertmanager.replicas
int
1
alertmanager.resources
object
{}
alertmanager.securityContext
object
{}
alertmanager.service.annotations
object
{}
alertmanager.service.labels
object
{}
alertmanager.serviceAccount.name
string
""
"" disables the individual serviceAccount and uses the global serviceAccount for that component
alertmanager.serviceMonitor.additionalLabels
object
{}
alertmanager.serviceMonitor.enabled
bool
false
alertmanager.serviceMonitor.extraEndpointSpec
object
{}
Additional endpoint configuration https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#endpoint
alertmanager.serviceMonitor.metricRelabelings
list
[]
alertmanager.serviceMonitor.podTargetLabels
list
[]
alertmanager.serviceMonitor.relabelings
list
[]
alertmanager.sidecar.containerSecurityContext.enabled
bool
true
alertmanager.sidecar.containerSecurityContext.readOnlyRootFilesystem
bool
true
alertmanager.sidecar.defaultFolderName
string
""
The default folder name; if set, the sidecar creates a subfolder with this name under the target folder and puts the rules in there instead
alertmanager.sidecar.enableUniqueFilenames
bool
false
A value of true will produce unique filenames to avoid issues when duplicate data keys exist between ConfigMaps and/or Secrets within the same or multiple Namespaces.
alertmanager.sidecar.enabled
bool
false
Enables a sidecar that collects the ConfigMaps with the specified label and stores the included files in the respective folders
alertmanager.sidecar.folder
string
"/data"
Folder where the files should be placed.
alertmanager.sidecar.folderAnnotation
string
"k8s-sidecar-target-directory"
The annotation the sidecar will look for in ConfigMaps and/or Secrets to override the destination folder for files. If the value is a relative path, it will be relative to FOLDER
alertmanager.sidecar.healthPort
int
8081
The port the kiwigrid/k8s-sidecar listens on for health checks. The image default matches the cortex default listen port (8080), so it must be overridden here.
alertmanager.sidecar.image.repository
string
"kiwigrid/k8s-sidecar"
alertmanager.sidecar.image.sha
string
""
alertmanager.sidecar.image.tag
string
"2.5.0"
alertmanager.sidecar.imagePullPolicy
string
"IfNotPresent"
alertmanager.sidecar.label
string
"cortex_alertmanager"
Label that should be used for filtering
alertmanager.sidecar.labelValue
string
""
The value of the label to filter your resources on. Leave it empty to match any value.
alertmanager.sidecar.readinessProbe.httpGet.path
string
"/healthz"
alertmanager.sidecar.readinessProbe.httpGet.port
string
"sidecar-health"
alertmanager.sidecar.readinessProbe.periodSeconds
int
5
alertmanager.sidecar.resource
string
"both"
The resource type that the operator will filter for. Can be configmap, secret or both
alertmanager.sidecar.resources
object
{}
alertmanager.sidecar.searchNamespace
string
""
The Namespace(s) from which resources will be watched. For multiple namespaces, use a comma-separated string like "default,test". If not set or set to ALL, it will watch all Namespaces.
alertmanager.sidecar.skipTlsVerify
bool
false
Set to true to skip TLS verification for Kubernetes API calls
alertmanager.sidecar.startupProbe.httpGet.path
string
"/healthz"
alertmanager.sidecar.startupProbe.httpGet.port
string
"sidecar-health"
alertmanager.sidecar.startupProbe.periodSeconds
int
5
alertmanager.sidecar.watchMethod
string
""
Determines how kopf-k8s-sidecar will run. If WATCH, it will run like a normal operator forever. If LIST, it will gather the matching ConfigMaps and Secrets currently present, write those files to the destination directory, and exit.
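Putting the sidecar options above together, a ConfigMap that the sidecar would pick up might look like the following sketch. The resource name and data contents are hypothetical; the label matches the default alertmanager.sidecar.label (any value matches while labelValue is ""), and the optional annotation matches the default folderAnnotation:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-alertmanager-config          # hypothetical name
  labels:
    cortex_alertmanager: "1"            # default sidecar.label; value is free-form here
  annotations:
    k8s-sidecar-target-directory: "alerts"  # optional; relative paths resolve under sidecar.folder (/data)
data:
  alertmanager.yaml: |                  # hypothetical file content
    route:
      receiver: default
    receivers:
      - name: default
```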
alertmanager.startupProbe.failureThreshold
int
10
alertmanager.startupProbe.httpGet.path
string
"/ready"
alertmanager.startupProbe.httpGet.port
string
"http-metrics"
alertmanager.statefulSet.enabled
bool
false
If true, use a statefulset instead of a deployment for pod management. This is useful for using a persistent volume for storing silences between restarts.
alertmanager.statefulStrategy.type
string
"RollingUpdate"
alertmanager.strategy.rollingUpdate.maxSurge
int
0
alertmanager.strategy.rollingUpdate.maxUnavailable
int
1
alertmanager.strategy.type
string
"RollingUpdate"
alertmanager.terminationGracePeriodSeconds
int
60
alertmanager.tolerations
list
[]
Tolerations for pod assignment ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
alertmanager.topologySpreadConstraints
list
[]
clusterDomain
string
"cluster.local"
Kubernetes cluster DNS domain
compactor.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].key
string
"app.kubernetes.io/component"
compactor.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].operator
string
"In"
compactor.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].values[0]
string
"compactor"
compactor.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey
string
"kubernetes.io/hostname"
compactor.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight
int
100
compactor.annotations
object
{}
compactor.containerSecurityContext.enabled
bool
true
compactor.containerSecurityContext.readOnlyRootFilesystem
bool
true
compactor.enabled
bool
true
compactor.env
list
[]
compactor.extraArgs
object
{}
Additional Cortex container arguments, e.g. log.level (debug, info, warn, error)
compactor.extraContainers
list
[]
compactor.extraPorts
list
[]
compactor.extraVolumeMounts
list
[]
compactor.extraVolumes
list
[]
compactor.initContainers
list
[]
compactor.livenessProbe
object
{}
compactor.nodeSelector
object
{}
compactor.persistentVolume.accessModes
list
["ReadWriteOnce"]
compactor data Persistent Volume access modes. Must match those of the existing PV or dynamic provisioner. Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
compactor.persistentVolume.annotations
object
{}
compactor data Persistent Volume Claim annotations
compactor.persistentVolume.enabled
bool
true
If true, the compactor will create/use a Persistent Volume Claim. If false, an emptyDir is used.
compactor.persistentVolume.retentionPolicy
object
{}
StatefulSetAutoDeletePVC feature https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-retention
compactor.persistentVolume.size
string
"2Gi"
compactor.persistentVolume.storageClass
string
nil
compactor data Persistent Volume Storage Class. If defined, the given storageClassName is used. If set to "-", storageClassName: "" is set, which disables dynamic provisioning. If undefined (the default) or set to null, no storageClassName spec is set and the default provisioner is chosen.
compactor.persistentVolume.subPath
string
""
Subdirectory of the compactor data Persistent Volume to mount. Useful if the volume's root directory is not empty.
compactor.podAnnotations
object
{"prometheus.io/port":"8080","prometheus.io/scrape":"true"}
Pod Annotations
compactor.podDisruptionBudget.maxUnavailable
string
"30%"
compactor.podLabels
object
{}
Pod Labels
compactor.readinessProbe.httpGet.path
string
"/ready"
compactor.readinessProbe.httpGet.port
string
"http-metrics"
compactor.replicas
int
1
compactor.resources
object
{}
compactor.securityContext
object
{}
compactor.service.annotations
object
{}
compactor.service.labels
object
{}
compactor.serviceAccount.name
string
""
"" disables the individual serviceAccount and uses the global serviceAccount for that component
compactor.serviceMonitor.additionalLabels
object
{}
compactor.serviceMonitor.enabled
bool
false
compactor.serviceMonitor.extraEndpointSpec
object
{}
Additional endpoint configuration https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#endpoint
compactor.serviceMonitor.metricRelabelings
list
[]
compactor.serviceMonitor.podTargetLabels
list
[]
compactor.serviceMonitor.relabelings
list
[]
compactor.startupProbe
object
{}
compactor.strategy.type
string
"RollingUpdate"
compactor.terminationGracePeriodSeconds
int
240
compactor.tolerations
list
[]
compactor.topologySpreadConstraints
list
[]
config.alertmanager.cluster
object
{"listen_address":"0.0.0.0:9094"}
Disable the alertmanager gossip cluster by setting listen_address to an empty string
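For example, gossip clustering could be turned off with the override described above (a sketch; useful for a single-replica alertmanager):

```yaml
config:
  alertmanager:
    cluster:
      listen_address: ""  # empty string disables the gossip cluster
```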
config.alertmanager.enable_api
bool
false
Enable the experimental alertmanager config api.
config.alertmanager.external_url
string
"/api/prom/alertmanager"
config.api.prometheus_http_prefix
string
"/prometheus"
config.api.response_compression_enabled
bool
true
Use GZIP compression for API responses. Some endpoints serve large YAML or JSON blobs which can benefit from compression.
config.auth_enabled
bool
false
config.blocks_storage.bucket_store.bucket_index.enabled
bool
true
config.blocks_storage.bucket_store.sync_dir
string
"/data/tsdb-sync"
config.blocks_storage.tsdb.dir
string
"/data/tsdb"
config.distributor.pool.health_check_ingesters
bool
true
config.distributor.shard_by_all_labels
bool
true
Distribute samples based on all labels, as opposed to solely by user and metric name.
config.frontend.log_queries_longer_than
string
"10s"
config.ingester.lifecycler.final_sleep
string
"30s"
Duration to sleep for before exiting, to ensure metrics are scraped.
config.ingester.lifecycler.join_after
string
"10s"
We don't want to join immediately, but wait a bit to see other ingesters and their tokens first. It can take a while to have the full picture when using gossip
config.ingester.lifecycler.observe_period
string
"10s"
To avoid generating same tokens by multiple ingesters, they can "observe" the ring for a while, after putting their own tokens into it. This is only useful when using gossip, since multiple ingesters joining at the same time can have conflicting tokens if they don't see each other yet.
config.ingester.lifecycler.ring.kvstore.store
string
"memberlist"
config.ingester.lifecycler.ring.replication_factor
int
3
Ingester replication factor per default is 3
config.ingester_client.grpc_client_config.max_recv_msg_size
int
10485760
config.ingester_client.grpc_client_config.max_send_msg_size
int
10485760
config.limits.enforce_metric_name
bool
true
Enforce that every sample has a metric name
config.limits.max_query_lookback
string
"0s"
config.limits.reject_old_samples
bool
true
config.limits.reject_old_samples_max_age
string
"168h"
config.memberlist.bind_port
int
7946
config.memberlist.join_members
list
["{{ include \"cortex.fullname\" $ }}-memberlist"]
the service name of the memberlist if using memberlist discovery
config.querier.active_query_tracker_dir
string
"/data/active-query-tracker"
config.querier.store_gateway_addresses
string
automatic
Comma-separated list of store-gateway addresses in DNS Service Discovery format. This option is set automatically when using the blocks storage and store-gateway sharding is disabled (when sharding is enabled, the store-gateway instances form a ring and addresses are picked from the ring).
config.query_range.align_queries_with_step
bool
false
config.query_range.cache_results
bool
true
config.query_range.results_cache.cache.memcached.expiration
string
"1h"
config.query_range.results_cache.cache.memcached_client.timeout
string
"1s"
config.query_range.split_queries_by_interval
string
"24h"
config.ruler.enable_alertmanager_discovery
bool
false
config.ruler.enable_api
bool
true
Enable the experimental ruler config api.
config.runtime_config.file
string
"/etc/cortex-runtime-config/runtime_config.yaml"
config.server.grpc_listen_port
int
9095
config.server.grpc_server_max_concurrent_streams
int
10000
config.server.grpc_server_max_recv_msg_size
int
10485760
config.server.grpc_server_max_send_msg_size
int
10485760
config.server.http_listen_port
int
8080
config.store_gateway
object
{"sharding_enabled":false}
https://cortexmetrics.io/docs/configuration/configuration-file/#store_gateway_config
distributor.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].key
string
"app.kubernetes.io/component"
distributor.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].operator
string
"In"
distributor.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].values[0]
string
"distributor"
distributor.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey
string
"kubernetes.io/hostname"
distributor.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight
int
100
distributor.annotations
object
{}
distributor.autoscaling.behavior
object
{}
Ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior
distributor.autoscaling.enabled
bool
false
Creates a HorizontalPodAutoscaler for the distributor pods.
distributor.autoscaling.extraMetrics
list
[]
Optional custom and external metrics for the distributor pods to scale on. To use this option, define a list of metric specifications following https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics and https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects
distributor.autoscaling.maxReplicas
int
30
distributor.autoscaling.minReplicas
int
2
distributor.autoscaling.targetCPUUtilizationPercentage
int
80
distributor.autoscaling.targetMemoryUtilizationPercentage
int
0
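As a sketch of extraMetrics usage, the following adds a Pods-type custom metric to the distributor HorizontalPodAutoscaler. This assumes a custom metrics adapter (e.g. prometheus-adapter) is installed and exposes the named metric; the metric name and target value are illustrative:

```yaml
distributor:
  autoscaling:
    enabled: true
    extraMetrics:
      - type: Pods
        pods:
          metric:
            name: cortex_distributor_inflight_push_requests  # assumes this is exposed via a metrics adapter
          target:
            type: AverageValue
            averageValue: "100"   # illustrative target per pod
```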
distributor.containerSecurityContext.enabled
bool
true
distributor.containerSecurityContext.readOnlyRootFilesystem
bool
true
distributor.enabled
bool
true
distributor.env
list
[]
distributor.extraArgs
object
{}
Additional Cortex container arguments, e.g. log.level (debug, info, warn, error)
distributor.extraContainers
list
[]
distributor.extraPorts
list
[]
distributor.extraVolumeMounts
list
[]
distributor.extraVolumes
list
[]
distributor.initContainers
list
[]
distributor.lifecycle
object
{}
distributor.livenessProbe.httpGet.path
string
"/ready"
distributor.livenessProbe.httpGet.port
string
"http-metrics"
distributor.nodeSelector
object
{}
distributor.persistentVolume.subPath
string
nil
distributor.podAnnotations
object
{"prometheus.io/port":"8080","prometheus.io/scrape":"true"}
Pod Annotations
distributor.podDisruptionBudget.maxUnavailable
string
"30%"
distributor.podLabels
object
{}
Pod Labels
distributor.readinessProbe.httpGet.path
string
"/ready"
distributor.readinessProbe.httpGet.port
string
"http-metrics"
distributor.replicas
int
2
distributor.resources
object
{}
distributor.securityContext
object
{}
distributor.service.annotations
object
{}
distributor.service.labels
object
{}
distributor.serviceAccount.name
string
""
"" disables the individual serviceAccount and uses the global serviceAccount for that component
distributor.serviceMonitor.additionalLabels
object
{}
distributor.serviceMonitor.enabled
bool
false
distributor.serviceMonitor.extraEndpointSpec
object
{}
Additional endpoint configuration https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#endpoint
distributor.serviceMonitor.metricRelabelings
list
[]
distributor.serviceMonitor.podTargetLabels
list
[]
distributor.serviceMonitor.relabelings
list
[]
distributor.startupProbe.failureThreshold
int
10
distributor.startupProbe.httpGet.path
string
"/ready"
distributor.startupProbe.httpGet.port
string
"http-metrics"
distributor.strategy.rollingUpdate.maxSurge
int
0
distributor.strategy.rollingUpdate.maxUnavailable
int
1
distributor.strategy.type
string
"RollingUpdate"
distributor.terminationGracePeriodSeconds
int
60
distributor.tolerations
list
[]
distributor.topologySpreadConstraints
list
[]
externalConfigSecretName
string
"secret-with-config.yaml"
externalConfigVersion
string
"0"
image.pullPolicy
string
"IfNotPresent"
image.pullSecrets
list
[]
Optionally specify an array of imagePullSecrets. Secrets must be manually created in the namespace. ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
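A sketch of referencing a pre-created registry secret (the secret name is hypothetical and must already exist in the release namespace):

```yaml
image:
  pullSecrets:
    - name: my-registry-credentials  # hypothetical; create beforehand, e.g. with `kubectl create secret docker-registry`
```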
image.repository
string
"quay.io/cortexproject/cortex"
image.tag
string
""
Allows you to override the cortex version in this chart. Use at your own risk.
ingester.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].key
string
"app.kubernetes.io/component"
ingester.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].operator
string
"In"
ingester.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].values[0]
string
"ingester"
ingester.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].values[1]
string
"querier"
ingester.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey
string
"kubernetes.io/hostname"
ingester.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight
int
100
ingester.annotations
object
{}
ingester.autoscaling.behavior.scaleDown.policies
list
[{"periodSeconds":1800,"type":"Pods","value":1}]
see https://cortexmetrics.io/docs/guides/ingesters-scaling-up-and-down/#scaling-down for scaledown details
ingester.autoscaling.behavior.scaleDown.stabilizationWindowSeconds
int
3600
uses metrics from the past 1h to make scaleDown decisions
ingester.autoscaling.behavior.scaleUp.policies
list
[{"periodSeconds":1800,"type":"Pods","value":1}]
This default scaleup policy allows adding 1 pod every 30 minutes. Ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior
ingester.autoscaling.enabled
bool
false
ingester.autoscaling.extraMetrics
list
[]
Optional custom and external metrics for the ingester pods to scale on. To use this option, define a list of metric specifications following https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics and https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects
ingester.autoscaling.maxReplicas
int
30
ingester.autoscaling.minReplicas
int
3
ingester.autoscaling.targetMemoryUtilizationPercentage
int
80
ingester.containerSecurityContext.enabled
bool
true
ingester.containerSecurityContext.readOnlyRootFilesystem
bool
true
ingester.enabled
bool
true
ingester.env
list
[]
ingester.extraArgs
object
{}
Additional Cortex container arguments, e.g. log.level (debug, info, warn, error)
ingester.extraContainers
list
[]
ingester.extraPorts
list
[]
ingester.extraVolumeMounts
list
[]
ingester.extraVolumes
list
[]
ingester.initContainers
list
[]
ingester.lifecycle.preStop
object
{"httpGet":{"path":"/ingester/shutdown","port":"http-metrics"}}
The /shutdown preStop hook is recommended as part of the ingester scaledown process, but can be removed to optimize rolling restarts in instances that will never be scaled down. https://cortexmetrics.io/docs/guides/ingesters-scaling-up-and-down/#scaling-down
ingester.livenessProbe
object
{}
Startup/liveness probes for ingesters are not recommended. Ref: https://cortexmetrics.io/docs/guides/running-cortex-on-kubernetes/#take-extra-care-with-ingesters
ingester.nodeSelector
object
{}
ingester.persistentVolume.accessModes
list
["ReadWriteOnce"]
Ingester data Persistent Volume access modes. Must match those of the existing PV or dynamic provisioner. Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
ingester.persistentVolume.annotations
object
{}
Ingester data Persistent Volume Claim annotations
ingester.persistentVolume.enabled
bool
true
If true and ingester.statefulSet.enabled is true, the Ingester will create/use a Persistent Volume Claim. If false, an emptyDir is used.
ingester.persistentVolume.retentionPolicy
object
{}
StatefulSetAutoDeletePVC feature https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-retention
ingester.persistentVolume.size
string
"2Gi"
Ingester data Persistent Volume size
ingester.persistentVolume.storageClass
string
nil
Ingester data Persistent Volume Storage Class. If defined, the given storageClassName is used. If set to "-", storageClassName: "" is set, which disables dynamic provisioning. If undefined (the default) or set to null, no storageClassName spec is set and the default provisioner is chosen.
ingester.persistentVolume.subPath
string
""
Subdirectory of the Ingester data Persistent Volume to mount. Useful if the volume's root directory is not empty.
ingester.podAnnotations
object
{"prometheus.io/port":"8080","prometheus.io/scrape":"true"}
Pod Annotations
ingester.podDisruptionBudget.maxUnavailable
int
1
ingester.podLabels
object
{}
Pod Labels
ingester.readinessProbe.httpGet.path
string
"/ready"
ingester.readinessProbe.httpGet.port
string
"http-metrics"
ingester.replicas
int
3
ingester.resources
object
{}
ingester.securityContext
object
{}
ingester.service.annotations
object
{}
ingester.service.labels
object
{}
ingester.serviceAccount.name
string
nil
ingester.serviceMonitor.additionalLabels
object
{}
ingester.serviceMonitor.enabled
bool
false
ingester.serviceMonitor.extraEndpointSpec
object
{}
Additional endpoint configuration https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#endpoint
ingester.serviceMonitor.metricRelabelings
list
[]
ingester.serviceMonitor.podTargetLabels
list
[]
ingester.serviceMonitor.relabelings
list
[]
ingester.startupProbe
object
{}
Startup/liveness probes for ingesters are not recommended. Ref: https://cortexmetrics.io/docs/guides/running-cortex-on-kubernetes/#take-extra-care-with-ingesters
ingester.statefulSet.enabled
bool
false
If true, use a statefulset instead of a deployment for pod management. This is useful when using WAL
ingester.statefulSet.podManagementPolicy
string
"OrderedReady"
ref: https://cortexmetrics.io/docs/guides/ingesters-scaling-up-and-down/#scaling-down and https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies for scaledown details
ingester.statefulStrategy.type
string
"RollingUpdate"
ingester.strategy.rollingUpdate.maxSurge
int
0
ingester.strategy.rollingUpdate.maxUnavailable
int
1
ingester.strategy.type
string
"RollingUpdate"
ingester.terminationGracePeriodSeconds
int
240
ingester.tolerations
list
[]
ingester.topologySpreadConstraints
list
[]
ingress.annotations
object
{}
ingress.enabled
bool
false
ingress.hosts[0].host
string
"chart-example.local"
ingress.hosts[0].paths[0]
string
"/"
ingress.ingressClass.enabled
bool
false
ingress.ingressClass.name
string
"nginx"
ingress.tls
list
[]
memberlist.service.annotations
object
{}
memberlist.service.labels
object
{}
memcached-blocks-index.architecture
string
"high-availability"
memcached-blocks-index.args
list
["-m 1024"]
Command line argument supplied to memcached
memcached-blocks-index.args[0]
string
"-m 1024"
The amount of memory allocated to memcached for object storage
memcached-blocks-index.disableValidation
bool
false
Bypass validation of the memcached configuration in case a custom image is in use
memcached-blocks-index.enabled
bool
true
Enables support for block index caching
memcached-blocks-index.image.repository
string
"memcached"
memcached-blocks-index.image.tag
string
"1.6.40"
memcached-blocks-index.metrics.enabled
bool
true
memcached-blocks-index.metrics.image.repository
string
"prom/memcached-exporter"
memcached-blocks-index.metrics.image.tag
string
"v0.15.5"
memcached-blocks-index.metrics.serviceMonitor.enabled
bool
false
memcached-blocks-index.replicaCount
int
2
memcached-blocks-index.resources
object
{}
memcached-blocks-index.service.clusterIP
string
"None"
memcached-blocks-metadata.architecture
string
"high-availability"
memcached-blocks-metadata.args
list
["-m 1024"]
Command line argument supplied to memcached
memcached-blocks-metadata.args[0]
string
"-m 1024"
The amount of memory allocated to memcached for object storage
memcached-blocks-metadata.disableValidation
bool
false
Bypass validation of the memcached configuration in case a custom image is in use
memcached-blocks-metadata.enabled
bool
true
Enables support for block metadata caching
memcached-blocks-metadata.image.repository
string
"memcached"
memcached-blocks-metadata.image.tag
string
"1.6.40"
memcached-blocks-metadata.metrics.enabled
bool
true
memcached-blocks-metadata.metrics.image.repository
string
"prom/memcached-exporter"
memcached-blocks-metadata.metrics.image.tag
string
"v0.15.5"
memcached-blocks-metadata.metrics.serviceMonitor.enabled
bool
false
memcached-blocks-metadata.replicaCount
int
2
memcached-blocks-metadata.resources
object
{}
memcached-blocks-metadata.service.clusterIP
string
"None"
memcached-blocks.architecture
string
"high-availability"
memcached-blocks.args
list
["-m 1024"]
Command line argument supplied to memcached
memcached-blocks.args[0]
string
"-m 1024"
The amount of memory allocated to memcached for object storage
memcached-blocks.disableValidation
bool
false
Bypass validation of the memcached configuration in case a custom image is in use
memcached-blocks.enabled
bool
true
Enables support for block caching
memcached-blocks.image.repository
string
"memcached"
memcached-blocks.image.tag
string
"1.6.40"
memcached-blocks.metrics.enabled
bool
true
memcached-blocks.metrics.image.repository
string
"prom/memcached-exporter"
memcached-blocks.metrics.image.tag
string
"v0.15.5"
memcached-blocks.metrics.serviceMonitor.enabled
bool
false
memcached-blocks.replicaCount
int
2
memcached-blocks.resources
object
{}
memcached-blocks.service.clusterIP
string
"None"
memcached-frontend.architecture
string
"high-availability"
memcached-frontend.args
list
["-m 1024"]
Command line argument supplied to memcached
memcached-frontend.args[0]
string
"-m 1024"
The amount of memory allocated to memcached for object storage
memcached-frontend.disableValidation
bool
false
Bypass validation of the memcached configuration in case a custom image is in use
memcached-frontend.enabled
bool
true
Enables support for caching queries in the frontend
memcached-frontend.image.repository
string
"memcached"
memcached-frontend.image.tag
string
"1.6.40"
memcached-frontend.metrics.enabled
bool
true
memcached-frontend.metrics.image.repository
string
"prom/memcached-exporter"
memcached-frontend.metrics.image.tag
string
"v0.15.5"
memcached-frontend.metrics.serviceMonitor.enabled
bool
false
memcached-frontend.replicaCount
int
2
memcached-frontend.resources
object
{}
memcached-frontend.service.clusterIP
string
"None"
nginx.affinity
object
{}
nginx.annotations
object
{}
nginx.autoscaling.behavior
object
{}
Ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior
nginx.autoscaling.enabled
bool
false
Creates a HorizontalPodAutoscaler for the nginx pods.
nginx.autoscaling.extraMetrics
list
[]
Optional custom and external metrics for the nginx pods to scale on. To use this option, define a list of metric specifications following https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics and https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects
nginx.autoscaling.maxReplicas
int
30
nginx.autoscaling.minReplicas
int
2
nginx.autoscaling.targetCPUUtilizationPercentage
int
80
nginx.autoscaling.targetMemoryUtilizationPercentage
int
0
nginx.config.auth_orgs
list
[]
Optional list of auth tenants to set in the nginx config
nginx.config.basicAuthSecretName
string
""
Optional name of basic auth secret. In order to use this option, a secret with htpasswd formatted contents at the key ".htpasswd" must exist. For example:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace:
stringData:
  .htpasswd:
```
nginx.config.client_max_body_size
string
"1M"
ref: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size
nginx.config.dnsResolver
string
"kube-dns.kube-system.svc.cluster.local"
nginx.config.dnsTTL
string
"15s"
Adds the valid parameter to the nginx resolver directive so that names are re-resolved every dnsTTL
nginx.config.httpSnippet
string
""
arbitrary snippet to inject in the http { } section of the nginx config
nginx.config.mainSnippet
string
""
arbitrary snippet to inject in the top section of the nginx config
nginx.config.override_push_endpoint
string
""
nginx.config.serverSnippet
string
""
arbitrary snippet to inject in the server { } section of the nginx config
nginx.config.setHeaders
object
{}
nginx.config.upstream_protocol
string
"http"
protocol for the communication with the upstream
nginx.config.verboseLogging
bool
true
Enables all access logs from nginx, otherwise ignores 2XX and 3XX status codes
nginx.containerSecurityContext.enabled
bool
true
nginx.containerSecurityContext.readOnlyRootFilesystem
bool
false
nginx.enabled
bool
true
nginx.env
list
[]
nginx.extraArgs
object
{}
Additional Cortex container arguments, e.g. log.level (debug, info, warn, error)
nginx.extraContainers
list
[]
nginx.extraPorts
list
[]
nginx.extraVolumeMounts
list
[]
nginx.extraVolumes
list
[]
nginx.http_listen_port
int
80
nginx.image.pullPolicy
string
"IfNotPresent"
nginx.image.repository
string
"nginx"
nginx.image.tag
float
1.29
nginx.initContainers
list
[]
nginx.livenessProbe.httpGet.path
string
"/healthz"
nginx.livenessProbe.httpGet.port
string
"http-metrics"
nginx.nodeSelector
object
{}
nginx.persistentVolume.subPath
string
nil
nginx.podAnnotations
object
{}
Pod Annotations
nginx.podDisruptionBudget.maxUnavailable
string
"30%"
nginx.podLabels
object
{}
Pod Labels
nginx.readinessProbe.httpGet.path
string
"/healthz"
nginx.readinessProbe.httpGet.port
string
"http-metrics"
nginx.replicas
int
2
nginx.resources
object
{}
nginx.securityContext
object
{}
nginx.service.annotations
object
{}
nginx.service.labels
object
{}
nginx.service.port
string
""
Replaces default port value from nginx.http_listen_port when set
nginx.service.type
string
"ClusterIP"
nginx.serviceAccount.name
string
""
"" disables the individual serviceAccount and uses the global serviceAccount for that component
nginx.startupProbe.failureThreshold
int
10
nginx.startupProbe.httpGet.path
string
"/healthz"
nginx.startupProbe.httpGet.port
string
"http-metrics"
nginx.strategy.rollingUpdate.maxSurge
int
0
nginx.strategy.rollingUpdate.maxUnavailable
int
1
nginx.strategy.type
string
"RollingUpdate"
nginx.terminationGracePeriodSeconds
int
10
nginx.tolerations
list
[]
nginx.topologySpreadConstraints
list
[]
overrides_exporter.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].key
string
"app.kubernetes.io/component"
overrides_exporter.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].operator
string
"In"
overrides_exporter.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].values[0]
string
"overrides-exporter"
overrides_exporter.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey
string
"kubernetes.io/hostname"
overrides_exporter.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight
int
100
overrides_exporter.annotations
object
{}
overrides_exporter.containerSecurityContext.enabled
bool
true
overrides_exporter.containerSecurityContext.readOnlyRootFilesystem
bool
true
overrides_exporter.enabled
bool
false
https://cortexmetrics.io/docs/guides/overrides-exporter/
overrides_exporter.env
list
[]
overrides_exporter.extraArgs
object
{}
Additional Cortex container arguments, e.g. log.level (debug, info, warn, error)
overrides_exporter.extraContainers
list
[]
overrides_exporter.extraPorts
list
[]
overrides_exporter.extraVolumeMounts
list
[]
overrides_exporter.extraVolumes
list
[]
overrides_exporter.initContainers
list
[]
overrides_exporter.lifecycle
object
{}
overrides_exporter.livenessProbe.httpGet.path
string
"/ready"
overrides_exporter.livenessProbe.httpGet.port
string
"http-metrics"
overrides_exporter.nodeSelector
object
{}
overrides_exporter.podAnnotations
object
{"prometheus.io/port":"http-metrics","prometheus.io/scrape":"true"}
Pod Annotations
overrides_exporter.podDisruptionBudget.maxUnavailable
string
"30%"
overrides_exporter.podLabels
object
{}
Pod Labels
overrides_exporter.readinessProbe.httpGet.path
string
"/ready"
overrides_exporter.readinessProbe.httpGet.port
string
"http-metrics"
overrides_exporter.replicas
int
1
overrides_exporter.resources
object
{}
overrides_exporter.securityContext
object
{}
overrides_exporter.service.annotations
object
{}
overrides_exporter.service.labels
object
{}
overrides_exporter.serviceMonitor.additionalLabels
object
{}
overrides_exporter.serviceMonitor.enabled
bool
false
overrides_exporter.serviceMonitor.extraEndpointSpec
object
{}
Additional endpoint configuration https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#endpoint
overrides_exporter.serviceMonitor.metricRelabelings
list
[]
overrides_exporter.serviceMonitor.podTargetLabels
list
[]
overrides_exporter.serviceMonitor.relabelings
list
[]
overrides_exporter.startupProbe.failureThreshold
int
10
overrides_exporter.startupProbe.httpGet.path
string
"/ready"
overrides_exporter.startupProbe.httpGet.port
string
"http-metrics"
overrides_exporter.strategy.rollingUpdate.maxSurge
int
0
overrides_exporter.strategy.rollingUpdate.maxUnavailable
int
1
overrides_exporter.strategy.type
string
"RollingUpdate"
overrides_exporter.terminationGracePeriodSeconds
int
180
overrides_exporter.tolerations
list
[]
overrides_exporter.topologySpreadConstraints
list
[]
parquet_converter.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].key
string
"app.kubernetes.io/component"
parquet_converter.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].operator
string
"In"
parquet_converter.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].values[0]
string
"parquet-converter"
parquet_converter.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey
string
"kubernetes.io/hostname"
parquet_converter.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight
int
100
parquet_converter.annotations
object
{}
parquet_converter.containerSecurityContext.enabled
bool
true
parquet_converter.containerSecurityContext.readOnlyRootFilesystem
bool
true
parquet_converter.enabled
bool
false
https://cortexmetrics.io/docs/guides/parquet-mode/
parquet_converter.env
list
[]
parquet_converter.extraArgs
object
{}
Additional Cortex container arguments, e.g. log.level (debug, info, warn, error)
parquet_converter.extraContainers
list
[]
parquet_converter.extraPorts
list
[]
parquet_converter.extraVolumeMounts
list
[]
parquet_converter.extraVolumes
list
[]
parquet_converter.initContainers
list
[]
parquet_converter.lifecycle
object
{}
parquet_converter.livenessProbe.httpGet.path
string
"/ready"
parquet_converter.livenessProbe.httpGet.port
string
"http-metrics"
parquet_converter.nodeSelector
object
{}
parquet_converter.podAnnotations
object
{"prometheus.io/port":"http-metrics","prometheus.io/scrape":"true"}
Pod Annotations
parquet_converter.podLabels
object
{}
Pod Labels
parquet_converter.readinessProbe.httpGet.path
string
"/ready"
parquet_converter.readinessProbe.httpGet.port
string
"http-metrics"
parquet_converter.replicas
int
1
parquet_converter.resources
object
{}
parquet_converter.securityContext
object
{}
parquet_converter.service.annotations
object
{}
parquet_converter.service.labels
object
{}
parquet_converter.serviceMonitor.additionalLabels
object
{}
parquet_converter.serviceMonitor.enabled
bool
false
parquet_converter.serviceMonitor.extraEndpointSpec
object
{}
Additional endpoint configuration https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#endpoint
parquet_converter.serviceMonitor.metricRelabelings
list
[]
parquet_converter.serviceMonitor.podTargetLabels
list
[]
parquet_converter.serviceMonitor.relabelings
list
[]
parquet_converter.startupProbe.failureThreshold
int
10
parquet_converter.startupProbe.httpGet.path
string
"/ready"
parquet_converter.startupProbe.httpGet.port
string
"http-metrics"
parquet_converter.strategy.rollingUpdate.maxSurge
int
0
parquet_converter.strategy.rollingUpdate.maxUnavailable
int
1
parquet_converter.strategy.type
string
"RollingUpdate"
parquet_converter.terminationGracePeriodSeconds
int
180
parquet_converter.tolerations
list
[]
parquet_converter.topologySpreadConstraints
list
[]
purger.affinity
object
{}
purger.annotations
object
{}
purger.containerSecurityContext.enabled
bool
true
purger.containerSecurityContext.readOnlyRootFilesystem
bool
true
purger.enabled
bool
false
purger.env
list
[]
Extra env variables to pass to the cortex container
purger.extraArgs
object
{}
Additional Cortex container arguments, e.g. log.level (debug, info, warn, error)
purger.extraContainers
list
[]
Additional containers to be added to the cortex pod.
purger.extraPorts
list
[]
Additional ports to the cortex services. Useful to expose extra container ports.
purger.extraVolumeMounts
list
[]
Extra volume mounts that will be added to the cortex container
purger.extraVolumes
list
[]
Additional volumes to the cortex pod.
purger.initContainers
list
[]
Init containers to be added to the cortex pod.
purger.lifecycle
object
{}
purger.livenessProbe.httpGet.path
string
"/ready"
purger.livenessProbe.httpGet.port
string
"http-metrics"
purger.livenessProbe.httpGet.scheme
string
"HTTP"
purger.nodeSelector
object
{}
purger.podAnnotations."prometheus.io/port"
string
"8080"
purger.podAnnotations."prometheus.io/scrape"
string
"true"
purger.podLabels
object
{}
purger.readinessProbe.httpGet.path
string
"/ready"
purger.readinessProbe.httpGet.port
string
"http-metrics"
purger.replicas
int
1
purger.resources
object
{}
purger.securityContext
object
{}
purger.service.annotations
object
{}
purger.service.labels
object
{}
purger.serviceAccount.name
string
""
purger.serviceMonitor.additionalLabels
object
{}
purger.serviceMonitor.enabled
bool
false
purger.serviceMonitor.extraEndpointSpec
object
{}
purger.serviceMonitor.metricRelabelings
list
[]
purger.serviceMonitor.podTargetLabels
list
[]
purger.serviceMonitor.relabelings
list
[]
purger.startupProbe.failureThreshold
int
60
purger.startupProbe.httpGet.path
string
"/ready"
purger.startupProbe.httpGet.port
string
"http-metrics"
purger.startupProbe.httpGet.scheme
string
"HTTP"
purger.startupProbe.initialDelaySeconds
int
120
purger.startupProbe.periodSeconds
int
30
purger.strategy.type
string
"RollingUpdate"
purger.terminationGracePeriodSeconds
int
60
purger.topologySpreadConstraints
list
[]
querier.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].key
string
"app.kubernetes.io/component"
querier.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].operator
string
"In"
querier.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].values[0]
string
"ingester"
querier.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].values[1]
string
"querier"
querier.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey
string
"kubernetes.io/hostname"
querier.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight
int
100
querier.annotations
object
{}
querier.autoscaling.behavior
object
{}
Ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior
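For example, a scaling behavior following the linked Kubernetes documentation (the values below are illustrative, not recommendations):

```yaml
querier:
  autoscaling:
    enabled: true
    behavior:
      scaleDown:
        # Wait 5 minutes of stable metrics before scaling down
        stabilizationWindowSeconds: 300
        policies:
          - type: Pods
            value: 1
            periodSeconds: 60
```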
querier.autoscaling.enabled
bool
false
Creates a HorizontalPodAutoscaler for the querier pods.
querier.autoscaling.extraMetrics
list
[]
Optional custom and external metrics for the querier pods to scale on. To use this option, define a list of metric specifications following https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics and https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects
querier.autoscaling.maxReplicas
int
30
querier.autoscaling.minReplicas
int
2
querier.autoscaling.targetCPUUtilizationPercentage
int
80
querier.autoscaling.targetMemoryUtilizationPercentage
int
0
querier.containerSecurityContext.enabled
bool
true
querier.containerSecurityContext.readOnlyRootFilesystem
bool
true
querier.enabled
bool
true
querier.env
list
[]
querier.extraArgs
object
{}
Additional Cortex container arguments, e.g. log.level (debug, info, warn, error)
querier.extraContainers
list
[]
querier.extraPorts
list
[]
querier.extraVolumeMounts
list
[]
querier.extraVolumes
list
[]
querier.initContainers
list
[]
querier.lifecycle
object
{}
querier.livenessProbe.httpGet.path
string
"/ready"
querier.livenessProbe.httpGet.port
string
"http-metrics"
querier.nodeSelector
object
{}
querier.persistentVolume.subPath
string
nil
querier.podAnnotations
object
{"prometheus.io/port":"8080","prometheus.io/scrape":"true"}
Pod Annotations
querier.podDisruptionBudget.maxUnavailable
string
"30%"
querier.podLabels
object
{}
Pod Labels
querier.readinessProbe.httpGet.path
string
"/ready"
querier.readinessProbe.httpGet.port
string
"http-metrics"
querier.replicas
int
2
querier.resources
object
{}
querier.securityContext
object
{}
querier.service.annotations
object
{}
querier.service.labels
object
{}
querier.serviceAccount.name
string
""
"" disables the individual serviceAccount and uses the global serviceAccount for that component
querier.serviceMonitor.additionalLabels
object
{}
querier.serviceMonitor.enabled
bool
false
querier.serviceMonitor.extraEndpointSpec
object
{}
Additional endpoint configuration https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#endpoint
querier.serviceMonitor.metricRelabelings
list
[]
querier.serviceMonitor.podTargetLabels
list
[]
querier.serviceMonitor.relabelings
list
[]
querier.startupProbe.failureThreshold
int
10
querier.startupProbe.httpGet.path
string
"/ready"
querier.startupProbe.httpGet.port
string
"http-metrics"
querier.strategy.rollingUpdate.maxSurge
int
0
querier.strategy.rollingUpdate.maxUnavailable
int
1
querier.strategy.type
string
"RollingUpdate"
querier.terminationGracePeriodSeconds
int
180
querier.tolerations
list
[]
querier.topologySpreadConstraints
list
[]
query_frontend.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].key
string
"app.kubernetes.io/component"
query_frontend.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].operator
string
"In"
query_frontend.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].values[0]
string
"query-frontend"
query_frontend.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey
string
"kubernetes.io/hostname"
query_frontend.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight
int
100
query_frontend.annotations
object
{}
query_frontend.containerSecurityContext.enabled
bool
true
query_frontend.containerSecurityContext.readOnlyRootFilesystem
bool
true
query_frontend.enabled
bool
true
query_frontend.env
list
[]
query_frontend.extraArgs
object
{}
Additional Cortex container arguments, e.g. log.level (debug, info, warn, error)
query_frontend.extraContainers
list
[]
query_frontend.extraPorts
list
[]
query_frontend.extraVolumeMounts
list
[]
query_frontend.extraVolumes
list
[]
query_frontend.initContainers
list
[]
query_frontend.lifecycle
object
{}
query_frontend.livenessProbe.httpGet.path
string
"/ready"
query_frontend.livenessProbe.httpGet.port
string
"http-metrics"
query_frontend.nodeSelector
object
{}
query_frontend.persistentVolume.subPath
string
nil
query_frontend.podAnnotations
object
{"prometheus.io/port":"8080","prometheus.io/scrape":"true"}
Pod Annotations
query_frontend.podDisruptionBudget.maxUnavailable
string
"30%"
query_frontend.podLabels
object
{}
Pod Labels
query_frontend.readinessProbe.httpGet.path
string
"/ready"
query_frontend.readinessProbe.httpGet.port
string
"http-metrics"
query_frontend.replicas
int
2
query_frontend.resources
object
{}
query_frontend.securityContext
object
{}
query_frontend.service.annotations
object
{}
query_frontend.service.labels
object
{}
query_frontend.serviceAccount.name
string
""
"" disables the individual serviceAccount and uses the global serviceAccount for that component
query_frontend.serviceMonitor.additionalLabels
object
{}
query_frontend.serviceMonitor.enabled
bool
false
query_frontend.serviceMonitor.extraEndpointSpec
object
{}
Additional endpoint configuration https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#endpoint
query_frontend.serviceMonitor.metricRelabelings
list
[]
query_frontend.serviceMonitor.podTargetLabels
list
[]
query_frontend.serviceMonitor.relabelings
list
[]
query_frontend.startupProbe.failureThreshold
int
10
query_frontend.startupProbe.httpGet.path
string
"/ready"
query_frontend.startupProbe.httpGet.port
string
"http-metrics"
query_frontend.strategy.rollingUpdate.maxSurge
int
0
query_frontend.strategy.rollingUpdate.maxUnavailable
int
1
query_frontend.strategy.type
string
"RollingUpdate"
query_frontend.terminationGracePeriodSeconds
int
180
query_frontend.tolerations
list
[]
query_frontend.topologySpreadConstraints
list
[]
query_scheduler.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].key
string
"app.kubernetes.io/component"
query_scheduler.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].operator
string
"In"
query_scheduler.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].values[0]
string
"query-scheduler"
query_scheduler.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey
string
"kubernetes.io/hostname"
query_scheduler.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight
int
100
query_scheduler.annotations
object
{}
query_scheduler.containerSecurityContext.enabled
bool
true
query_scheduler.containerSecurityContext.readOnlyRootFilesystem
bool
true
query_scheduler.enabled
bool
false
If true, querier and query-frontend will connect to it (requires Cortex v1.6.0+) https://cortexmetrics.io/docs/operations/scaling-query-frontend/#query-scheduler
query_scheduler.env
list
[]
query_scheduler.extraArgs
object
{}
Additional Cortex container arguments, e.g. log.level (debug, info, warn, error)
query_scheduler.extraContainers
list
[]
query_scheduler.extraPorts
list
[]
query_scheduler.extraVolumeMounts
list
[]
query_scheduler.extraVolumes
list
[]
query_scheduler.initContainers
list
[]
query_scheduler.lifecycle
object
{}
query_scheduler.livenessProbe.httpGet.path
string
"/ready"
query_scheduler.livenessProbe.httpGet.port
string
"http-metrics"
query_scheduler.nodeSelector
object
{}
query_scheduler.persistentVolume.subPath
string
nil
query_scheduler.podAnnotations
object
{"prometheus.io/port":"http-metrics","prometheus.io/scrape":"true"}
Pod Annotations
query_scheduler.podDisruptionBudget.maxUnavailable
int
1
query_scheduler.podLabels
object
{}
Pod Labels
query_scheduler.readinessProbe.httpGet.path
string
"/ready"
query_scheduler.readinessProbe.httpGet.port
string
"http-metrics"
query_scheduler.replicas
int
2
query_scheduler.resources
object
{}
query_scheduler.securityContext
object
{}
query_scheduler.service.annotations
object
{}
query_scheduler.service.labels
object
{}
query_scheduler.serviceMonitor.additionalLabels
object
{}
query_scheduler.serviceMonitor.enabled
bool
false
query_scheduler.serviceMonitor.extraEndpointSpec
object
{}
Additional endpoint configuration https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#endpoint
query_scheduler.serviceMonitor.metricRelabelings
list
[]
query_scheduler.serviceMonitor.podTargetLabels
list
[]
query_scheduler.serviceMonitor.relabelings
list
[]
query_scheduler.startupProbe.failureThreshold
int
10
query_scheduler.startupProbe.httpGet.path
string
"/ready"
query_scheduler.startupProbe.httpGet.port
string
"http-metrics"
query_scheduler.strategy.rollingUpdate.maxSurge
int
0
query_scheduler.strategy.rollingUpdate.maxUnavailable
int
1
query_scheduler.strategy.type
string
"RollingUpdate"
query_scheduler.terminationGracePeriodSeconds
int
180
query_scheduler.tolerations
list
[]
query_scheduler.topologySpreadConstraints
list
[]
ruler.affinity
object
{}
ruler.annotations
object
{}
ruler.autoscaling.behavior
object
{}
Ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior
ruler.autoscaling.enabled
bool
false
Creates a HorizontalPodAutoscaler for the ruler.
ruler.autoscaling.extraMetrics
list
[]
Optional custom and external metrics for the ruler pods to scale on. To use this option, define a list of metric specifications following https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics and https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects
ruler.autoscaling.maxReplicas
int
30
ruler.autoscaling.minReplicas
int
2
ruler.autoscaling.targetCPUUtilizationPercentage
int
80
ruler.autoscaling.targetMemoryUtilizationPercentage
int
80
ruler.containerSecurityContext.enabled
bool
true
ruler.containerSecurityContext.readOnlyRootFilesystem
bool
true
ruler.directories
object
{}
Allows configuring rules via ConfigMap. Ref: https://cortexproject.github.io/cortex-helm-chart/guides/configure_rules_via_configmap.html
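A sketch based on the linked guide, assuming directories maps tenant names to rule files (the tenant and rule names below are hypothetical; "fake" is commonly used when multi-tenancy is disabled):

```yaml
ruler:
  directories:
    # Hypothetical tenant directory containing one rule file
    fake:
      rules.yaml: |
        groups:
          - name: example
            rules:
              - alert: HighErrorRate
                expr: job:request_errors:rate5m > 0.5
                for: 10m
```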
ruler.enabled
bool
true
ruler.env
list
[]
ruler.extraArgs
object
{}
Additional Cortex container arguments, e.g. log.level (debug, info, warn, error)
ruler.extraContainers
list
[]
ruler.extraPorts
list
[]
ruler.extraVolumeMounts
list
[]
ruler.extraVolumes
list
[]
ruler.initContainers
list
[]
ruler.livenessProbe.httpGet.path
string
"/ready"
ruler.livenessProbe.httpGet.port
string
"http-metrics"
ruler.nodeSelector
object
{}
ruler.podAnnotations
object
{"prometheus.io/port":"8080","prometheus.io/scrape":"true"}
Pod Annotations
ruler.podDisruptionBudget.maxUnavailable
string
"30%"
ruler.podLabels
object
{}
Pod Labels
ruler.readinessProbe.httpGet.path
string
"/ready"
ruler.readinessProbe.httpGet.port
string
"http-metrics"
ruler.replicas
int
1
ruler.resources
object
{}
ruler.securityContext
object
{}
ruler.service.annotations
object
{}
ruler.service.labels
object
{}
ruler.serviceAccount.name
string
""
"" disables the individual serviceAccount and uses the global serviceAccount for that component
ruler.serviceMonitor.additionalLabels
object
{}
ruler.serviceMonitor.enabled
bool
false
ruler.serviceMonitor.extraEndpointSpec
object
{}
Additional endpoint configuration https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#endpoint
ruler.serviceMonitor.metricRelabelings
list
[]
ruler.serviceMonitor.podTargetLabels
list
[]
ruler.serviceMonitor.relabelings
list
[]
ruler.sidecar.containerSecurityContext.enabled
bool
true
ruler.sidecar.containerSecurityContext.readOnlyRootFilesystem
bool
true
ruler.sidecar.defaultFolderName
string
""
Optional default folder name; if set, the sidecar creates a subfolder with this name under the target folder and places rules there instead
ruler.sidecar.enableUniqueFilenames
bool
false
A value of true will produce unique filenames to avoid issues when duplicate data keys exist between ConfigMaps and/or Secrets within the same or multiple Namespaces.
ruler.sidecar.enabled
bool
false
Enables a sidecar that collects the ConfigMaps with the specified label and stores the included files in the respective folders
ruler.sidecar.folder
string
"/data/rules"
Folder where the files should be placed.
ruler.sidecar.folderAnnotation
string
"k8s-sidecar-target-directory"
The annotation the sidecar will look for in ConfigMaps and/or Secrets to override the destination folder for files. If the value is a relative path, it will be relative to FOLDER
ruler.sidecar.healthPort
int
8081
The port the kiwigrid/k8s-sidecar listens on for health checks. The image default matches the cortex default listen port (8080), so it must be overridden here.
ruler.sidecar.image.repository
string
"kiwigrid/k8s-sidecar"
ruler.sidecar.image.sha
string
""
ruler.sidecar.image.tag
string
"2.5.0"
ruler.sidecar.imagePullPolicy
string
"IfNotPresent"
ruler.sidecar.label
string
"cortex_rules"
Label that ConfigMaps containing rules are marked with
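For example, a ConfigMap the sidecar would pick up (the name and rule contents are hypothetical; the label key matches ruler.sidecar.label):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-cortex-rules     # hypothetical name
  labels:
    cortex_rules: "1"       # value is arbitrary unless ruler.sidecar.labelValue is set
data:
  example-rules.yaml: |
    groups:
      - name: example
        rules:
          - record: job:up:count
            expr: count(up) by (job)
```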
ruler.sidecar.labelValue
string
""
The label value to filter resources on. Leave empty to match any value
ruler.sidecar.readinessProbe.httpGet.path
string
"/healthz"
ruler.sidecar.readinessProbe.httpGet.port
string
"sidecar-health"
ruler.sidecar.readinessProbe.periodSeconds
int
5
ruler.sidecar.resource
string
"both"
The resource type that the operator will filter for. Can be configmap, secret or both
ruler.sidecar.resources
object
{}
ruler.sidecar.searchNamespace
string
""
The Namespace(s) from which resources will be watched. For multiple namespaces, use a comma-separated string like "default,test". If not set or set to ALL, it will watch all Namespaces.
ruler.sidecar.skipTlsVerify
bool
false
Set to true to skip TLS verification for Kubernetes API calls
ruler.sidecar.startupProbe.httpGet.path
string
"/healthz"
ruler.sidecar.startupProbe.httpGet.port
string
"sidecar-health"
ruler.sidecar.startupProbe.periodSeconds
int
5
ruler.sidecar.watchMethod
string
""
Determines how the k8s-sidecar runs. With WATCH it runs continuously like a normal operator; with LIST it gathers the currently matching ConfigMaps and Secrets, writes those files to the destination directory, and exits
ruler.startupProbe.failureThreshold
int
10
ruler.startupProbe.httpGet.path
string
"/ready"
ruler.startupProbe.httpGet.port
string
"http-metrics"
ruler.strategy.rollingUpdate.maxSurge
int
0
ruler.strategy.rollingUpdate.maxUnavailable
int
1
ruler.strategy.type
string
"RollingUpdate"
ruler.terminationGracePeriodSeconds
int
180
ruler.tolerations
list
[]
ruler.topologySpreadConstraints
list
[]
ruler.validation.enabled
bool
true
Checks that the ruler is compatible with horizontal scaling, as documented in https://cortexmetrics.io/docs/guides/ruler-sharding/. You may need to disable this if your config is compatible, but not understood by the validator.
runtimeconfigmap.annotations
object
{}
runtimeconfigmap.create
bool
true
If true, a configmap for the runtime_config will be created. If false, the configmap must already exist on the cluster or pods will fail to be created.
runtimeconfigmap.runtime_config
object
{}
https://cortexmetrics.io/docs/configuration/arguments/#runtime-configuration-file
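For example, per-tenant limit overrides as described in the linked runtime configuration docs (the tenant ID and limit values are illustrative):

```yaml
runtimeconfigmap:
  runtime_config:
    overrides:
      tenant-a:                    # hypothetical tenant ID
        ingestion_rate: 50000
        max_series_per_metric: 100000
```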
serviceAccount.annotations
object
{}
serviceAccount.automountServiceAccountToken
bool
true
serviceAccount.create
bool
true
serviceAccount.name
string
nil
store_gateway.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].key
string
"app.kubernetes.io/component"
store_gateway.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].operator
string
"In"
store_gateway.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.labelSelector.matchExpressions[0].values[0]
string
"store-gateway"
store_gateway.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].podAffinityTerm.topologyKey
string
"kubernetes.io/hostname"
store_gateway.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight
int
100
store_gateway.annotations
object
{}
store_gateway.autoscaling.behavior
object
{}
Ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior
store_gateway.autoscaling.enabled
bool
false
store_gateway.autoscaling.extraMetrics
list
[]
Optional custom and external metrics for the store gateway pods to scale on. To use this option, define a list of metric specifications following https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics and https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects
store_gateway.autoscaling.maxReplicas
int
30
store_gateway.autoscaling.minReplicas
int
3
store_gateway.autoscaling.targetMemoryUtilizationPercentage
int
80
store_gateway.containerSecurityContext.enabled
bool
true
store_gateway.containerSecurityContext.readOnlyRootFilesystem
bool
true
store_gateway.enabled
bool
true
store_gateway.env
list
[]
store_gateway.extraArgs
object
{}
Additional Cortex container arguments, e.g. log.level (debug, info, warn, error)
store_gateway.extraContainers
list
[]
store_gateway.extraPorts
list
[]
store_gateway.extraVolumeMounts
list
[]
store_gateway.extraVolumes
list
[]
store_gateway.initContainers
list
[]
store_gateway.livenessProbe
object
{}
store_gateway.nodeSelector
object
{}
store_gateway.persistentVolume.accessModes
list
["ReadWriteOnce"]
Store-gateway data Persistent Volume access modes. Must match those of the existing PV or dynamic provisioner. Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
store_gateway.persistentVolume.annotations
object
{}
Store-gateway data Persistent Volume Claim annotations
store_gateway.persistentVolume.enabled
bool
true
If true, the store-gateway will create/use a Persistent Volume Claim; if false, an emptyDir is used
store_gateway.persistentVolume.retentionPolicy
object
{}
StatefulSetAutoDeletePVC feature https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-retention
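A sketch of a retention policy, assuming the object maps directly to the StatefulSet's persistentVolumeClaimRetentionPolicy spec (requires the StatefulSetAutoDeletePVC feature gate linked above):

```yaml
store_gateway:
  persistentVolume:
    retentionPolicy:
      whenDeleted: Retain   # keep PVCs when the StatefulSet is deleted
      whenScaled: Delete    # delete PVCs when the StatefulSet scales down
```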
store_gateway.persistentVolume.size
string
"2Gi"
Store-gateway data Persistent Volume size
store_gateway.persistentVolume.storageClass
string
nil
Store-gateway data Persistent Volume Storage Class. If defined, that storageClassName is used. If set to "-", storageClassName: "" is set, which disables dynamic provisioning. If undefined (the default) or set to null, no storageClassName spec is set, choosing the default provisioner.
store_gateway.persistentVolume.subPath
string
""
Subdirectory of the Store-gateway data Persistent Volume to mount. Useful if the volume's root directory is not empty
store_gateway.podAnnotations
object
{"prometheus.io/port":"8080","prometheus.io/scrape":"true"}
Pod Annotations
store_gateway.podDisruptionBudget.maxUnavailable
int
1
store_gateway.podLabels
object
{}
Pod Labels
store_gateway.podManagementPolicy
string
"OrderedReady"
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
store_gateway.readinessProbe.httpGet.path
string
"/ready"
store_gateway.readinessProbe.httpGet.port
string
"http-metrics"
store_gateway.replicas
int
1
store_gateway.resources
object
{}
store_gateway.securityContext
object
{}
store_gateway.service.annotations
object
{}
store_gateway.service.labels
object
{}
store_gateway.serviceAccount.name
string
""
"" disables the individual serviceAccount and uses the global serviceAccount for that component
store_gateway.serviceMonitor.additionalLabels
object
{}
store_gateway.serviceMonitor.enabled
bool
false
store_gateway.serviceMonitor.extraEndpointSpec
object
{}
Additional endpoint configuration https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#endpoint
store_gateway.serviceMonitor.metricRelabelings
list
[]
store_gateway.serviceMonitor.podTargetLabels
list
[]
store_gateway.serviceMonitor.relabelings
list
[]
store_gateway.startupProbe.failureThreshold
int
60
store_gateway.startupProbe.httpGet.path
string
"/ready"
store_gateway.startupProbe.httpGet.port
string
"http-metrics"
store_gateway.startupProbe.httpGet.scheme
string
"HTTP"
store_gateway.startupProbe.initialDelaySeconds
int
120
store_gateway.startupProbe.periodSeconds
int
30
store_gateway.strategy.type
string
"RollingUpdate"
store_gateway.terminationGracePeriodSeconds
int
240
store_gateway.tolerations
list
[]
store_gateway.topologySpreadConstraints
list
[]
useConfigMap
bool
false
useExternalConfig
bool
false