yandex_mdb_kafka_cluster (Data Source)
Get information about a Yandex Managed Kafka cluster. For more information, see the official documentation.
Important: Either `cluster_id` or `name` should be specified.
Example usage
```hcl
//
// Get information about existing MDB Kafka Cluster.
//
data "yandex_mdb_kafka_cluster" "my_cluster" {
  name = "test"
}

output "network_id" {
  value = data.yandex_mdb_kafka_cluster.my_cluster.network_id
}
```
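Since either `cluster_id` or `name` may be used for the lookup, the same data source can also be addressed by cluster ID. A minimal sketch (the ID below is a placeholder, substitute your own):

```hcl
data "yandex_mdb_kafka_cluster" "by_id" {
  # Placeholder cluster ID -- replace with a real one.
  cluster_id = "c9qXXXXXXXXXXXXXXXXX"
}

output "cluster_status" {
  value = data.yandex_mdb_kafka_cluster.by_id.status
}
```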
Schema
Optional
- `cluster_id` (String) The ID of the Kafka cluster.
- `config` (Block List, Max: 1) Configuration of the Kafka cluster. (see below for nested schema)
- `deletion_protection` (Boolean) The `true` value means that the resource is protected from accidental deletion.
- `folder_id` (String) The folder identifier that the resource belongs to. If it is not provided, the default provider `folder-id` is used.
- `name` (String) The resource name.
- `subnet_ids` (List of String) The list of identifiers of the VPC subnets to which the resource is attached.
- `topic` (Block List, Deprecated) List of Kafka topics. (see below for nested schema)
- `user` (Block Set, Deprecated) List of Kafka users. (see below for nested schema)
Read-Only
- `created_at` (String) The creation timestamp of the resource.
- `description` (String) The resource description.
- `environment` (String) Deployment environment of the Kafka cluster. Can be either `PRESTABLE` or `PRODUCTION`. The default is `PRODUCTION`.
- `health` (String) Aggregated health of the cluster. Can be either `ALIVE`, `DEGRADED`, `DEAD` or `HEALTH_UNKNOWN`. For more information, see the `health` field of the JSON representation in the official documentation.
- `host` (Set of Object) A host of the Kafka cluster. (see below for nested schema)
- `host_group_ids` (Set of String) A list of IDs of the host groups to place VMs of the cluster on.
- `id` (String) The ID of this resource.
- `labels` (Map of String) A set of key/value label pairs assigned to the resource.
- `maintenance_window` (List of Object) Maintenance policy of the Kafka cluster. (see below for nested schema)
- `network_id` (String) The VPC Network ID of the subnets to which the resource is attached.
- `security_group_ids` (Set of String) The list of security groups applied to the resource or its components.
- `status` (String) Status of the cluster. Can be either `CREATING`, `STARTING`, `RUNNING`, `UPDATING`, `STOPPING`, `STOPPED`, `ERROR` or `STATUS_UNKNOWN`. For more information, see the `status` field of the JSON representation in the official documentation.
Nested Schema for config
Required:

- `kafka` (Block List, Min: 1, Max: 1) Configuration of the Kafka subcluster. (see below for nested schema)
- `version` (String) Version of the Kafka server software.
- `zones` (List of String) List of availability zones.

Optional:

- `access` (Block List, Max: 1) Access policy to the Kafka cluster. (see below for nested schema)
- `assign_public_ip` (Boolean) Determines whether each broker will be assigned a public IP address. The default is `false`.
- `brokers_count` (Number) Count of brokers per availability zone. The default is `1`.
- `disk_size_autoscaling` (Block List, Max: 1) Disk autoscaling settings of the Kafka cluster. (see below for nested schema)
- `kafka_ui` (Block List, Max: 1) KAFKA UI settings of the Kafka cluster. (see below for nested schema)
- `kraft` (Block List, Max: 1) Configuration of the KRaft-controller subcluster. (see below for nested schema)
- `rest_api` (Block List, Max: 1) REST API settings of the Kafka cluster. (see below for nested schema)
- `schema_registry` (Boolean) Enables managed schema registry on the cluster. The default is `false`.
- `unmanaged_topics` (Boolean, Deprecated)
- `zookeeper` (Block List, Max: 1) Configuration of the ZooKeeper subcluster. (see below for nested schema)
Nested Schema for config.kafka
Required:
- `resources` (Block List, Min: 1, Max: 1) Resources allocated to hosts of the Kafka subcluster. (see below for nested schema)
Optional:
- `kafka_config` (Block List, Max: 1) User-defined settings for the Kafka cluster. For more information, see the official documentation and the Kafka documentation. (see below for nested schema)
Nested Schema for config.kafka.resources
Required:
- `disk_size` (Number) Volume of the storage available to a Kafka host, in gigabytes.
- `disk_type_id` (String) Type of the storage of Kafka hosts. For more information, see the official documentation.
- `resource_preset_id` (String) The ID of the preset for computational resources available to a Kafka host (CPU, memory, etc.). For more information, see the official documentation.
Nested Schema for config.kafka.kafka_config
Optional:
- `auto_create_topics_enable` (Boolean) Enable auto creation of topics on the server.
- `compression_type` (String) Compression type of Kafka topics.
- `default_replication_factor` (String) The replication factor for automatically created topics, and for topics created with -1 as the replication factor.
- `log_flush_interval_messages` (String) The number of messages accumulated on a log partition before messages are flushed to disk.
- `log_flush_interval_ms` (String) The maximum time in ms that a message in any topic is kept in memory before being flushed to disk. If not set, the value of `log.flush.scheduler.interval.ms` is used.
- `log_flush_scheduler_interval_ms` (String) The frequency in ms with which the log flusher checks whether any log needs to be flushed to disk.
- `log_preallocate` (Boolean, Deprecated) Should pre-allocate the file when creating a new segment.
- `log_retention_bytes` (String) The maximum size of the log before deleting it.
- `log_retention_hours` (String) The number of hours to keep a log file before deleting it, tertiary to the `log.retention.ms` property.
- `log_retention_minutes` (String) The number of minutes to keep a log file before deleting it, secondary to the `log.retention.ms` property. If not set, the value of `log.retention.hours` is used.
- `log_retention_ms` (String) The number of milliseconds to keep a log file before deleting it. If not set, the value of `log.retention.minutes` is used. If set to -1, no time limit is applied.
- `log_segment_bytes` (String) The maximum size of a single log file.
- `message_max_bytes` (String) The largest record batch size allowed by Kafka (after compression, if compression is enabled).
- `num_partitions` (String) The default number of log partitions per topic.
- `offsets_retention_minutes` (String) For subscribed consumers, the committed offset of a specific partition will be expired and discarded after this period of time.
- `replica_fetch_max_bytes` (String) The number of bytes of messages to attempt to fetch for each partition.
- `sasl_enabled_mechanisms` (Set of String) The list of SASL mechanisms enabled in the Kafka server.
- `socket_receive_buffer_bytes` (String) The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.
- `socket_send_buffer_bytes` (String) The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.
- `ssl_cipher_suites` (Set of String) A list of cipher suites.
Nested Schema for config.access
Optional:
- `data_transfer` (Boolean) Allow access for DataTransfer.
Nested Schema for config.disk_size_autoscaling
Required:
- `disk_size_limit` (Number) Maximum possible size of disk in bytes.
Optional:
- `emergency_usage_threshold` (Number) Percent of disk utilization. The disk will autoscale immediately if this threshold is reached. Value is between 0 and 100. The default value is 0 (autoscaling disabled). Must not be less than the `planned_usage_threshold` value.
- `planned_usage_threshold` (Number) Percent of disk utilization. The disk will autoscale during maintenance if this threshold is reached. Value is between 0 and 100. The default value is 0 (autoscaling disabled).
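Because `config` and its nested blocks are exposed as lists, reading a value such as the autoscaling limit from Terraform uses index `0` for each `Max: 1` block. A sketch, assuming the cluster was fetched under the name `my_cluster` as in the example above:

```hcl
output "disk_size_limit" {
  # config and disk_size_autoscaling are Max: 1 block lists,
  # so element 0 addresses the single nested block.
  value = data.yandex_mdb_kafka_cluster.my_cluster.config[0].disk_size_autoscaling[0].disk_size_limit
}
```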
Nested Schema for config.kafka_ui
Optional:
- `enabled` (Boolean) Enables KAFKA UI on the cluster. The default is `false`.
Nested Schema for config.kraft
Optional:
- `resources` (Block List, Max: 1) Resources allocated to hosts of the KRaft-controller subcluster. (see below for nested schema)
Nested Schema for config.kraft.resources
Optional:
- `disk_size` (Number) Volume of the storage available to a KRaft-controller host, in gigabytes.
- `disk_type_id` (String) Type of the storage of KRaft-controller hosts. For more information, see the official documentation.
- `resource_preset_id` (String) The ID of the preset for computational resources available to a KRaft-controller host (CPU, memory, etc.). For more information, see the official documentation.
Nested Schema for config.rest_api
Optional:
- `enabled` (Boolean) Enables REST API on the cluster. The default is `false`.
Nested Schema for config.zookeeper
Optional:
- `resources` (Block List, Max: 1) Resources allocated to hosts of the ZooKeeper subcluster. (see below for nested schema)
Nested Schema for config.zookeeper.resources
Optional:
- `disk_size` (Number) Volume of the storage available to a ZooKeeper host, in gigabytes.
- `disk_type_id` (String) Type of the storage of ZooKeeper hosts. For more information, see the official documentation.
- `resource_preset_id` (String) The ID of the preset for computational resources available to a ZooKeeper host (CPU, memory, etc.). For more information, see the official documentation.
Nested Schema for topic
Required:
- `name` (String) The name of the topic.
- `partitions` (Number) The number of the topic's partitions.
- `replication_factor` (Number) Amount of data copies (replicas) for the topic in the cluster.
Optional:
- `topic_config` (Block List, Max: 1) User-defined settings for the topic. For more information, see the official documentation and the Kafka documentation. (see below for nested schema)
Nested Schema for topic.topic_config
Optional:
- `cleanup_policy` (String) Retention policy to use on log segments.
- `compression_type` (String) Compression type of the Kafka topic.
- `delete_retention_ms` (String) The amount of time to retain delete tombstone markers for log-compacted topics.
- `file_delete_delay_ms` (String) The time to wait before deleting a file from the filesystem.
- `flush_messages` (String) This setting allows specifying an interval at which we will force an fsync of data written to the log.
- `flush_ms` (String) This setting allows specifying a time interval at which we will force an fsync of data written to the log.
- `max_message_bytes` (String) The largest record batch size allowed by Kafka (after compression, if compression is enabled).
- `min_compaction_lag_ms` (String) The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.
- `min_insync_replicas` (String) When a producer sets acks to "all" (or "-1"), this configuration specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful.
- `preallocate` (Boolean, Deprecated) True if we should preallocate the file on disk when creating a new log segment.
- `retention_bytes` (String) This configuration controls the maximum size a partition (which consists of log segments) can grow to before we discard old log segments to free up space, if we are using the "delete" retention policy.
- `retention_ms` (String) This configuration controls the maximum time we will retain a log before we discard old log segments to free up space, if we are using the "delete" retention policy.
- `segment_bytes` (String) This configuration controls the segment file size for the log.
Nested Schema for user
Required:
- `name` (String) The name of the user.
- `password` (String, Sensitive) The password of the user.
Optional:
- `permission` (Block Set) Set of permissions granted to the user. (see below for nested schema)
Nested Schema for user.permission
Required:
- `role` (String) The role type to grant to the topic.
- `topic_name` (String) The name of the topic that the permission grants access to.
Optional:
- `allow_hosts` (Set of String) Set of hosts to which this permission grants access. Only IP addresses are allowed as the value of a single host.
Nested Schema for host
Read-Only:
- `assign_public_ip` (Boolean)
- `health` (String)
- `name` (String)
- `role` (String)
- `subnet_id` (String)
- `zone_id` (String)
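Since `host` is a set of objects, a Terraform `for` expression can project out a single attribute across all hosts. A minimal sketch, assuming the cluster was fetched under the name `my_cluster` as in the example above:

```hcl
output "kafka_host_names" {
  # Collect the name of every host in the cluster into a list.
  value = [for h in data.yandex_mdb_kafka_cluster.my_cluster.host : h.name]
}
```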
Nested Schema for maintenance_window
Read-Only:
- `day` (String) Day of the week (in `DDD` format). Allowed values: `MON`, `TUE`, `WED`, `THU`, `FRI`, `SAT`, `SUN`.
- `hour` (Number) Hour of the day in UTC (in `HH` format). The allowed value is between 1 and 24.
- `type` (String) Type of the maintenance window. Can be either `ANYTIME` or `WEEKLY`. A day and hour of the window need to be specified with the weekly window.