Managed Service for Apache Kafka® API, gRPC: ClusterService
A set of methods for managing Apache Kafka® clusters.
Call | Description |
---|---|
Get | Returns the specified Apache Kafka® cluster. |
List | Retrieves the list of Apache Kafka® clusters that belong to the specified folder. |
Create | Creates a new Apache Kafka® cluster in the specified folder. |
Update | Updates the specified Apache Kafka® cluster. |
Delete | Deletes the specified Apache Kafka® cluster. |
Move | Moves the specified Apache Kafka® cluster to the specified folder. |
Start | Starts the specified Apache Kafka® cluster. |
Stop | Stops the specified Apache Kafka® cluster. |
RescheduleMaintenance | Reschedules a planned maintenance operation. |
ListLogs | Retrieves logs for the specified Apache Kafka® cluster. |
StreamLogs | Same as ListLogs but using server-side streaming. |
ListOperations | Retrieves the list of operations for the specified Apache Kafka® cluster. |
ListHosts | Retrieves a list of hosts for the specified Apache Kafka® cluster. |
Calls
Get
Returns the specified Apache Kafka® cluster.
To get the list of available Apache Kafka® clusters, make a List request.
rpc Get (GetClusterRequest) returns (Cluster)
GetClusterRequest
Field | Description |
---|---|
cluster_id | string Required. ID of the Apache Kafka® Cluster resource to return. To get the cluster ID, make a ClusterService.List request. The maximum string length in characters is 50. |
Cluster
Field | Description |
---|---|
id | string ID of the Apache Kafka® cluster. This ID is assigned at creation time. |
folder_id | string ID of the folder that the Apache Kafka® cluster belongs to. |
created_at | google.protobuf.Timestamp Creation timestamp. |
name | string Name of the Apache Kafka® cluster. The name must be unique within the folder. 1-63 characters long. Value must match the regular expression [a-zA-Z0-9_-]* . |
description | string Description of the Apache Kafka® cluster. 0-256 characters long. |
labels | map<string,string> Custom labels for the Apache Kafka® cluster as key:value pairs. A maximum of 64 labels per resource is allowed. |
environment | enum Environment Deployment environment of the Apache Kafka® cluster. |
monitoring[] | Monitoring Description of monitoring systems relevant to the Apache Kafka® cluster. |
config | ConfigSpec Configuration of the Apache Kafka® cluster. |
network_id | string ID of the network that the cluster belongs to. |
health | enum Health Aggregated cluster health. |
status | enum Status Current state of the cluster. |
security_group_ids[] | string User security groups. |
host_group_ids[] | string Host groups hosting VMs of the cluster. |
deletion_protection | bool Deletion protection inhibits deletion of the cluster. |
maintenance_window | MaintenanceWindow Window of maintenance operations. |
planned_operation | MaintenanceOperation Scheduled maintenance operation. |
Monitoring
Field | Description |
---|---|
name | string Name of the monitoring system. |
description | string Description of the monitoring system. |
link | string Link to the monitoring system charts for the Apache Kafka® cluster. |
ConfigSpec
Field | Description |
---|---|
version | string Version of Apache Kafka® used in the cluster. Possible values: 2.1, 2.6. |
kafka | Kafka Configuration and resource allocation for Kafka brokers. |
zookeeper | Zookeeper Configuration and resource allocation for ZooKeeper hosts. |
zone_id[] | string IDs of availability zones where Kafka brokers reside. |
brokers_count | google.protobuf.Int64Value The number of Kafka brokers deployed in each availability zone. |
assign_public_ip | bool The flag that defines whether a public IP address is assigned to the cluster. If the value is true, then the Apache Kafka® cluster is available on the Internet via its public IP address. |
unmanaged_topics | bool Deprecated. Allowed managing topics via the Admin API. The feature is now enabled permanently. |
schema_registry | bool Enables managed schema registry on the cluster. |
access | Access Access policy for external services. |
rest_api_config | RestAPIConfig Configuration of REST API. |
Kafka
Field | Description |
---|---|
resources | Resources Resources allocated to Kafka brokers. |
kafka_config | oneof: kafka_config_2_8 or kafka_config_3 Kafka broker configuration. |
kafka_config_2_8 | KafkaConfig2_8 Kafka broker configuration. |
kafka_config_3 | KafkaConfig3 Kafka broker configuration. |
Zookeeper
Field | Description |
---|---|
resources | Resources Resources allocated to ZooKeeper hosts. |
RestAPIConfig
Field | Description |
---|---|
enabled | bool Determines whether REST API is enabled for this cluster. |
Access
Field | Description |
---|---|
data_transfer | bool Allow access for DataTransfer. |
Resources
Field | Description |
---|---|
resource_preset_id | string ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation. |
disk_size | int64 Volume of the storage available to a host, in bytes. Must be greater than 2 * partition segment size in bytes * partitions count, so each partition can have one active segment file and one closed segment file that can be deleted. |
disk_type_id | string Type of the storage environment for the host. |
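The disk_size constraint above can be checked on the client before sending a request. A minimal sketch (the helper name is hypothetical, not part of the API):

```python
def min_disk_size(segment_bytes: int, partition_count: int) -> int:
    """Smallest disk_size (in bytes) satisfying the documented constraint:
    disk_size > 2 * partition segment size in bytes * partitions count,
    so each partition can hold one active and one closed segment file."""
    return 2 * segment_bytes * partition_count + 1

# Example: 128 MiB segments, 100 partitions.
segment_bytes = 128 * 1024 * 1024
partitions = 100
required = min_disk_size(segment_bytes, partitions)
print(required)  # 26843545601
```

Any disk_size at or below `2 * segment_bytes * partitions` would be rejected under this constraint.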
KafkaConfig2_8
Field | Description |
---|---|
compression_type | enum CompressionType Cluster topics compression type. |
log_flush_interval_messages | google.protobuf.Int64Value The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_messages setting. |
log_flush_interval_ms | google.protobuf.Int64Value The maximum time (in milliseconds) that a message in any topic is kept in memory before being flushed to disk. If not set, the value of log_flush_scheduler_interval_ms is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_ms setting. |
log_flush_scheduler_interval_ms | google.protobuf.Int64Value The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher. |
log_retention_bytes | google.protobuf.Int64Value Partition size limit; Kafka will discard old log segments to free up space if delete TopicConfig2_8.cleanup_policy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_bytes setting. |
log_retention_hours | google.protobuf.Int64Value The number of hours to keep a log segment file before deleting it. |
log_retention_minutes | google.protobuf.Int64Value The number of minutes to keep a log segment file before deleting it. If not set, the value of log_retention_hours is used. |
log_retention_ms | google.protobuf.Int64Value The number of milliseconds to keep a log segment file before deleting it. If not set, the value of log_retention_minutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_ms setting. |
log_segment_bytes | google.protobuf.Int64Value The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.segment_bytes setting. |
log_preallocate | google.protobuf.BoolValue Determines whether a file should be preallocated when a new segment is created. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.preallocate setting. |
socket_send_buffer_bytes | google.protobuf.Int64Value The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
socket_receive_buffer_bytes | google.protobuf.Int64Value The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
auto_create_topics_enable | google.protobuf.BoolValue Enables automatic topic creation on the server. |
num_partitions | google.protobuf.Int64Value Default number of partitions per topic across the whole cluster. |
default_replication_factor | google.protobuf.Int64Value Default replication factor for topics across the whole cluster. |
message_max_bytes | google.protobuf.Int64Value The largest record batch size allowed by Kafka. Default value: 1048588. |
replica_fetch_max_bytes | google.protobuf.Int64Value The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
ssl_cipher_suites[] | string A list of cipher suites. |
offsets_retention_minutes | google.protobuf.Int64Value Offset storage time after a consumer group loses all its consumers. Default: 10080. |
sasl_enabled_mechanisms[] | enum SaslMechanism The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
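The retention settings above form a fallback chain: log_retention_ms takes precedence, then log_retention_minutes, then log_retention_hours. A sketch of how a client might compute the effective value (the function is hypothetical, for illustration only):

```python
def effective_retention_ms(ms=None, minutes=None, hours=None):
    """Resolve log retention following the documented fallback chain:
    log_retention_ms, else log_retention_minutes, else log_retention_hours."""
    if ms is not None:
        return ms
    if minutes is not None:
        return minutes * 60 * 1000
    if hours is not None:
        return hours * 60 * 60 * 1000
    return None  # no explicit value: the server default applies

print(effective_retention_ms(hours=168))        # 604800000 (one week)
print(effective_retention_ms(ms=500, hours=1))  # 500: ms takes precedence
```

The same precedence applies per topic when the corresponding TopicConfig overrides are set.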
KafkaConfig3
Field | Description |
---|---|
compression_type | enum CompressionType Cluster topics compression type. |
log_flush_interval_messages | google.protobuf.Int64Value The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_messages setting. |
log_flush_interval_ms | google.protobuf.Int64Value The maximum time (in milliseconds) that a message in any topic is kept in memory before being flushed to disk. If not set, the value of log_flush_scheduler_interval_ms is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_ms setting. |
log_flush_scheduler_interval_ms | google.protobuf.Int64Value The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher. |
log_retention_bytes | google.protobuf.Int64Value Partition size limit; Kafka will discard old log segments to free up space if delete TopicConfig3.cleanup_policy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_bytes setting. |
log_retention_hours | google.protobuf.Int64Value The number of hours to keep a log segment file before deleting it. |
log_retention_minutes | google.protobuf.Int64Value The number of minutes to keep a log segment file before deleting it. If not set, the value of log_retention_hours is used. |
log_retention_ms | google.protobuf.Int64Value The number of milliseconds to keep a log segment file before deleting it. If not set, the value of log_retention_minutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_ms setting. |
log_segment_bytes | google.protobuf.Int64Value The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.segment_bytes setting. |
log_preallocate | google.protobuf.BoolValue Determines whether a file should be preallocated when a new segment is created. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.preallocate setting. |
socket_send_buffer_bytes | google.protobuf.Int64Value The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
socket_receive_buffer_bytes | google.protobuf.Int64Value The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
auto_create_topics_enable | google.protobuf.BoolValue Enables automatic topic creation on the server. |
num_partitions | google.protobuf.Int64Value Default number of partitions per topic across the whole cluster. |
default_replication_factor | google.protobuf.Int64Value Default replication factor for topics across the whole cluster. |
message_max_bytes | google.protobuf.Int64Value The largest record batch size allowed by Kafka. Default value: 1048588. |
replica_fetch_max_bytes | google.protobuf.Int64Value The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
ssl_cipher_suites[] | string A list of cipher suites. |
offsets_retention_minutes | google.protobuf.Int64Value Offset storage time after a consumer group loses all its consumers. Default: 10080. |
sasl_enabled_mechanisms[] | enum SaslMechanism The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
MaintenanceWindow
Field | Description |
---|---|
policy | oneof: anytime or weekly_maintenance_window |
anytime | AnytimeMaintenanceWindow |
weekly_maintenance_window | WeeklyMaintenanceWindow |
AnytimeMaintenanceWindow
Empty.
WeeklyMaintenanceWindow
Field | Description |
---|---|
day | enum WeekDay |
hour | int64 Hour of the day in UTC. Acceptable values are 1 to 24, inclusive. |
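The documented hour range is 1 to 24 inclusive, which can be validated client-side before submitting a window. A sketch (the function and the WeekDay string set are assumptions for illustration; the actual enum values come from the generated protobuf code):

```python
# Assumed WeekDay names; check the generated protobuf enum for the real values.
VALID_DAYS = {"MON", "TUE", "WED", "THU", "FRI", "SAT", "SUN"}

def validate_weekly_window(day: str, hour: int) -> None:
    """Validate a WeeklyMaintenanceWindow per the documented constraints."""
    if day not in VALID_DAYS:
        raise ValueError(f"unknown week day: {day}")
    if not 1 <= hour <= 24:
        raise ValueError("hour must be in 1..24 (UTC), inclusive")

validate_weekly_window("MON", 24)  # passes: 24 is within the documented range
```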
MaintenanceOperation
Field | Description |
---|---|
info | string The maximum string length in characters is 256. |
delayed_until | google.protobuf.Timestamp |
List
Retrieves the list of Apache Kafka® clusters that belong to the specified folder.
rpc List (ListClustersRequest) returns (ListClustersResponse)
ListClustersRequest
Field | Description |
---|---|
folder_id | string Required. ID of the folder to list Apache Kafka® clusters in. To get the folder ID, make a yandex.cloud.resourcemanager.v1.FolderService.List request. The maximum string length in characters is 50. |
page_size | int64 The maximum number of results per page to return. If the number of available results is larger than page_size , the service returns a ListClustersResponse.next_page_token that can be used to get the next page of results in subsequent list requests. The maximum value is 1000. |
page_token | string Page token. To get the next page of results, set page_token to the ListClustersResponse.next_page_token returned by the previous list request. The maximum string length in characters is 100. |
filter | string Filter support is not currently implemented. Any filters are ignored. The maximum string length in characters is 1000. |
ListClustersResponse
Field | Description |
---|---|
clusters[] | Cluster List of Apache Kafka® clusters. |
next_page_token | string Token that allows you to get the next page of results for list requests. If the number of results is larger than ListClustersRequest.page_size, use next_page_token as the value for the ListClustersRequest.page_token parameter in the next list request. Each subsequent list request will have its own next_page_token to continue paging through the results. |
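The page_token/next_page_token pair works the standard way: repeat the request with the returned token until it comes back empty. A sketch against a stand-in fetch function (fetch_page is hypothetical; a real client would issue ClusterService.List calls):

```python
def list_all_clusters(fetch_page, page_size=1000):
    """Drain a paginated List endpoint.

    fetch_page(page_size, page_token) must return a
    (clusters, next_page_token) pair, mirroring ListClustersResponse.
    """
    clusters, token = [], ""
    while True:
        batch, token = fetch_page(page_size, token)
        clusters.extend(batch)
        if not token:  # empty token: no more pages
            return clusters

# Stand-in "server" returning two pages of results.
pages = {"": (["c1", "c2"], "t1"), "t1": (["c3"], "")}
print(list_all_clusters(lambda size, tok: pages[tok]))  # ['c1', 'c2', 'c3']
```

Note that each response carries its own next_page_token, so the token must be replaced on every iteration rather than reused.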
Cluster
Field | Description |
---|---|
id | string ID of the Apache Kafka® cluster. This ID is assigned at creation time. |
folder_id | string ID of the folder that the Apache Kafka® cluster belongs to. |
created_at | google.protobuf.Timestamp Creation timestamp. |
name | string Name of the Apache Kafka® cluster. The name must be unique within the folder. 1-63 characters long. Value must match the regular expression [a-zA-Z0-9_-]* . |
description | string Description of the Apache Kafka® cluster. 0-256 characters long. |
labels | map<string,string> Custom labels for the Apache Kafka® cluster as key:value pairs. A maximum of 64 labels per resource is allowed. |
environment | enum Environment Deployment environment of the Apache Kafka® cluster. |
monitoring[] | Monitoring Description of monitoring systems relevant to the Apache Kafka® cluster. |
config | ConfigSpec Configuration of the Apache Kafka® cluster. |
network_id | string ID of the network that the cluster belongs to. |
health | enum Health Aggregated cluster health. |
status | enum Status Current state of the cluster. |
security_group_ids[] | string User security groups. |
host_group_ids[] | string Host groups hosting VMs of the cluster. |
deletion_protection | bool Deletion protection inhibits deletion of the cluster. |
maintenance_window | MaintenanceWindow Window of maintenance operations. |
planned_operation | MaintenanceOperation Scheduled maintenance operation. |
Monitoring
Field | Description |
---|---|
name | string Name of the monitoring system. |
description | string Description of the monitoring system. |
link | string Link to the monitoring system charts for the Apache Kafka® cluster. |
ConfigSpec
Field | Description |
---|---|
version | string Version of Apache Kafka® used in the cluster. Possible values: 2.1, 2.6. |
kafka | Kafka Configuration and resource allocation for Kafka brokers. |
zookeeper | Zookeeper Configuration and resource allocation for ZooKeeper hosts. |
zone_id[] | string IDs of availability zones where Kafka brokers reside. |
brokers_count | google.protobuf.Int64Value The number of Kafka brokers deployed in each availability zone. |
assign_public_ip | bool The flag that defines whether a public IP address is assigned to the cluster. If the value is true, then the Apache Kafka® cluster is available on the Internet via its public IP address. |
unmanaged_topics | bool Deprecated. Allowed managing topics via the Admin API. The feature is now enabled permanently. |
schema_registry | bool Enables managed schema registry on the cluster. |
access | Access Access policy for external services. |
rest_api_config | RestAPIConfig Configuration of REST API. |
Kafka
Field | Description |
---|---|
resources | Resources Resources allocated to Kafka brokers. |
kafka_config | oneof: kafka_config_2_8 or kafka_config_3 Kafka broker configuration. |
kafka_config_2_8 | KafkaConfig2_8 Kafka broker configuration. |
kafka_config_3 | KafkaConfig3 Kafka broker configuration. |
Zookeeper
Field | Description |
---|---|
resources | Resources Resources allocated to ZooKeeper hosts. |
RestAPIConfig
Field | Description |
---|---|
enabled | bool Determines whether REST API is enabled for this cluster. |
Access
Field | Description |
---|---|
data_transfer | bool Allow access for DataTransfer. |
Resources
Field | Description |
---|---|
resource_preset_id | string ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation. |
disk_size | int64 Volume of the storage available to a host, in bytes. Must be greater than 2 * partition segment size in bytes * partitions count, so each partition can have one active segment file and one closed segment file that can be deleted. |
disk_type_id | string Type of the storage environment for the host. |
KafkaConfig2_8
Field | Description |
---|---|
compression_type | enum CompressionType Cluster topics compression type. |
log_flush_interval_messages | google.protobuf.Int64Value The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_messages setting. |
log_flush_interval_ms | google.protobuf.Int64Value The maximum time (in milliseconds) that a message in any topic is kept in memory before being flushed to disk. If not set, the value of log_flush_scheduler_interval_ms is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_ms setting. |
log_flush_scheduler_interval_ms | google.protobuf.Int64Value The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher. |
log_retention_bytes | google.protobuf.Int64Value Partition size limit; Kafka will discard old log segments to free up space if delete TopicConfig2_8.cleanup_policy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_bytes setting. |
log_retention_hours | google.protobuf.Int64Value The number of hours to keep a log segment file before deleting it. |
log_retention_minutes | google.protobuf.Int64Value The number of minutes to keep a log segment file before deleting it. If not set, the value of log_retention_hours is used. |
log_retention_ms | google.protobuf.Int64Value The number of milliseconds to keep a log segment file before deleting it. If not set, the value of log_retention_minutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_ms setting. |
log_segment_bytes | google.protobuf.Int64Value The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.segment_bytes setting. |
log_preallocate | google.protobuf.BoolValue Determines whether a file should be preallocated when a new segment is created. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.preallocate setting. |
socket_send_buffer_bytes | google.protobuf.Int64Value The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
socket_receive_buffer_bytes | google.protobuf.Int64Value The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
auto_create_topics_enable | google.protobuf.BoolValue Enables automatic topic creation on the server. |
num_partitions | google.protobuf.Int64Value Default number of partitions per topic across the whole cluster. |
default_replication_factor | google.protobuf.Int64Value Default replication factor for topics across the whole cluster. |
message_max_bytes | google.protobuf.Int64Value The largest record batch size allowed by Kafka. Default value: 1048588. |
replica_fetch_max_bytes | google.protobuf.Int64Value The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
ssl_cipher_suites[] | string A list of cipher suites. |
offsets_retention_minutes | google.protobuf.Int64Value Offset storage time after a consumer group loses all its consumers. Default: 10080. |
sasl_enabled_mechanisms[] | enum SaslMechanism The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
KafkaConfig3
Field | Description |
---|---|
compression_type | enum CompressionType Cluster topics compression type. |
log_flush_interval_messages | google.protobuf.Int64Value The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_messages setting. |
log_flush_interval_ms | google.protobuf.Int64Value The maximum time (in milliseconds) that a message in any topic is kept in memory before being flushed to disk. If not set, the value of log_flush_scheduler_interval_ms is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_ms setting. |
log_flush_scheduler_interval_ms | google.protobuf.Int64Value The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher. |
log_retention_bytes | google.protobuf.Int64Value Partition size limit; Kafka will discard old log segments to free up space if delete TopicConfig3.cleanup_policy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_bytes setting. |
log_retention_hours | google.protobuf.Int64Value The number of hours to keep a log segment file before deleting it. |
log_retention_minutes | google.protobuf.Int64Value The number of minutes to keep a log segment file before deleting it. If not set, the value of log_retention_hours is used. |
log_retention_ms | google.protobuf.Int64Value The number of milliseconds to keep a log segment file before deleting it. If not set, the value of log_retention_minutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_ms setting. |
log_segment_bytes | google.protobuf.Int64Value The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.segment_bytes setting. |
log_preallocate | google.protobuf.BoolValue Determines whether a file should be preallocated when a new segment is created. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.preallocate setting. |
socket_send_buffer_bytes | google.protobuf.Int64Value The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
socket_receive_buffer_bytes | google.protobuf.Int64Value The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
auto_create_topics_enable | google.protobuf.BoolValue Enables automatic topic creation on the server. |
num_partitions | google.protobuf.Int64Value Default number of partitions per topic across the whole cluster. |
default_replication_factor | google.protobuf.Int64Value Default replication factor for topics across the whole cluster. |
message_max_bytes | google.protobuf.Int64Value The largest record batch size allowed by Kafka. Default value: 1048588. |
replica_fetch_max_bytes | google.protobuf.Int64Value The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
ssl_cipher_suites[] | string A list of cipher suites. |
offsets_retention_minutes | google.protobuf.Int64Value Offset storage time after a consumer group loses all its consumers. Default: 10080. |
sasl_enabled_mechanisms[] | enum SaslMechanism The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
MaintenanceWindow
Field | Description |
---|---|
policy | oneof: anytime or weekly_maintenance_window |
anytime | AnytimeMaintenanceWindow |
weekly_maintenance_window | WeeklyMaintenanceWindow |
AnytimeMaintenanceWindow
Empty.
WeeklyMaintenanceWindow
Field | Description |
---|---|
day | enum WeekDay |
hour | int64 Hour of the day in UTC. Acceptable values are 1 to 24, inclusive. |
MaintenanceOperation
Field | Description |
---|---|
info | string The maximum string length in characters is 256. |
delayed_until | google.protobuf.Timestamp |
Create
Creates a new Apache Kafka® cluster in the specified folder.
rpc Create (CreateClusterRequest) returns (operation.Operation)
Metadata and response of Operation:
Operation.metadata:CreateClusterMetadata
Operation.response:Cluster
CreateClusterRequest
Field | Description |
---|---|
folder_id | string Required. ID of the folder to create the Apache Kafka® cluster in. To get the folder ID, make a yandex.cloud.resourcemanager.v1.FolderService.List request. The maximum string length in characters is 50. |
name | string Required. Name of the Apache Kafka® cluster. The name must be unique within the folder. The string length in characters must be 1-63. Value must match the regular expression [a-z]([-a-z0-9]{0,61}[a-z0-9])? . |
description | string Description of the Apache Kafka® cluster. The maximum string length in characters is 256. |
labels | map<string,string> Custom labels for the Apache Kafka® cluster as key:value pairs. For example, "project": "mvp" or "source": "dictionary". No more than 64 per resource. The maximum string length in characters for each value is 63. Each value must match the regular expression [-_./\\@0-9a-z]* . The string length in characters for each key must be 1-63. Each key must match the regular expression [a-z][-_./\\@0-9a-z]* . |
environment | Cluster.Environment Deployment environment of the Apache Kafka® cluster. |
config_spec | ConfigSpec Kafka and hosts configuration for the Apache Kafka® cluster. |
topic_specs[] | TopicSpec One or more configurations of topics to be created in the Apache Kafka® cluster. |
user_specs[] | UserSpec Configurations of accounts to be created in the Apache Kafka® cluster. |
network_id | string ID of the network to create the Apache Kafka® cluster in. The maximum string length in characters is 50. |
subnet_id[] | string IDs of subnets to create brokers in. |
security_group_ids[] | string User security groups. |
host_group_ids[] | string Host groups to place the cluster's VMs on. |
deletion_protection | bool Deletion protection inhibits deletion of the cluster. |
maintenance_window | MaintenanceWindow Window of maintenance operations. |
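The name and label constraints in CreateClusterRequest can be checked locally before calling Create. A sketch using the regular expressions documented above (the validator itself is hypothetical; the server performs the authoritative checks):

```python
import re

# Patterns taken verbatim from the documented field constraints.
NAME_RE = re.compile(r"[a-z]([-a-z0-9]{0,61}[a-z0-9])?")
LABEL_KEY_RE = re.compile(r"[a-z][-_./\\@0-9a-z]*")
LABEL_VALUE_RE = re.compile(r"[-_./\\@0-9a-z]*")

def validate_create_request(name: str, labels: dict) -> None:
    """Client-side checks mirroring the CreateClusterRequest constraints."""
    if not (1 <= len(name) <= 63 and NAME_RE.fullmatch(name)):
        raise ValueError("invalid cluster name")
    if len(labels) > 64:
        raise ValueError("no more than 64 labels per resource")
    for key, value in labels.items():
        if not (1 <= len(key) <= 63 and LABEL_KEY_RE.fullmatch(key)):
            raise ValueError(f"invalid label key: {key}")
        if not (len(value) <= 63 and LABEL_VALUE_RE.fullmatch(value)):
            raise ValueError(f"invalid label value: {value}")

validate_create_request("kafka-prod-01", {"project": "mvp"})  # passes
```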
ConfigSpec
Field | Description |
---|---|
version | string Version of Apache Kafka® used in the cluster. Possible values: 2.1, 2.6. |
kafka | Kafka Configuration and resource allocation for Kafka brokers. |
zookeeper | Zookeeper Configuration and resource allocation for ZooKeeper hosts. |
zone_id[] | string IDs of availability zones where Kafka brokers reside. |
brokers_count | google.protobuf.Int64Value The number of Kafka brokers deployed in each availability zone. |
assign_public_ip | bool The flag that defines whether a public IP address is assigned to the cluster. If the value is true, then the Apache Kafka® cluster is available on the Internet via its public IP address. |
unmanaged_topics | bool Deprecated. Allowed managing topics via the Admin API. The feature is now enabled permanently. |
schema_registry | bool Enables managed schema registry on the cluster. |
access | Access Access policy for external services. |
rest_api_config | RestAPIConfig Configuration of REST API. |
Kafka
Field | Description |
---|---|
resources | Resources Resources allocated to Kafka brokers. |
kafka_config | oneof: kafka_config_2_8 or kafka_config_3 Kafka broker configuration. |
kafka_config_2_8 | KafkaConfig2_8 Kafka broker configuration. |
kafka_config_3 | KafkaConfig3 Kafka broker configuration. |
Zookeeper
Field | Description |
---|---|
resources | Resources Resources allocated to ZooKeeper hosts. |
RestAPIConfig
Field | Description |
---|---|
enabled | bool Determines whether REST API is enabled for this cluster. |
Access
Field | Description |
---|---|
data_transfer | bool Allow access for DataTransfer. |
Resources
Field | Description |
---|---|
resource_preset_id | string ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation. |
disk_size | int64 Volume of the storage available to a host, in bytes. Must be greater than 2 * partition segment size in bytes * partitions count, so each partition can have one active segment file and one closed segment file that can be deleted. |
disk_type_id | string Type of the storage environment for the host. |
KafkaConfig2_8
Field | Description |
---|---|
compression_type | enum CompressionType Cluster topics compression type. |
log_flush_interval_messages | google.protobuf.Int64Value The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_messages setting. |
log_flush_interval_ms | google.protobuf.Int64Value The maximum time (in milliseconds) that a message in any topic is kept in memory before being flushed to disk. If not set, the value of log_flush_scheduler_interval_ms is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_ms setting. |
log_flush_scheduler_interval_ms | google.protobuf.Int64Value The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher. |
log_retention_bytes | google.protobuf.Int64Value Partition size limit; Kafka will discard old log segments to free up space if delete TopicConfig2_8.cleanup_policy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_bytes setting. |
log_retention_hours | google.protobuf.Int64Value The number of hours to keep a log segment file before deleting it. |
log_retention_minutes | google.protobuf.Int64Value The number of minutes to keep a log segment file before deleting it. If not set, the value of log_retention_hours is used. |
log_retention_ms | google.protobuf.Int64Value The number of milliseconds to keep a log segment file before deleting it. If not set, the value of log_retention_minutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_ms setting. |
log_segment_bytes | google.protobuf.Int64Value The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.segment_bytes setting. |
log_preallocate | google.protobuf.BoolValue Should pre allocate file when create new segment? This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.preallocate setting. |
socket_send_buffer_bytes | google.protobuf.Int64Value The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
socket_receive_buffer_bytes | google.protobuf.Int64Value The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
auto_create_topics_enable | google.protobuf.BoolValue Enable auto creation of topic on the server |
num_partitions | google.protobuf.Int64Value Default number of partitions per topic on the whole cluster |
default_replication_factor | google.protobuf.Int64Value Default replication factor of the topic on the whole cluster |
message_max_bytes | google.protobuf.Int64Value The largest record batch size allowed by Kafka. Default value: 1048588. |
replica_fetch_max_bytes | google.protobuf.Int64Value The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
ssl_cipher_suites[] | string A list of cipher suites. |
offsets_retention_minutes | google.protobuf.Int64Value Offset storage time after a consumer group loses all its consumers. Default: 10080. |
sasl_enabled_mechanisms[] | enum SaslMechanism The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
KafkaConfig3
Field | Description |
---|---|
compression_type | enum CompressionType Cluster topics compression type.
|
log_flush_interval_messages | google.protobuf.Int64Value The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_messages setting. |
log_flush_interval_ms | google.protobuf.Int64Value The maximum time (in milliseconds) that a message in any topic is kept in memory before flushed to disk. If not set, the value of log_flush_scheduler_interval_ms is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_ms setting. |
log_flush_scheduler_interval_ms | google.protobuf.Int64Value The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher. |
log_retention_bytes | google.protobuf.Int64Value Partition size limit; Kafka will discard old log segments to free up space if delete TopicConfig3.cleanup_policy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_bytes setting. |
log_retention_hours | google.protobuf.Int64Value The number of hours to keep a log segment file before deleting it. |
log_retention_minutes | google.protobuf.Int64Value The number of minutes to keep a log segment file before deleting it. If not set, the value of log_retention_hours is used. |
log_retention_ms | google.protobuf.Int64Value The number of milliseconds to keep a log segment file before deleting it. If not set, the value of log_retention_minutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_ms setting. |
log_segment_bytes | google.protobuf.Int64Value The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.segment_bytes setting. |
log_preallocate | google.protobuf.BoolValue Should pre allocate file when create new segment? This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.preallocate setting. |
socket_send_buffer_bytes | google.protobuf.Int64Value The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
socket_receive_buffer_bytes | google.protobuf.Int64Value The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
auto_create_topics_enable | google.protobuf.BoolValue Enable auto creation of topic on the server |
num_partitions | google.protobuf.Int64Value Default number of partitions per topic on the whole cluster |
default_replication_factor | google.protobuf.Int64Value Default replication factor of the topic on the whole cluster |
message_max_bytes | google.protobuf.Int64Value The largest record batch size allowed by Kafka. Default value: 1048588. |
replica_fetch_max_bytes | google.protobuf.Int64Value The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
ssl_cipher_suites[] | string A list of cipher suites. |
offsets_retention_minutes | google.protobuf.Int64Value Offset storage time after a consumer group loses all its consumers. Default: 10080. |
sasl_enabled_mechanisms[] | enum SaslMechanism The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
TopicSpec
Field | Description |
---|---|
name | string Name of the topic. |
partitions | google.protobuf.Int64Value The number of the topic's partitions. |
replication_factor | google.protobuf.Int64Value Amount of copies of a topic data kept in the cluster. |
topic_config | oneof: topic_config_2_8 or topic_config_3 User-defined settings for the topic. |
topic_config_2_8 | TopicConfig2_8 User-defined settings for the topic. |
topic_config_3 | TopicConfig3 User-defined settings for the topic. |
TopicConfig2_8
Field | Description |
---|---|
cleanup_policy | enum CleanupPolicy Retention policy to use on old log messages.
|
compression_type | enum CompressionType The compression type for a given topic.
|
delete_retention_ms | google.protobuf.Int64Value The amount of time in milliseconds to retain delete tombstone markers for log compacted topics. |
file_delete_delay_ms | google.protobuf.Int64Value The time to wait before deleting a file from the filesystem. |
flush_messages | google.protobuf.Int64Value The number of messages accumulated on a log partition before messages are flushed to disk. This setting overrides the cluster-level KafkaConfig2_8.log_flush_interval_messages setting on the topic level. |
flush_ms | google.protobuf.Int64Value The maximum time in milliseconds that a message in the topic is kept in memory before flushed to disk. This setting overrides the cluster-level KafkaConfig2_8.log_flush_interval_ms setting on the topic level. |
min_compaction_lag_ms | google.protobuf.Int64Value The minimum time in milliseconds a message will remain uncompacted in the log. |
retention_bytes | google.protobuf.Int64Value The maximum size a partition can grow to before Kafka will discard old log segments to free up space if the delete cleanup_policy is in effect. It is helpful if you need to control the size of log due to limited disk space. This setting overrides the cluster-level KafkaConfig2_8.log_retention_bytes setting on the topic level. |
retention_ms | google.protobuf.Int64Value The number of milliseconds to keep a log segment's file before deleting it. This setting overrides the cluster-level KafkaConfig2_8.log_retention_ms setting on the topic level. |
max_message_bytes | google.protobuf.Int64Value The largest record batch size allowed in topic. |
min_insync_replicas | google.protobuf.Int64Value This configuration specifies the minimum number of replicas that must acknowledge a write to topic for the write to be considered successful (when a producer sets acks to "all"). |
segment_bytes | google.protobuf.Int64Value This configuration controls the segment file size for the log. Retention and cleaning is always done a file at a time so a larger segment size means fewer files but less granular control over retention. This setting overrides the cluster-level KafkaConfig2_8.log_segment_bytes setting on the topic level. |
preallocate | google.protobuf.BoolValue True if we should preallocate the file on disk when creating a new log segment. This setting overrides the cluster-level KafkaConfig2_8.log_preallocate setting on the topic level. |
TopicConfig3
Field | Description |
---|---|
cleanup_policy | enum CleanupPolicy Retention policy to use on old log messages.
|
compression_type | enum CompressionType The compression type for a given topic.
|
delete_retention_ms | google.protobuf.Int64Value The amount of time in milliseconds to retain delete tombstone markers for log compacted topics. |
file_delete_delay_ms | google.protobuf.Int64Value The time to wait before deleting a file from the filesystem. |
flush_messages | google.protobuf.Int64Value The number of messages accumulated on a log partition before messages are flushed to disk. This setting overrides the cluster-level KafkaConfig3.log_flush_interval_messages setting on the topic level. |
flush_ms | google.protobuf.Int64Value The maximum time in milliseconds that a message in the topic is kept in memory before flushed to disk. This setting overrides the cluster-level KafkaConfig3.log_flush_interval_ms setting on the topic level. |
min_compaction_lag_ms | google.protobuf.Int64Value The minimum time in milliseconds a message will remain uncompacted in the log. |
retention_bytes | google.protobuf.Int64Value The maximum size a partition can grow to before Kafka will discard old log segments to free up space if the delete cleanup_policy is in effect. It is helpful if you need to control the size of log due to limited disk space. This setting overrides the cluster-level KafkaConfig3.log_retention_bytes setting on the topic level. |
retention_ms | google.protobuf.Int64Value The number of milliseconds to keep a log segment's file before deleting it. This setting overrides the cluster-level KafkaConfig3.log_retention_ms setting on the topic level. |
max_message_bytes | google.protobuf.Int64Value The largest record batch size allowed in topic. |
min_insync_replicas | google.protobuf.Int64Value This configuration specifies the minimum number of replicas that must acknowledge a write to topic for the write to be considered successful (when a producer sets acks to "all"). |
segment_bytes | google.protobuf.Int64Value This configuration controls the segment file size for the log. Retention and cleaning is always done a file at a time so a larger segment size means fewer files but less granular control over retention. This setting overrides the cluster-level KafkaConfig3.log_segment_bytes setting on the topic level. |
preallocate | google.protobuf.BoolValue True if we should preallocate the file on disk when creating a new log segment. This setting overrides the cluster-level KafkaConfig3.log_preallocate setting on the topic level. |
UserSpec
Field | Description |
---|---|
name | string Required. Name of the Kafka user. The string length in characters must be 1-256. Value must match the regular expression [a-zA-Z0-9_]* . |
password | string Required. Password of the Kafka user. The string length in characters must be 8-128. |
permissions[] | Permission Set of permissions granted to the user. |
Permission
Field | Description |
---|---|
topic_name | string Name or prefix-pattern with wildcard for the topic that the permission grants access to. To get the topic name, make a TopicService.List request. |
role | enum AccessRole Access role type to grant to the user.
|
allow_hosts[] | string Lists hosts allowed for this permission. When not defined, access from any host is allowed. Bare in mind that the same host might appear in multiple permissions at the same time, hence removing individual permission doesn't automatically restricts access from the allow_hosts of the permission. If the same host(s) is listed for another permission of the same principal/topic, the host(s) remains allowed. |
MaintenanceWindow
Field | Description |
---|---|
policy | oneof: anytime or weekly_maintenance_window |
anytime | AnytimeMaintenanceWindow |
weekly_maintenance_window | WeeklyMaintenanceWindow |
AnytimeMaintenanceWindow
Empty.
WeeklyMaintenanceWindow
Field | Description |
---|---|
day | enum WeekDay |
hour | int64 Hour of the day in UTC. Acceptable values are 1 to 24, inclusive. |
Operation
Field | Description |
---|---|
id | string ID of the operation. |
description | string Description of the operation. 0-256 characters long. |
created_at | google.protobuf.Timestamp Creation timestamp. |
created_by | string ID of the user or service account who initiated the operation. |
modified_at | google.protobuf.Timestamp The time when the Operation resource was last modified. |
done | bool If the value is false , it means the operation is still in progress. If true , the operation is completed, and either error or response is available. |
metadata | google.protobuf.Any Service-specific metadata associated with the operation. It typically contains the ID of the target resource that the operation is performed on. Any method that returns a long-running operation should document the metadata type, if any. |
result | oneof: error or response The operation result. If done == false and there was no failure detected, neither error nor response is set. If done == false and there was a failure detected, error is set. If done == true , exactly one of error or response is set. |
error | google.rpc.Status The error result of the operation in case of failure or cancellation. |
response | google.protobuf.Any if operation finished successfully. |
CreateClusterMetadata
Field | Description |
---|---|
cluster_id | string ID of the Apache Kafka® cluster that is being created. |
Cluster
Field | Description |
---|---|
id | string ID of the Apache Kafka® cluster. This ID is assigned at creation time. |
folder_id | string ID of the folder that the Apache Kafka® cluster belongs to. |
created_at | google.protobuf.Timestamp Creation timestamp. |
name | string Name of the Apache Kafka® cluster. The name must be unique within the folder. 1-63 characters long. Value must match the regular expression [a-zA-Z0-9_-]* . |
description | string Description of the Apache Kafka® cluster. 0-256 characters long. |
labels | map<string,string> Custom labels for the Apache Kafka® cluster as key:value pairs. A maximum of 64 labels per resource is allowed. |
environment | enum Environment Deployment environment of the Apache Kafka® cluster.
|
monitoring[] | Monitoring Description of monitoring systems relevant to the Apache Kafka® cluster. |
config | ConfigSpec Configuration of the Apache Kafka® cluster. |
network_id | string ID of the network that the cluster belongs to. |
health | enum Health Aggregated cluster health.
|
status | enum Status Current state of the cluster.
|
security_group_ids[] | string User security groups |
host_group_ids[] | string Host groups hosting VMs of the cluster. |
deletion_protection | bool Deletion Protection inhibits deletion of the cluster |
maintenance_window | MaintenanceWindow Window of maintenance operations. |
planned_operation | MaintenanceOperation Scheduled maintenance operation. |
Monitoring
Field | Description |
---|---|
name | string Name of the monitoring system. |
description | string Description of the monitoring system. |
link | string Link to the monitoring system charts for the Apache Kafka® cluster. |
MaintenanceOperation
Field | Description |
---|---|
info | string The maximum string length in characters is 256. |
delayed_until | google.protobuf.Timestamp |
Update
Updates the specified Apache Kafka® cluster.
rpc Update (UpdateClusterRequest) returns (operation.Operation)
Metadata and response of Operation:
Operation.metadata:UpdateClusterMetadata
Operation.response:Cluster
UpdateClusterRequest
Field | Description |
---|---|
cluster_id | string Required. ID of the Apache Kafka® cluster to update. To get the Apache Kafka® cluster ID, make a ClusterService.List request. The maximum string length in characters is 50. |
update_mask | google.protobuf.FieldMask |
description | string New description of the Apache Kafka® cluster. The maximum string length in characters is 256. |
labels | map<string,string> Custom labels for the Apache Kafka® cluster as key:value pairs. For example, "project": "mvp" or "source": "dictionary". The new set of labels will completely replace the old ones. To add a label, request the current set with the ClusterService.Get method, then send an ClusterService.Update request with the new label added to the set. No more than 64 per resource. The maximum string length in characters for each value is 63. Each value must match the regular expression [-_0-9a-z]* . The string length in characters for each key must be 1-63. Each key must match the regular expression [a-z][-_0-9a-z]* . |
config_spec | ConfigSpec New configuration and resources for hosts in the Apache Kafka® cluster. Use update_mask to prevent reverting all cluster settings that are not listed in config_spec to their default values. |
name | string New name for the Apache Kafka® cluster. The maximum string length in characters is 63. Value must match the regular expression [a-zA-Z0-9_-]* . |
security_group_ids[] | string User security groups |
deletion_protection | bool Deletion Protection inhibits deletion of the cluster |
maintenance_window | MaintenanceWindow New maintenance window settings for the cluster. |
subnet_ids[] | string IDs of subnets where the hosts are located or a new host is being created |
ConfigSpec
Field | Description |
---|---|
version | string Version of Apache Kafka® used in the cluster. Possible values: 2.1 , 2.6 . |
kafka | Kafka Configuration and resource allocation for Kafka brokers. |
zookeeper | Zookeeper Configuration and resource allocation for ZooKeeper hosts. |
zone_id[] | string IDs of availability zones where Kafka brokers reside. |
brokers_count | google.protobuf.Int64Value The number of Kafka brokers deployed in each availability zone. |
assign_public_ip | bool The flag that defines whether a public IP address is assigned to the cluster. If the value is true , then Apache Kafka® cluster is available on the Internet via it's public IP address. |
unmanaged_topics | bool Allows to manage topics via AdminAPI Deprecated. Feature enabled permanently. |
schema_registry | bool Enables managed schema registry on cluster |
access | Access Access policy for external services. |
rest_api_config | RestAPIConfig Configuration of REST API. |
Kafka
Field | Description |
---|---|
resources | Resources Resources allocated to Kafka brokers. |
kafka_config | oneof: kafka_config_2_8 or kafka_config_3 Kafka broker configuration. |
kafka_config_2_8 | KafkaConfig2_8 Kafka broker configuration. |
kafka_config_3 | KafkaConfig3 Kafka broker configuration. |
Zookeeper
Field | Description |
---|---|
resources | Resources Resources allocated to ZooKeeper hosts. |
RestAPIConfig
Field | Description |
---|---|
enabled | bool Is REST API enabled for this cluster. |
Access
Field | Description |
---|---|
data_transfer | bool Allow access for DataTransfer. |
Resources
Field | Description |
---|---|
resource_preset_id | string ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation. |
disk_size | int64 Volume of the storage available to a host, in bytes. Must be greater than 2 * partition segment size in bytes * partitions count, so each partition can have one active segment file and one closed segment file that can be deleted. |
disk_type_id | string Type of the storage environment for the host. |
KafkaConfig2_8
Field | Description |
---|---|
compression_type | enum CompressionType Cluster topics compression type.
|
log_flush_interval_messages | google.protobuf.Int64Value The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_messages setting. |
log_flush_interval_ms | google.protobuf.Int64Value The maximum time (in milliseconds) that a message in any topic is kept in memory before flushed to disk. If not set, the value of log_flush_scheduler_interval_ms is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_ms setting. |
log_flush_scheduler_interval_ms | google.protobuf.Int64Value The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher. |
log_retention_bytes | google.protobuf.Int64Value Partition size limit; Kafka will discard old log segments to free up space if delete TopicConfig2_8.cleanup_policy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_bytes setting. |
log_retention_hours | google.protobuf.Int64Value The number of hours to keep a log segment file before deleting it. |
log_retention_minutes | google.protobuf.Int64Value The number of minutes to keep a log segment file before deleting it. If not set, the value of log_retention_hours is used. |
log_retention_ms | google.protobuf.Int64Value The number of milliseconds to keep a log segment file before deleting it. If not set, the value of log_retention_minutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_ms setting. |
log_segment_bytes | google.protobuf.Int64Value The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.segment_bytes setting. |
log_preallocate | google.protobuf.BoolValue Should pre allocate file when create new segment? This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.preallocate setting. |
socket_send_buffer_bytes | google.protobuf.Int64Value The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
socket_receive_buffer_bytes | google.protobuf.Int64Value The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
auto_create_topics_enable | google.protobuf.BoolValue Enable auto creation of topic on the server |
num_partitions | google.protobuf.Int64Value Default number of partitions per topic on the whole cluster |
default_replication_factor | google.protobuf.Int64Value Default replication factor of the topic on the whole cluster |
message_max_bytes | google.protobuf.Int64Value The largest record batch size allowed by Kafka. Default value: 1048588. |
replica_fetch_max_bytes | google.protobuf.Int64Value The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
ssl_cipher_suites[] | string A list of cipher suites. |
offsets_retention_minutes | google.protobuf.Int64Value Offset storage time after a consumer group loses all its consumers. Default: 10080. |
sasl_enabled_mechanisms[] | enum SaslMechanism The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
KafkaConfig3
Field | Description |
---|---|
compression_type | enum CompressionType Cluster topics compression type.
|
log_flush_interval_messages | google.protobuf.Int64Value The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_messages setting. |
log_flush_interval_ms | google.protobuf.Int64Value The maximum time (in milliseconds) that a message in any topic is kept in memory before flushed to disk. If not set, the value of log_flush_scheduler_interval_ms is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_ms setting. |
log_flush_scheduler_interval_ms | google.protobuf.Int64Value The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher. |
log_retention_bytes | google.protobuf.Int64Value Partition size limit; Kafka will discard old log segments to free up space if delete TopicConfig3.cleanup_policy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_bytes setting. |
log_retention_hours | google.protobuf.Int64Value The number of hours to keep a log segment file before deleting it. |
log_retention_minutes | google.protobuf.Int64Value The number of minutes to keep a log segment file before deleting it. If not set, the value of log_retention_hours is used. |
log_retention_ms | google.protobuf.Int64Value The number of milliseconds to keep a log segment file before deleting it. If not set, the value of log_retention_minutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_ms setting. |
log_segment_bytes | google.protobuf.Int64Value The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.segment_bytes setting. |
log_preallocate | google.protobuf.BoolValue Should pre allocate file when create new segment? This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.preallocate setting. |
socket_send_buffer_bytes | google.protobuf.Int64Value The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
socket_receive_buffer_bytes | google.protobuf.Int64Value The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
auto_create_topics_enable | google.protobuf.BoolValue Enable auto creation of topic on the server |
num_partitions | google.protobuf.Int64Value Default number of partitions per topic on the whole cluster |
default_replication_factor | google.protobuf.Int64Value Default replication factor of the topic on the whole cluster |
message_max_bytes | google.protobuf.Int64Value The largest record batch size allowed by Kafka. Default value: 1048588. |
replica_fetch_max_bytes | google.protobuf.Int64Value The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
ssl_cipher_suites[] | string A list of cipher suites. |
offsets_retention_minutes | google.protobuf.Int64Value Offset storage time after a consumer group loses all its consumers. Default: 10080. |
sasl_enabled_mechanisms[] | enum SaslMechanism The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
MaintenanceWindow
Field | Description |
---|---|
policy | oneof: anytime or weekly_maintenance_window |
anytime | AnytimeMaintenanceWindow |
weekly_maintenance_window | WeeklyMaintenanceWindow |
AnytimeMaintenanceWindow
Empty.
WeeklyMaintenanceWindow
Field | Description |
---|---|
day | enum WeekDay |
hour | int64 Hour of the day in UTC. Acceptable values are 1 to 24, inclusive. |
Operation
Field | Description |
---|---|
id | string ID of the operation. |
description | string Description of the operation. 0-256 characters long. |
created_at | google.protobuf.Timestamp Creation timestamp. |
created_by | string ID of the user or service account who initiated the operation. |
modified_at | google.protobuf.Timestamp The time when the Operation resource was last modified. |
done | bool If the value is false , it means the operation is still in progress. If true , the operation is completed, and either error or response is available. |
metadata | google.protobuf.Any Service-specific metadata associated with the operation. It typically contains the ID of the target resource that the operation is performed on. Any method that returns a long-running operation should document the metadata type, if any. |
result | oneof: error or response The operation result. If done == false and there was no failure detected, neither error nor response is set. If done == false and there was a failure detected, error is set. If done == true , exactly one of error or response is set. |
error | google.rpc.Status The error result of the operation in case of failure or cancellation. |
response | google.protobuf.Any if operation finished successfully. |
UpdateClusterMetadata
Field | Description |
---|---|
cluster_id | string ID of the Apache Kafka® cluster that is being updated. |
Cluster
Field | Description |
---|---|
id | string ID of the Apache Kafka® cluster. This ID is assigned at creation time. |
folder_id | string ID of the folder that the Apache Kafka® cluster belongs to. |
created_at | google.protobuf.Timestamp Creation timestamp. |
name | string Name of the Apache Kafka® cluster. The name must be unique within the folder. 1-63 characters long. Value must match the regular expression [a-zA-Z0-9_-]* . |
description | string Description of the Apache Kafka® cluster. 0-256 characters long. |
labels | map<string,string> Custom labels for the Apache Kafka® cluster as key:value pairs. A maximum of 64 labels per resource is allowed. |
environment | enum Environment Deployment environment of the Apache Kafka® cluster.
|
monitoring[] | Monitoring Description of monitoring systems relevant to the Apache Kafka® cluster. |
config | ConfigSpec Configuration of the Apache Kafka® cluster. |
network_id | string ID of the network that the cluster belongs to. |
health | enum Health Aggregated cluster health.
|
status | enum Status Current state of the cluster.
|
security_group_ids[] | string User security groups |
host_group_ids[] | string Host groups hosting VMs of the cluster. |
deletion_protection | bool Deletion Protection inhibits deletion of the cluster |
maintenance_window | MaintenanceWindow Window of maintenance operations. |
planned_operation | MaintenanceOperation Scheduled maintenance operation. |
Monitoring
Field | Description |
---|---|
name | string Name of the monitoring system. |
description | string Description of the monitoring system. |
link | string Link to the monitoring system charts for the Apache Kafka® cluster. |
MaintenanceOperation
Field | Description |
---|---|
info | string The maximum string length in characters is 256. |
delayed_until | google.protobuf.Timestamp |
Delete
Deletes the specified Apache Kafka® cluster.
rpc Delete (DeleteClusterRequest) returns (operation.Operation)
Metadata and response of Operation:
Operation.metadata:DeleteClusterMetadata
Operation.response:google.protobuf.Empty
DeleteClusterRequest
Field | Description |
---|---|
cluster_id | string Required. ID of the Apache Kafka® cluster to delete. To get the Apache Kafka® cluster ID, make a ClusterService.List request. The maximum string length in characters is 50. |
Operation
Field | Description |
---|---|
id | string ID of the operation. |
description | string Description of the operation. 0-256 characters long. |
created_at | google.protobuf.Timestamp Creation timestamp. |
created_by | string ID of the user or service account who initiated the operation. |
modified_at | google.protobuf.Timestamp The time when the Operation resource was last modified. |
done | bool If the value is false , it means the operation is still in progress. If true , the operation is completed, and either error or response is available. |
metadata | google.protobuf.Any Service-specific metadata associated with the operation. It typically contains the ID of the target resource that the operation is performed on. Any method that returns a long-running operation should document the metadata type, if any. |
result | oneof: error or response The operation result. If done == false and there was no failure detected, neither error nor response is set. If done == false and there was a failure detected, error is set. If done == true , exactly one of error or response is set. |
error | google.rpc.Status The error result of the operation in case of failure or cancellation. |
response | google.protobuf.Any The response of the operation in case of success. |
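The `done` / `error` / `response` contract in the table above can be expressed as a small decision helper. This is a sketch over plain dicts, not the real protobuf types:

```python
def operation_state(op: dict) -> str:
    """Classify a long-running Operation per the done/error/response contract."""
    if not op.get("done"):
        # Still in progress; error may already be set if a failure was detected.
        return "failed" if "error" in op else "running"
    # Completed: exactly one of error or response is set.
    return "failed" if "error" in op else "succeeded"

print(operation_state({"done": True, "response": {}}))   # succeeded
print(operation_state({"done": False}))                  # running
```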
DeleteClusterMetadata
Field | Description |
---|---|
cluster_id | string ID of the Apache Kafka® cluster that is being deleted. |
Move
Moves the specified Apache Kafka® cluster to the specified folder.
rpc Move (MoveClusterRequest) returns (operation.Operation)
Metadata and response of Operation:
Operation.metadata:MoveClusterMetadata
Operation.response:Cluster
MoveClusterRequest
Field | Description |
---|---|
cluster_id | string Required. ID of the Apache Kafka® cluster to move. To get the Apache Kafka® cluster ID, make a ClusterService.List request. The maximum string length in characters is 50. |
destination_folder_id | string Required. ID of the destination folder. The maximum string length in characters is 50. |
Operation
Field | Description |
---|---|
id | string ID of the operation. |
description | string Description of the operation. 0-256 characters long. |
created_at | google.protobuf.Timestamp Creation timestamp. |
created_by | string ID of the user or service account who initiated the operation. |
modified_at | google.protobuf.Timestamp The time when the Operation resource was last modified. |
done | bool If the value is false , it means the operation is still in progress. If true , the operation is completed, and either error or response is available. |
metadata | google.protobuf.Any Service-specific metadata associated with the operation. It typically contains the ID of the target resource that the operation is performed on. Any method that returns a long-running operation should document the metadata type, if any. |
result | oneof: error or response The operation result. If done == false and there was no failure detected, neither error nor response is set. If done == false and there was a failure detected, error is set. If done == true , exactly one of error or response is set. |
error | google.rpc.Status The error result of the operation in case of failure or cancellation. |
response | google.protobuf.Any The response of the operation in case of success. |
MoveClusterMetadata
Field | Description |
---|---|
cluster_id | string ID of the Apache Kafka® cluster being moved. |
source_folder_id | string ID of the source folder. |
destination_folder_id | string ID of the destination folder. |
Cluster
Field | Description |
---|---|
id | string ID of the Apache Kafka® cluster. This ID is assigned at creation time. |
folder_id | string ID of the folder that the Apache Kafka® cluster belongs to. |
created_at | google.protobuf.Timestamp Creation timestamp. |
name | string Name of the Apache Kafka® cluster. The name must be unique within the folder. 1-63 characters long. Value must match the regular expression [a-zA-Z0-9_-]* . |
description | string Description of the Apache Kafka® cluster. 0-256 characters long. |
labels | map<string,string> Custom labels for the Apache Kafka® cluster as key:value pairs. A maximum of 64 labels per resource is allowed. |
environment | enum Environment Deployment environment of the Apache Kafka® cluster. |
monitoring[] | Monitoring Description of monitoring systems relevant to the Apache Kafka® cluster. |
config | ConfigSpec Configuration of the Apache Kafka® cluster. |
network_id | string ID of the network that the cluster belongs to. |
health | enum Health Aggregated cluster health. |
status | enum Status Current state of the cluster. |
security_group_ids[] | string User security groups. |
host_group_ids[] | string Host groups hosting VMs of the cluster. |
deletion_protection | bool Deletion protection prevents the cluster from being deleted. |
maintenance_window | MaintenanceWindow Window of maintenance operations. |
planned_operation | MaintenanceOperation Scheduled maintenance operation. |
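The `name` constraints above (1-63 characters, matching `[a-zA-Z0-9_-]*`) lend themselves to a local check before creating or renaming a cluster. A sketch; uniqueness within the folder can only be verified server-side:

```python
import re

NAME_RE = re.compile(r"[a-zA-Z0-9_-]*")

def is_valid_cluster_name(name: str) -> bool:
    """Cluster.name: 1-63 characters, matching [a-zA-Z0-9_-]*."""
    return 1 <= len(name) <= 63 and NAME_RE.fullmatch(name) is not None

print(is_valid_cluster_name("kafka-prod_01"))  # True
print(is_valid_cluster_name("bad name!"))      # False
```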
Monitoring
Field | Description |
---|---|
name | string Name of the monitoring system. |
description | string Description of the monitoring system. |
link | string Link to the monitoring system charts for the Apache Kafka® cluster. |
ConfigSpec
Field | Description |
---|---|
version | string Version of Apache Kafka® used in the cluster. Possible values: 2.1 , 2.6 . |
kafka | Kafka Configuration and resource allocation for Kafka brokers. |
zookeeper | Zookeeper Configuration and resource allocation for ZooKeeper hosts. |
zone_id[] | string IDs of availability zones where Kafka brokers reside. |
brokers_count | google.protobuf.Int64Value The number of Kafka brokers deployed in each availability zone. |
assign_public_ip | bool The flag that defines whether a public IP address is assigned to the cluster. If the value is true , then the Apache Kafka® cluster is available on the Internet via its public IP address. |
unmanaged_topics | bool Allows topics to be managed via the Admin API. Deprecated: the feature is now permanently enabled. |
schema_registry | bool Enables the managed Schema Registry on the cluster. |
access | Access Access policy for external services. |
rest_api_config | RestAPIConfig Configuration of REST API. |
Kafka
Field | Description |
---|---|
resources | Resources Resources allocated to Kafka brokers. |
kafka_config | oneof: kafka_config_2_8 or kafka_config_3 Kafka broker configuration. |
kafka_config_2_8 | KafkaConfig2_8 Kafka broker configuration. |
kafka_config_3 | KafkaConfig3 Kafka broker configuration. |
Zookeeper
Field | Description |
---|---|
resources | Resources Resources allocated to ZooKeeper hosts. |
RestAPIConfig
Field | Description |
---|---|
enabled | bool Whether the REST API is enabled for this cluster. |
Access
Field | Description |
---|---|
data_transfer | bool Allow access for DataTransfer. |
Resources
Field | Description |
---|---|
resource_preset_id | string ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation. |
disk_size | int64 Volume of the storage available to a host, in bytes. Must be greater than 2 * partition segment size in bytes * partitions count, so each partition can have one active segment file and one closed segment file that can be deleted. |
disk_type_id | string Type of the storage environment for the host. |
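The `disk_size` rule above (greater than 2 × partition segment size × partition count) gives a concrete lower bound per host. A sketch of the arithmetic; the example figures are illustrative, not recommendations:

```python
def min_disk_size(segment_bytes: int, partitions: int) -> int:
    """Lower bound on per-host storage: 2 * segment size * partition count,
    so each partition can hold one active segment and one closed segment."""
    return 2 * segment_bytes * partitions

# With 1 GiB segments and 100 partitions, a host needs more than ~200 GiB.
need = min_disk_size(1024 ** 3, 100)
print(need)  # 214748364800
```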
KafkaConfig2_8
Field | Description |
---|---|
compression_type | enum CompressionType Cluster topics compression type. |
log_flush_interval_messages | google.protobuf.Int64Value The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_messages setting. |
log_flush_interval_ms | google.protobuf.Int64Value The maximum time (in milliseconds) that a message in any topic is kept in memory before flushed to disk. If not set, the value of log_flush_scheduler_interval_ms is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_ms setting. |
log_flush_scheduler_interval_ms | google.protobuf.Int64Value The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher. |
log_retention_bytes | google.protobuf.Int64Value Partition size limit; Kafka will discard old log segments to free up space if delete TopicConfig2_8.cleanup_policy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_bytes setting. |
log_retention_hours | google.protobuf.Int64Value The number of hours to keep a log segment file before deleting it. |
log_retention_minutes | google.protobuf.Int64Value The number of minutes to keep a log segment file before deleting it. If not set, the value of log_retention_hours is used. |
log_retention_ms | google.protobuf.Int64Value The number of milliseconds to keep a log segment file before deleting it. If not set, the value of log_retention_minutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_ms setting. |
log_segment_bytes | google.protobuf.Int64Value The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.segment_bytes setting. |
log_preallocate | google.protobuf.BoolValue Whether to preallocate a file when creating a new log segment. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.preallocate setting. |
socket_send_buffer_bytes | google.protobuf.Int64Value The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
socket_receive_buffer_bytes | google.protobuf.Int64Value The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
auto_create_topics_enable | google.protobuf.BoolValue Enables automatic topic creation on the server. |
num_partitions | google.protobuf.Int64Value Default number of partitions per topic across the whole cluster. |
default_replication_factor | google.protobuf.Int64Value Default replication factor for topics across the whole cluster. |
message_max_bytes | google.protobuf.Int64Value The largest record batch size allowed by Kafka. Default value: 1048588. |
replica_fetch_max_bytes | google.protobuf.Int64Value The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
ssl_cipher_suites[] | string A list of cipher suites. |
offsets_retention_minutes | google.protobuf.Int64Value Offset storage time after a consumer group loses all its consumers. Default: 10080. |
sasl_enabled_mechanisms[] | enum SaslMechanism The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
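The retention settings above form a fallback chain: `log_retention_ms` falls back to `log_retention_minutes`, which falls back to `log_retention_hours`. A sketch of how the effective value resolves (the function is ours, for illustration):

```python
def effective_retention_ms(ms=None, minutes=None, hours=None):
    """Resolve retention per the documented fallback: log_retention_ms,
    else log_retention_minutes, else log_retention_hours."""
    if ms is not None:
        return ms
    if minutes is not None:
        return minutes * 60_000
    if hours is not None:
        return hours * 3_600_000
    return None  # no explicit setting; the broker default applies

print(effective_retention_ms(hours=168))            # 604800000 (one week)
print(effective_retention_ms(ms=1000, hours=168))   # 1000 (ms wins)
```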
KafkaConfig3
Field | Description |
---|---|
compression_type | enum CompressionType Cluster topics compression type. |
log_flush_interval_messages | google.protobuf.Int64Value The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_messages setting. |
log_flush_interval_ms | google.protobuf.Int64Value The maximum time (in milliseconds) that a message in any topic is kept in memory before flushed to disk. If not set, the value of log_flush_scheduler_interval_ms is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_ms setting. |
log_flush_scheduler_interval_ms | google.protobuf.Int64Value The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher. |
log_retention_bytes | google.protobuf.Int64Value Partition size limit; Kafka will discard old log segments to free up space if delete TopicConfig3.cleanup_policy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_bytes setting. |
log_retention_hours | google.protobuf.Int64Value The number of hours to keep a log segment file before deleting it. |
log_retention_minutes | google.protobuf.Int64Value The number of minutes to keep a log segment file before deleting it. If not set, the value of log_retention_hours is used. |
log_retention_ms | google.protobuf.Int64Value The number of milliseconds to keep a log segment file before deleting it. If not set, the value of log_retention_minutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_ms setting. |
log_segment_bytes | google.protobuf.Int64Value The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.segment_bytes setting. |
log_preallocate | google.protobuf.BoolValue Whether to preallocate a file when creating a new log segment. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.preallocate setting. |
socket_send_buffer_bytes | google.protobuf.Int64Value The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
socket_receive_buffer_bytes | google.protobuf.Int64Value The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
auto_create_topics_enable | google.protobuf.BoolValue Enables automatic topic creation on the server. |
num_partitions | google.protobuf.Int64Value Default number of partitions per topic across the whole cluster. |
default_replication_factor | google.protobuf.Int64Value Default replication factor for topics across the whole cluster. |
message_max_bytes | google.protobuf.Int64Value The largest record batch size allowed by Kafka. Default value: 1048588. |
replica_fetch_max_bytes | google.protobuf.Int64Value The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
ssl_cipher_suites[] | string A list of cipher suites. |
offsets_retention_minutes | google.protobuf.Int64Value Offset storage time after a consumer group loses all its consumers. Default: 10080. |
sasl_enabled_mechanisms[] | enum SaslMechanism The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
MaintenanceWindow
Field | Description |
---|---|
policy | oneof: anytime or weekly_maintenance_window |
anytime | AnytimeMaintenanceWindow |
weekly_maintenance_window | WeeklyMaintenanceWindow |
AnytimeMaintenanceWindow
Empty.
WeeklyMaintenanceWindow
Field | Description |
---|---|
day | enum WeekDay |
hour | int64 Hour of the day in UTC. Acceptable values are 1 to 24, inclusive. |
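A weekly maintenance window payload can be validated locally before sending it in an update request. This is a sketch over a plain dict; the `WeekDay` value names are an assumption, not confirmed by this reference:

```python
def weekly_maintenance_window(day: str, hour: int) -> dict:
    """Build a WeeklyMaintenanceWindow-like payload.
    hour is the hour of the day in UTC, 1 to 24 inclusive per the table above.
    Day names below are assumed three-letter WeekDay values."""
    days = {"MON", "TUE", "WED", "THU", "FRI", "SAT", "SUN"}
    if day not in days:
        raise ValueError(f"day must be one of {sorted(days)}")
    if not 1 <= hour <= 24:
        raise ValueError("hour must be between 1 and 24 inclusive")
    return {"weekly_maintenance_window": {"day": day, "hour": hour}}

print(weekly_maintenance_window("MON", 3))
```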
MaintenanceOperation
Field | Description |
---|---|
info | string The maximum string length in characters is 256. |
delayed_until | google.protobuf.Timestamp |
Start
Starts the specified Apache Kafka® cluster.
rpc Start (StartClusterRequest) returns (operation.Operation)
Metadata and response of Operation:
Operation.metadata:StartClusterMetadata
Operation.response:Cluster
StartClusterRequest
Field | Description |
---|---|
cluster_id | string Required. ID of the Apache Kafka® cluster to start. To get the Apache Kafka® cluster ID, make a ClusterService.List request. The maximum string length in characters is 50. |
Operation
Field | Description |
---|---|
id | string ID of the operation. |
description | string Description of the operation. 0-256 characters long. |
created_at | google.protobuf.Timestamp Creation timestamp. |
created_by | string ID of the user or service account who initiated the operation. |
modified_at | google.protobuf.Timestamp The time when the Operation resource was last modified. |
done | bool If the value is false , it means the operation is still in progress. If true , the operation is completed, and either error or response is available. |
metadata | google.protobuf.Any Service-specific metadata associated with the operation. It typically contains the ID of the target resource that the operation is performed on. Any method that returns a long-running operation should document the metadata type, if any. |
result | oneof: error or response The operation result. If done == false and there was no failure detected, neither error nor response is set. If done == false and there was a failure detected, error is set. If done == true , exactly one of error or response is set. |
error | google.rpc.Status The error result of the operation in case of failure or cancellation. |
response | google.protobuf.Any The response of the operation in case of success. |
StartClusterMetadata
Field | Description |
---|---|
cluster_id | string ID of the Apache Kafka® cluster. |
Cluster
Field | Description |
---|---|
id | string ID of the Apache Kafka® cluster. This ID is assigned at creation time. |
folder_id | string ID of the folder that the Apache Kafka® cluster belongs to. |
created_at | google.protobuf.Timestamp Creation timestamp. |
name | string Name of the Apache Kafka® cluster. The name must be unique within the folder. 1-63 characters long. Value must match the regular expression [a-zA-Z0-9_-]* . |
description | string Description of the Apache Kafka® cluster. 0-256 characters long. |
labels | map<string,string> Custom labels for the Apache Kafka® cluster as key:value pairs. A maximum of 64 labels per resource is allowed. |
environment | enum Environment Deployment environment of the Apache Kafka® cluster. |
monitoring[] | Monitoring Description of monitoring systems relevant to the Apache Kafka® cluster. |
config | ConfigSpec Configuration of the Apache Kafka® cluster. |
network_id | string ID of the network that the cluster belongs to. |
health | enum Health Aggregated cluster health. |
status | enum Status Current state of the cluster. |
security_group_ids[] | string User security groups. |
host_group_ids[] | string Host groups hosting VMs of the cluster. |
deletion_protection | bool Deletion protection prevents the cluster from being deleted. |
maintenance_window | MaintenanceWindow Window of maintenance operations. |
planned_operation | MaintenanceOperation Scheduled maintenance operation. |
Monitoring
Field | Description |
---|---|
name | string Name of the monitoring system. |
description | string Description of the monitoring system. |
link | string Link to the monitoring system charts for the Apache Kafka® cluster. |
ConfigSpec
Field | Description |
---|---|
version | string Version of Apache Kafka® used in the cluster. Possible values: 2.1 , 2.6 . |
kafka | Kafka Configuration and resource allocation for Kafka brokers. |
zookeeper | Zookeeper Configuration and resource allocation for ZooKeeper hosts. |
zone_id[] | string IDs of availability zones where Kafka brokers reside. |
brokers_count | google.protobuf.Int64Value The number of Kafka brokers deployed in each availability zone. |
assign_public_ip | bool The flag that defines whether a public IP address is assigned to the cluster. If the value is true , then the Apache Kafka® cluster is available on the Internet via its public IP address. |
unmanaged_topics | bool Allows topics to be managed via the Admin API. Deprecated: the feature is now permanently enabled. |
schema_registry | bool Enables the managed Schema Registry on the cluster. |
access | Access Access policy for external services. |
rest_api_config | RestAPIConfig Configuration of REST API. |
Kafka
Field | Description |
---|---|
resources | Resources Resources allocated to Kafka brokers. |
kafka_config | oneof: kafka_config_2_8 or kafka_config_3 Kafka broker configuration. |
kafka_config_2_8 | KafkaConfig2_8 Kafka broker configuration. |
kafka_config_3 | KafkaConfig3 Kafka broker configuration. |
Zookeeper
Field | Description |
---|---|
resources | Resources Resources allocated to ZooKeeper hosts. |
RestAPIConfig
Field | Description |
---|---|
enabled | bool Whether the REST API is enabled for this cluster. |
Access
Field | Description |
---|---|
data_transfer | bool Allow access for DataTransfer. |
Resources
Field | Description |
---|---|
resource_preset_id | string ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation. |
disk_size | int64 Volume of the storage available to a host, in bytes. Must be greater than 2 * partition segment size in bytes * partitions count, so each partition can have one active segment file and one closed segment file that can be deleted. |
disk_type_id | string Type of the storage environment for the host. |
KafkaConfig2_8
Field | Description |
---|---|
compression_type | enum CompressionType Cluster topics compression type. |
log_flush_interval_messages | google.protobuf.Int64Value The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_messages setting. |
log_flush_interval_ms | google.protobuf.Int64Value The maximum time (in milliseconds) that a message in any topic is kept in memory before flushed to disk. If not set, the value of log_flush_scheduler_interval_ms is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_ms setting. |
log_flush_scheduler_interval_ms | google.protobuf.Int64Value The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher. |
log_retention_bytes | google.protobuf.Int64Value Partition size limit; Kafka will discard old log segments to free up space if delete TopicConfig2_8.cleanup_policy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_bytes setting. |
log_retention_hours | google.protobuf.Int64Value The number of hours to keep a log segment file before deleting it. |
log_retention_minutes | google.protobuf.Int64Value The number of minutes to keep a log segment file before deleting it. If not set, the value of log_retention_hours is used. |
log_retention_ms | google.protobuf.Int64Value The number of milliseconds to keep a log segment file before deleting it. If not set, the value of log_retention_minutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_ms setting. |
log_segment_bytes | google.protobuf.Int64Value The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.segment_bytes setting. |
log_preallocate | google.protobuf.BoolValue Whether to preallocate a file when creating a new log segment. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.preallocate setting. |
socket_send_buffer_bytes | google.protobuf.Int64Value The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
socket_receive_buffer_bytes | google.protobuf.Int64Value The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
auto_create_topics_enable | google.protobuf.BoolValue Enables automatic topic creation on the server. |
num_partitions | google.protobuf.Int64Value Default number of partitions per topic across the whole cluster. |
default_replication_factor | google.protobuf.Int64Value Default replication factor for topics across the whole cluster. |
message_max_bytes | google.protobuf.Int64Value The largest record batch size allowed by Kafka. Default value: 1048588. |
replica_fetch_max_bytes | google.protobuf.Int64Value The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
ssl_cipher_suites[] | string A list of cipher suites. |
offsets_retention_minutes | google.protobuf.Int64Value Offset storage time after a consumer group loses all its consumers. Default: 10080. |
sasl_enabled_mechanisms[] | enum SaslMechanism The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
KafkaConfig3
Field | Description |
---|---|
compression_type | enum CompressionType Cluster topics compression type. |
log_flush_interval_messages | google.protobuf.Int64Value The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_messages setting. |
log_flush_interval_ms | google.protobuf.Int64Value The maximum time (in milliseconds) that a message in any topic is kept in memory before flushed to disk. If not set, the value of log_flush_scheduler_interval_ms is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_ms setting. |
log_flush_scheduler_interval_ms | google.protobuf.Int64Value The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher. |
log_retention_bytes | google.protobuf.Int64Value Partition size limit; Kafka will discard old log segments to free up space if delete TopicConfig3.cleanup_policy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_bytes setting. |
log_retention_hours | google.protobuf.Int64Value The number of hours to keep a log segment file before deleting it. |
log_retention_minutes | google.protobuf.Int64Value The number of minutes to keep a log segment file before deleting it. If not set, the value of log_retention_hours is used. |
log_retention_ms | google.protobuf.Int64Value The number of milliseconds to keep a log segment file before deleting it. If not set, the value of log_retention_minutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_ms setting. |
log_segment_bytes | google.protobuf.Int64Value The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.segment_bytes setting. |
log_preallocate | google.protobuf.BoolValue Whether to preallocate a file when creating a new log segment. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.preallocate setting. |
socket_send_buffer_bytes | google.protobuf.Int64Value The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
socket_receive_buffer_bytes | google.protobuf.Int64Value The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
auto_create_topics_enable | google.protobuf.BoolValue Enables automatic topic creation on the server. |
num_partitions | google.protobuf.Int64Value Default number of partitions per topic across the whole cluster. |
default_replication_factor | google.protobuf.Int64Value Default replication factor for topics across the whole cluster. |
message_max_bytes | google.protobuf.Int64Value The largest record batch size allowed by Kafka. Default value: 1048588. |
replica_fetch_max_bytes | google.protobuf.Int64Value The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
ssl_cipher_suites[] | string A list of cipher suites. |
offsets_retention_minutes | google.protobuf.Int64Value Offset storage time after a consumer group loses all its consumers. Default: 10080. |
sasl_enabled_mechanisms[] | enum SaslMechanism The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
MaintenanceWindow
Field | Description |
---|---|
policy | oneof: anytime or weekly_maintenance_window |
anytime | AnytimeMaintenanceWindow |
weekly_maintenance_window | WeeklyMaintenanceWindow |
AnytimeMaintenanceWindow
Empty.
WeeklyMaintenanceWindow
Field | Description |
---|---|
day | enum WeekDay |
hour | int64 Hour of the day in UTC. Acceptable values are 1 to 24, inclusive. |
MaintenanceOperation
Field | Description |
---|---|
info | string The maximum string length in characters is 256. |
delayed_until | google.protobuf.Timestamp |
Stop
Stops the specified Apache Kafka® cluster.
rpc Stop (StopClusterRequest) returns (operation.Operation)
Metadata and response of Operation:
Operation.metadata:StopClusterMetadata
Operation.response:Cluster
StopClusterRequest
Field | Description |
---|---|
cluster_id | string Required. ID of the Apache Kafka® cluster to stop. To get the Apache Kafka® cluster ID, make a ClusterService.List request. The maximum string length in characters is 50. |
Operation
Field | Description |
---|---|
id | string ID of the operation. |
description | string Description of the operation. 0-256 characters long. |
created_at | google.protobuf.Timestamp Creation timestamp. |
created_by | string ID of the user or service account who initiated the operation. |
modified_at | google.protobuf.Timestamp The time when the Operation resource was last modified. |
done | bool If the value is false , it means the operation is still in progress. If true , the operation is completed, and either error or response is available. |
metadata | google.protobuf.Any Service-specific metadata associated with the operation. It typically contains the ID of the target resource that the operation is performed on. Any method that returns a long-running operation should document the metadata type, if any. |
result | oneof: error or response The operation result. If done == false and there was no failure detected, neither error nor response is set. If done == false and there was a failure detected, error is set. If done == true , exactly one of error or response is set. |
error | google.rpc.Status The error result of the operation in case of failure or cancellation. |
response | google.protobuf.Any The response of the operation in case of success. |
StopClusterMetadata
Field | Description |
---|---|
cluster_id | string ID of the Apache Kafka® cluster. |
Cluster
Field | Description |
---|---|
id | string ID of the Apache Kafka® cluster. This ID is assigned at creation time. |
folder_id | string ID of the folder that the Apache Kafka® cluster belongs to. |
created_at | google.protobuf.Timestamp Creation timestamp. |
name | string Name of the Apache Kafka® cluster. The name must be unique within the folder. 1-63 characters long. Value must match the regular expression [a-zA-Z0-9_-]* . |
description | string Description of the Apache Kafka® cluster. 0-256 characters long. |
labels | map<string,string> Custom labels for the Apache Kafka® cluster as key:value pairs. A maximum of 64 labels per resource is allowed. |
environment | enum Environment Deployment environment of the Apache Kafka® cluster. |
monitoring[] | Monitoring Description of monitoring systems relevant to the Apache Kafka® cluster. |
config | ConfigSpec Configuration of the Apache Kafka® cluster. |
network_id | string ID of the network that the cluster belongs to. |
health | enum Health Aggregated cluster health. |
status | enum Status Current state of the cluster. |
security_group_ids[] | string User security groups. |
host_group_ids[] | string Host groups hosting VMs of the cluster. |
deletion_protection | bool Deletion protection prevents the cluster from being deleted. |
maintenance_window | MaintenanceWindow Window of maintenance operations. |
planned_operation | MaintenanceOperation Scheduled maintenance operation. |
Monitoring
Field | Description |
---|---|
name | string Name of the monitoring system. |
description | string Description of the monitoring system. |
link | string Link to the monitoring system charts for the Apache Kafka® cluster. |
ConfigSpec
Field | Description |
---|---|
version | string Version of Apache Kafka® used in the cluster. Possible values: 2.1 , 2.6 . |
kafka | Kafka Configuration and resource allocation for Kafka brokers. |
zookeeper | Zookeeper Configuration and resource allocation for ZooKeeper hosts. |
zone_id[] | string IDs of availability zones where Kafka brokers reside. |
brokers_count | google.protobuf.Int64Value The number of Kafka brokers deployed in each availability zone. |
assign_public_ip | bool The flag that defines whether a public IP address is assigned to the cluster. If the value is true , then the Apache Kafka® cluster is available on the Internet via its public IP address. |
unmanaged_topics | bool Allows topics to be managed via the Admin API. Deprecated: the feature is now permanently enabled. |
schema_registry | bool Enables the managed Schema Registry on the cluster. |
access | Access Access policy for external services. |
rest_api_config | RestAPIConfig Configuration of REST API. |
Kafka
Field | Description |
---|---|
resources | Resources Resources allocated to Kafka brokers. |
kafka_config | oneof: kafka_config_2_8 or kafka_config_3 Kafka broker configuration. |
kafka_config_2_8 | KafkaConfig2_8 Kafka broker configuration. |
kafka_config_3 | KafkaConfig3 Kafka broker configuration. |
Zookeeper
Field | Description |
---|---|
resources | Resources Resources allocated to ZooKeeper hosts. |
RestAPIConfig
Field | Description |
---|---|
enabled | bool Whether the REST API is enabled for this cluster. |
Access
Field | Description |
---|---|
data_transfer | bool Allow access for DataTransfer. |
Resources
Field | Description |
---|---|
resource_preset_id | string ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation. |
disk_size | int64 Volume of the storage available to a host, in bytes. Must be greater than 2 * partition segment size in bytes * partitions count, so each partition can have one active segment file and one closed segment file that can be deleted. |
disk_type_id | string Type of the storage environment for the host. |
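The disk_size rule above is easy to get wrong, so here is a minimal client-side sketch of the check. The helper names are hypothetical, not part of the API:

```python
def min_disk_size(segment_size_bytes: int, partitions_count: int) -> int:
    # Each partition may hold one active segment plus one closed segment
    # awaiting deletion, hence the factor of 2.
    return 2 * segment_size_bytes * partitions_count


def disk_size_is_valid(disk_size: int, segment_size_bytes: int, partitions_count: int) -> bool:
    # disk_size must be strictly greater than the minimum.
    return disk_size > min_disk_size(segment_size_bytes, partitions_count)


# Example: 100 partitions with a 1 GiB segment size need more than 200 GiB.
print(min_disk_size(1024 ** 3, 100))  # 214748364800
```

The service performs the authoritative validation; this only helps catch undersized requests before sending them.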
KafkaConfig2_8
Field | Description |
---|---|
compression_type | enum CompressionType Cluster topics compression type. |
log_flush_interval_messages | google.protobuf.Int64Value The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_messages setting. |
log_flush_interval_ms | google.protobuf.Int64Value The maximum time (in milliseconds) that a message in any topic is kept in memory before flushed to disk. If not set, the value of log_flush_scheduler_interval_ms is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_ms setting. |
log_flush_scheduler_interval_ms | google.protobuf.Int64Value The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher. |
log_retention_bytes | google.protobuf.Int64Value Partition size limit; Kafka will discard old log segments to free up space if delete TopicConfig2_8.cleanup_policy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_bytes setting. |
log_retention_hours | google.protobuf.Int64Value The number of hours to keep a log segment file before deleting it. |
log_retention_minutes | google.protobuf.Int64Value The number of minutes to keep a log segment file before deleting it. If not set, the value of log_retention_hours is used. |
log_retention_ms | google.protobuf.Int64Value The number of milliseconds to keep a log segment file before deleting it. If not set, the value of log_retention_minutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_ms setting. |
log_segment_bytes | google.protobuf.Int64Value The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.segment_bytes setting. |
log_preallocate | google.protobuf.BoolValue Whether to preallocate a file when creating a new segment. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.preallocate setting. |
socket_send_buffer_bytes | google.protobuf.Int64Value The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
socket_receive_buffer_bytes | google.protobuf.Int64Value The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
auto_create_topics_enable | google.protobuf.BoolValue Enables automatic topic creation on the server. |
num_partitions | google.protobuf.Int64Value Default number of partitions per topic on the whole cluster. |
default_replication_factor | google.protobuf.Int64Value Default replication factor of topics on the whole cluster. |
message_max_bytes | google.protobuf.Int64Value The largest record batch size allowed by Kafka. Default value: 1048588. |
replica_fetch_max_bytes | google.protobuf.Int64Value The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
ssl_cipher_suites[] | string A list of cipher suites. |
offsets_retention_minutes | google.protobuf.Int64Value Offset storage time after a consumer group loses all its consumers. Default: 10080. |
sasl_enabled_mechanisms[] | enum SaslMechanism The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
KafkaConfig3
Field | Description |
---|---|
compression_type | enum CompressionType Cluster topics compression type. |
log_flush_interval_messages | google.protobuf.Int64Value The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_messages setting. |
log_flush_interval_ms | google.protobuf.Int64Value The maximum time (in milliseconds) that a message in any topic is kept in memory before flushed to disk. If not set, the value of log_flush_scheduler_interval_ms is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_ms setting. |
log_flush_scheduler_interval_ms | google.protobuf.Int64Value The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher. |
log_retention_bytes | google.protobuf.Int64Value Partition size limit; Kafka will discard old log segments to free up space if delete TopicConfig3.cleanup_policy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_bytes setting. |
log_retention_hours | google.protobuf.Int64Value The number of hours to keep a log segment file before deleting it. |
log_retention_minutes | google.protobuf.Int64Value The number of minutes to keep a log segment file before deleting it. If not set, the value of log_retention_hours is used. |
log_retention_ms | google.protobuf.Int64Value The number of milliseconds to keep a log segment file before deleting it. If not set, the value of log_retention_minutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_ms setting. |
log_segment_bytes | google.protobuf.Int64Value The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.segment_bytes setting. |
log_preallocate | google.protobuf.BoolValue Whether to preallocate a file when creating a new segment. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.preallocate setting. |
socket_send_buffer_bytes | google.protobuf.Int64Value The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
socket_receive_buffer_bytes | google.protobuf.Int64Value The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
auto_create_topics_enable | google.protobuf.BoolValue Enables automatic topic creation on the server. |
num_partitions | google.protobuf.Int64Value Default number of partitions per topic on the whole cluster. |
default_replication_factor | google.protobuf.Int64Value Default replication factor of topics on the whole cluster. |
message_max_bytes | google.protobuf.Int64Value The largest record batch size allowed by Kafka. Default value: 1048588. |
replica_fetch_max_bytes | google.protobuf.Int64Value The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
ssl_cipher_suites[] | string A list of cipher suites. |
offsets_retention_minutes | google.protobuf.Int64Value Offset storage time after a consumer group loses all its consumers. Default: 10080. |
sasl_enabled_mechanisms[] | enum SaslMechanism The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
MaintenanceWindow
Field | Description |
---|---|
policy | oneof: anytime or weekly_maintenance_window |
anytime | AnytimeMaintenanceWindow |
weekly_maintenance_window | WeeklyMaintenanceWindow |
AnytimeMaintenanceWindow
Empty.
WeeklyMaintenanceWindow
Field | Description |
---|---|
day | enum WeekDay |
hour | int64 Hour of the day in UTC. Acceptable values are 1 to 24, inclusive. |
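As an illustration, a weekly_maintenance_window message could be built and validated client-side as a plain dict. This is a sketch: field names come from the tables above, and the day value is assumed to be a WeekDay enum name:

```python
def weekly_maintenance_window(day: str, hour: int) -> dict:
    # day is assumed to be a WeekDay enum name such as "MON";
    # hour follows the 1-24 inclusive UTC range from the table above.
    if not 1 <= hour <= 24:
        raise ValueError("hour must be between 1 and 24, inclusive (UTC)")
    return {"weekly_maintenance_window": {"day": day, "hour": hour}}


# Example: maintenance on Mondays at 03:00 UTC.
print(weekly_maintenance_window("MON", 3))
```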
MaintenanceOperation
Field | Description |
---|---|
info | string The maximum string length in characters is 256. |
delayed_until | google.protobuf.Timestamp |
RescheduleMaintenance
Reschedule planned maintenance operation.
rpc RescheduleMaintenance (RescheduleMaintenanceRequest) returns (operation.Operation)
Metadata and response of Operation:
Operation.metadata:RescheduleMaintenanceMetadata
Operation.response:Cluster
RescheduleMaintenanceRequest
Field | Description |
---|---|
cluster_id | string Required. ID of the Kafka cluster to reschedule the maintenance operation for. The maximum string length in characters is 50. |
reschedule_type | enum RescheduleType Required. The type of reschedule request. |
delayed_until | google.protobuf.Timestamp The time until which this maintenance operation should be delayed. The value must be no more than two weeks ahead of the time the maintenance operation was originally scheduled for. The value can also point to a moment in the past if the reschedule_type.IMMEDIATE reschedule type is chosen. |
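The delayed_until constraint can be sketched as a client-side pre-check. This is hypothetical: IMMEDIATE appears in the table above, while SPECIFIC_TIME is an assumed RescheduleType name, and the service performs the authoritative validation:

```python
from datetime import datetime, timedelta


def delayed_until_is_valid(reschedule_type: str, delayed_until: datetime,
                           originally_scheduled: datetime) -> bool:
    # IMMEDIATE may even point to the past; any other type is capped at
    # two weeks past the originally scheduled maintenance time.
    if reschedule_type == "IMMEDIATE":
        return True
    return delayed_until <= originally_scheduled + timedelta(weeks=2)


# Example: delaying by 13 days from the original schedule is allowed.
base = datetime(2024, 1, 1)
print(delayed_until_is_valid("SPECIFIC_TIME", base + timedelta(days=13), base))  # True
```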
Operation
Field | Description |
---|---|
id | string ID of the operation. |
description | string Description of the operation. 0-256 characters long. |
created_at | google.protobuf.Timestamp Creation timestamp. |
created_by | string ID of the user or service account who initiated the operation. |
modified_at | google.protobuf.Timestamp The time when the Operation resource was last modified. |
done | bool If the value is false , it means the operation is still in progress. If true , the operation is completed, and either error or response is available. |
metadata | google.protobuf.Any Service-specific metadata associated with the operation. It typically contains the ID of the target resource that the operation is performed on. Any method that returns a long-running operation should document the metadata type, if any. |
result | oneof: error or response The operation result. If done == false and there was no failure detected, neither error nor response is set. If done == false and there was a failure detected, error is set. If done == true , exactly one of error or response is set. |
error | google.rpc.Status The error result of the operation in case of failure or cancellation. |
response | google.protobuf.Any The normal response of the operation in case of success. If the original method returns no data on success, such as Delete, the response is google.protobuf.Empty. |
RescheduleMaintenanceMetadata
Field | Description |
---|---|
cluster_id | string ID of the Kafka cluster. |
delayed_until | google.protobuf.Timestamp The time until which this maintenance operation is to be delayed. |
Cluster
Field | Description |
---|---|
id | string ID of the Apache Kafka® cluster. This ID is assigned at creation time. |
folder_id | string ID of the folder that the Apache Kafka® cluster belongs to. |
created_at | google.protobuf.Timestamp Creation timestamp. |
name | string Name of the Apache Kafka® cluster. The name must be unique within the folder. 1-63 characters long. Value must match the regular expression [a-zA-Z0-9_-]* . |
description | string Description of the Apache Kafka® cluster. 0-256 characters long. |
labels | map<string,string> Custom labels for the Apache Kafka® cluster as key:value pairs. A maximum of 64 labels per resource is allowed. |
environment | enum Environment Deployment environment of the Apache Kafka® cluster. |
monitoring[] | Monitoring Description of monitoring systems relevant to the Apache Kafka® cluster. |
config | ConfigSpec Configuration of the Apache Kafka® cluster. |
network_id | string ID of the network that the cluster belongs to. |
health | enum Health Aggregated cluster health. |
status | enum Status Current state of the cluster. |
security_group_ids[] | string User security groups. |
host_group_ids[] | string Host groups hosting VMs of the cluster. |
deletion_protection | bool Deletion protection prevents the cluster from being deleted. |
maintenance_window | MaintenanceWindow Window of maintenance operations. |
planned_operation | MaintenanceOperation Scheduled maintenance operation. |
Monitoring
Field | Description |
---|---|
name | string Name of the monitoring system. |
description | string Description of the monitoring system. |
link | string Link to the monitoring system charts for the Apache Kafka® cluster. |
ConfigSpec
Field | Description |
---|---|
version | string Version of Apache Kafka® used in the cluster. Possible values: 2.1 , 2.6 . |
kafka | Kafka Configuration and resource allocation for Kafka brokers. |
zookeeper | Zookeeper Configuration and resource allocation for ZooKeeper hosts. |
zone_id[] | string IDs of availability zones where Kafka brokers reside. |
brokers_count | google.protobuf.Int64Value The number of Kafka brokers deployed in each availability zone. |
assign_public_ip | bool The flag that defines whether a public IP address is assigned to the cluster. If the value is true, the Apache Kafka® cluster is available on the Internet via its public IP address. |
unmanaged_topics | bool Allows topic management via the Admin API. Deprecated: the feature is now enabled permanently. |
schema_registry | bool Enables managed Schema Registry on the cluster. |
access | Access Access policy for external services. |
rest_api_config | RestAPIConfig Configuration of REST API. |
Kafka
Field | Description |
---|---|
resources | Resources Resources allocated to Kafka brokers. |
kafka_config | oneof: kafka_config_2_8 or kafka_config_3 Kafka broker configuration. |
kafka_config_2_8 | KafkaConfig2_8 Kafka broker configuration. |
kafka_config_3 | KafkaConfig3 Kafka broker configuration. |
Zookeeper
Field | Description |
---|---|
resources | Resources Resources allocated to ZooKeeper hosts. |
RestAPIConfig
Field | Description |
---|---|
enabled | bool Whether the REST API is enabled for this cluster. |
Access
Field | Description |
---|---|
data_transfer | bool Allow access for DataTransfer. |
Resources
Field | Description |
---|---|
resource_preset_id | string ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation. |
disk_size | int64 Volume of the storage available to a host, in bytes. Must be greater than 2 * partition segment size in bytes * partitions count, so each partition can have one active segment file and one closed segment file that can be deleted. |
disk_type_id | string Type of the storage environment for the host. |
KafkaConfig2_8
Field | Description |
---|---|
compression_type | enum CompressionType Cluster topics compression type. |
log_flush_interval_messages | google.protobuf.Int64Value The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_messages setting. |
log_flush_interval_ms | google.protobuf.Int64Value The maximum time (in milliseconds) that a message in any topic is kept in memory before flushed to disk. If not set, the value of log_flush_scheduler_interval_ms is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.flush_ms setting. |
log_flush_scheduler_interval_ms | google.protobuf.Int64Value The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher. |
log_retention_bytes | google.protobuf.Int64Value Partition size limit; Kafka will discard old log segments to free up space if delete TopicConfig2_8.cleanup_policy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_bytes setting. |
log_retention_hours | google.protobuf.Int64Value The number of hours to keep a log segment file before deleting it. |
log_retention_minutes | google.protobuf.Int64Value The number of minutes to keep a log segment file before deleting it. If not set, the value of log_retention_hours is used. |
log_retention_ms | google.protobuf.Int64Value The number of milliseconds to keep a log segment file before deleting it. If not set, the value of log_retention_minutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.retention_ms setting. |
log_segment_bytes | google.protobuf.Int64Value The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.segment_bytes setting. |
log_preallocate | google.protobuf.BoolValue Whether to preallocate a file when creating a new segment. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig2_8.preallocate setting. |
socket_send_buffer_bytes | google.protobuf.Int64Value The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
socket_receive_buffer_bytes | google.protobuf.Int64Value The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
auto_create_topics_enable | google.protobuf.BoolValue Enables automatic topic creation on the server. |
num_partitions | google.protobuf.Int64Value Default number of partitions per topic on the whole cluster. |
default_replication_factor | google.protobuf.Int64Value Default replication factor of topics on the whole cluster. |
message_max_bytes | google.protobuf.Int64Value The largest record batch size allowed by Kafka. Default value: 1048588. |
replica_fetch_max_bytes | google.protobuf.Int64Value The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
ssl_cipher_suites[] | string A list of cipher suites. |
offsets_retention_minutes | google.protobuf.Int64Value Offset storage time after a consumer group loses all its consumers. Default: 10080. |
sasl_enabled_mechanisms[] | enum SaslMechanism The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
KafkaConfig3
Field | Description |
---|---|
compression_type | enum CompressionType Cluster topics compression type. |
log_flush_interval_messages | google.protobuf.Int64Value The number of messages accumulated on a log partition before messages are flushed to disk. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_messages setting. |
log_flush_interval_ms | google.protobuf.Int64Value The maximum time (in milliseconds) that a message in any topic is kept in memory before flushed to disk. If not set, the value of log_flush_scheduler_interval_ms is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.flush_ms setting. |
log_flush_scheduler_interval_ms | google.protobuf.Int64Value The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher. |
log_retention_bytes | google.protobuf.Int64Value Partition size limit; Kafka will discard old log segments to free up space if delete TopicConfig3.cleanup_policy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_bytes setting. |
log_retention_hours | google.protobuf.Int64Value The number of hours to keep a log segment file before deleting it. |
log_retention_minutes | google.protobuf.Int64Value The number of minutes to keep a log segment file before deleting it. If not set, the value of log_retention_hours is used. |
log_retention_ms | google.protobuf.Int64Value The number of milliseconds to keep a log segment file before deleting it. If not set, the value of log_retention_minutes is used. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.retention_ms setting. |
log_segment_bytes | google.protobuf.Int64Value The maximum size of a single log file. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.segment_bytes setting. |
log_preallocate | google.protobuf.BoolValue Whether to preallocate a file when creating a new segment. This is the global cluster-level setting that can be overridden on a topic level by using the TopicConfig3.preallocate setting. |
socket_send_buffer_bytes | google.protobuf.Int64Value The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
socket_receive_buffer_bytes | google.protobuf.Int64Value The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. |
auto_create_topics_enable | google.protobuf.BoolValue Enables automatic topic creation on the server. |
num_partitions | google.protobuf.Int64Value Default number of partitions per topic on the whole cluster. |
default_replication_factor | google.protobuf.Int64Value Default replication factor of topics on the whole cluster. |
message_max_bytes | google.protobuf.Int64Value The largest record batch size allowed by Kafka. Default value: 1048588. |
replica_fetch_max_bytes | google.protobuf.Int64Value The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576. |
ssl_cipher_suites[] | string A list of cipher suites. |
offsets_retention_minutes | google.protobuf.Int64Value Offset storage time after a consumer group loses all its consumers. Default: 10080. |
sasl_enabled_mechanisms[] | enum SaslMechanism The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512]. |
MaintenanceWindow
Field | Description |
---|---|
policy | oneof: anytime or weekly_maintenance_window |
anytime | AnytimeMaintenanceWindow |
weekly_maintenance_window | WeeklyMaintenanceWindow |
AnytimeMaintenanceWindow
Empty.
WeeklyMaintenanceWindow
Field | Description |
---|---|
day | enum WeekDay |
hour | int64 Hour of the day in UTC. Acceptable values are 1 to 24, inclusive. |
MaintenanceOperation
Field | Description |
---|---|
info | string The maximum string length in characters is 256. |
delayed_until | google.protobuf.Timestamp |
ListLogs
Retrieves logs for the specified Apache Kafka® cluster.
For more information about logs, see the Logs section in the documentation.
rpc ListLogs (ListClusterLogsRequest) returns (ListClusterLogsResponse)
ListClusterLogsRequest
Field | Description |
---|---|
cluster_id | string Required. ID of the Apache Kafka® cluster to request logs for. To get the Apache Kafka® cluster ID, make a ClusterService.List request. The maximum string length in characters is 50. |
column_filter[] | string Columns from the logs table to request. If no columns are specified, full log records are returned. |
from_time | google.protobuf.Timestamp Start timestamp for the logs request. |
to_time | google.protobuf.Timestamp End timestamp for the logs request. |
page_size | int64 The maximum number of results per page to return. If the number of available results is larger than page_size , the service returns a ListClusterLogsResponse.next_page_token that can be used to get the next page of results in subsequent list requests. The maximum value is 1000. |
page_token | string Page token. To get the next page of results, set page_token to the ListClusterLogsResponse.next_page_token returned by the previous list request. The maximum string length in characters is 100. |
always_next_page_token | bool The flag that defines behavior of providing the next page token. If this flag is set to true , this API method will always return ListClusterLogsResponse.next_page_token, even if current page is empty. |
filter | string A filter expression that filters resources listed in the response. The expression must specify the field to filter by, an operator, and the value. Example of a filter: message.hostname='node1.db.cloud.yandex.net'. The maximum string length in characters is 1000. |
ListClusterLogsResponse
Field | Description |
---|---|
logs[] | LogRecord Requested log records. |
next_page_token | string Token that allows you to get the next page of results for list requests. If the number of results is larger than ListClusterLogsRequest.page_size, use next_page_token as the value for the ListClusterLogsRequest.page_token query parameter in the next list request. Each subsequent list request will have its own next_page_token to continue paging through the results. This value is interchangeable with StreamLogRecord.next_record_token from the StreamLogs method. |
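Paging with page_token/next_page_token can be wrapped in a generator. A sketch, where list_logs stands in for the gRPC ClusterService.ListLogs call (hypothetical signature) and responses are modeled as plain dicts:

```python
def iter_log_records(list_logs, cluster_id: str, page_size: int = 1000):
    # Follow next_page_token until the service returns an empty token.
    page_token = ""
    while True:
        resp = list_logs(cluster_id=cluster_id, page_size=page_size,
                         page_token=page_token)
        yield from resp["logs"]
        page_token = resp.get("next_page_token", "")
        if not page_token:
            return


# Demo with two fake pages keyed by page token.
_pages = {
    "": {"logs": [{"message": {"hostname": "a"}}], "next_page_token": "t1"},
    "t1": {"logs": [{"message": {"hostname": "b"}}], "next_page_token": ""},
}
records = list(iter_log_records(
    lambda cluster_id, page_size, page_token: _pages[page_token], "c1"))
print(len(records))  # 2
```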
LogRecord
Field | Description |
---|---|
timestamp | google.protobuf.Timestamp Log record timestamp. |
message | map<string,string> Contents of the log record. |
StreamLogs
Same as ListLogs but using server-side streaming. Also allows for tail -f semantics.
rpc StreamLogs (StreamClusterLogsRequest) returns (stream StreamLogRecord)
StreamClusterLogsRequest
Field | Description |
---|---|
cluster_id | string Required. ID of the Apache Kafka® cluster. To get the Apache Kafka® cluster ID, make a ClusterService.List request. The maximum string length in characters is 50. |
column_filter[] | string Columns from logs table to get in the response. If no columns are specified, full log records are returned. |
from_time | google.protobuf.Timestamp Start timestamp for the logs request. |
to_time | google.protobuf.Timestamp End timestamp for the logs request. If this field is not set, all existing logs will be sent and then the new ones as they appear. In essence it has tail -f semantics. |
record_token | string Record token. Set record_token to the StreamLogRecord.next_record_token returned by a previous ClusterService.StreamLogs request to start streaming from next log record. The maximum string length in characters is 100. |
filter | string A filter expression that filters resources listed in the response. The expression must specify the field to filter by, an operator, and the value. Example of a filter: message.hostname='node1.db.cloud.yandex.net'. The maximum string length in characters is 1000. |
StreamLogRecord
Field | Description |
---|---|
record | LogRecord One of the requested log records. |
next_record_token | string This token allows you to continue streaming logs starting from the exact same record. To continue streaming, specify the value of next_record_token as the value of the StreamClusterLogsRequest.record_token parameter in the next StreamLogs request. This value is interchangeable with ListClusterLogsResponse.next_page_token from the ListLogs method. |
LogRecord
Field | Description |
---|---|
timestamp | google.protobuf.Timestamp Log record timestamp. |
message | map<string,string> Contents of the log record. |
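Resuming a dropped stream via next_record_token can be sketched as follows. Here stream_logs stands in for the gRPC ClusterService.StreamLogs call, and messages are modeled as dicts shaped like StreamLogRecord; both are assumptions for illustration:

```python
def consume_stream(stream_logs, cluster_id: str, handle, record_token: str = ""):
    # Remember the last next_record_token so a dropped stream can be
    # resumed from the exact same record.
    for msg in stream_logs(cluster_id=cluster_id, record_token=record_token):
        handle(msg["record"])
        record_token = msg["next_record_token"]
    return record_token  # feed back in as record_token to resume


# Demo with a fake two-record stream.
def _fake_stream(cluster_id, record_token):
    yield {"record": {"message": {"hostname": "a"}}, "next_record_token": "r1"}
    yield {"record": {"message": {"hostname": "b"}}, "next_record_token": "r2"}

seen = []
token = consume_stream(_fake_stream, "c1", seen.append)
print(token)  # r2
```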
ListOperations
Retrieves the list of operations for the specified Apache Kafka® cluster.
rpc ListOperations (ListClusterOperationsRequest) returns (ListClusterOperationsResponse)
ListClusterOperationsRequest
Field | Description |
---|---|
cluster_id | string Required. ID of the Apache Kafka® cluster to list operations for. The maximum string length in characters is 50. |
page_size | int64 The maximum number of results per page to return. If the number of available results is larger than page_size , the service returns a ListClusterOperationsResponse.next_page_token that can be used to get the next page of results in subsequent list requests. The maximum value is 1000. |
page_token | string Page token. To get the next page of results, set page_token to the ListClusterOperationsResponse.next_page_token returned by the previous list request. The maximum string length in characters is 100. |
ListClusterOperationsResponse
Field | Description |
---|---|
operations[] | operation.Operation List of operations for the specified Apache Kafka® cluster. |
next_page_token | string Token that allows you to get the next page of results for list requests. If the number of results is larger than ListClusterOperationsRequest.page_size, use next_page_token as the value for the ListClusterOperationsRequest.page_token query parameter in the next list request. Each subsequent list request will have its own next_page_token to continue paging through the results. |
Operation
Field | Description |
---|---|
id | string ID of the operation. |
description | string Description of the operation. 0-256 characters long. |
created_at | google.protobuf.Timestamp Creation timestamp. |
created_by | string ID of the user or service account who initiated the operation. |
modified_at | google.protobuf.Timestamp The time when the Operation resource was last modified. |
done | bool If the value is false , it means the operation is still in progress. If true , the operation is completed, and either error or response is available. |
metadata | google.protobuf.Any Service-specific metadata associated with the operation. It typically contains the ID of the target resource that the operation is performed on. Any method that returns a long-running operation should document the metadata type, if any. |
result | oneof: error or response The operation result. If done == false and there was no failure detected, neither error nor response is set. If done == false and there was a failure detected, error is set. If done == true , exactly one of error or response is set. |
error | google.rpc.Status The error result of the operation in case of failure or cancellation. |
response | google.protobuf.Any The normal response of the operation in case of success. If the original method returns no data on success, such as Delete, the response is google.protobuf.Empty |
ListHosts
Retrieves a list of hosts for the specified Apache Kafka® cluster.
rpc ListHosts (ListClusterHostsRequest) returns (ListClusterHostsResponse)
ListClusterHostsRequest
Field | Description |
---|---|
cluster_id | string Required. ID of the Apache Kafka® cluster. To get the Apache Kafka® cluster ID, make a ClusterService.List request. The maximum string length in characters is 50. |
page_size | int64 The maximum number of results per page to return. If the number of available results is larger than page_size , the service returns a ListClusterHostsResponse.next_page_token that can be used to get the next page of results in subsequent list requests. The maximum value is 1000. |
page_token | string Page token. To get the next page of results, set page_token to the ListClusterHostsResponse.next_page_token returned by the previous list request. The maximum string length in characters is 100. |
ListClusterHostsResponse
Field | Description |
---|---|
hosts[] | Host List of hosts. |
next_page_token | string Token that allows you to get the next page of results for list requests. If the number of results is larger than ListClusterHostsRequest.page_size, use the next_page_token as the value for the ListClusterHostsRequest.page_token query parameter in the next list request. Each subsequent list request will have its own next_page_token to continue paging through the results. |
Host
Field | Description |
---|---|
name | string Name of the host. |
cluster_id | string ID of the Apache Kafka® cluster. |
zone_id | string ID of the availability zone where the host resides. |
role | enum Role Host role. |
resources | Resources Computational resources allocated to the host. |
health | enum Health Aggregated host health data. |
subnet_id | string ID of the subnet the host resides in. |
assign_public_ip | bool The flag that defines whether a public IP address is assigned to the node. If the value is true, this node is available on the Internet via its public IP address. |
Resources
Field | Description |
---|---|
resource_preset_id | string ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation. |
disk_size | int64 Volume of the storage available to a host, in bytes. Must be greater than 2 * partition segment size in bytes * partitions count, so each partition can have one active segment file and one closed segment file that can be deleted. |
disk_type_id | string Type of the storage environment for the host. |