
Managed Service for Apache Kafka® API, REST: Cluster.update

  • HTTP request
  • Path parameters
  • Body parameters
  • Response

Updates the specified Apache Kafka® cluster.

HTTP request

PATCH https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/{clusterId}

Path parameters

Parameter Description
clusterId

Required. ID of the Apache Kafka® cluster to update.

To get the Apache Kafka® cluster ID, make a list request.

The maximum string length in characters is 50.
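
A quick way to look the ID up is the list method of the same service. The sketch below is illustrative only: the IAM token and folder ID come from environment variables, which are placeholders you would set yourself.

import os

import requests  # third-party HTTP client

IAM_TOKEN = os.environ["IAM_TOKEN"]  # assumed placeholder
FOLDER_ID = os.environ["FOLDER_ID"]  # assumed placeholder

# List the Apache Kafka® clusters in the folder and print their IDs.
resp = requests.get(
    "https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters",
    params={"folderId": FOLDER_ID},
    headers={"Authorization": f"Bearer {IAM_TOKEN}"},
)
resp.raise_for_status()
for cluster in resp.json().get("clusters", []):
    print(cluster["id"], cluster["name"])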

Body parameters

{
  "updateMask": "string",
  "description": "string",
  "labels": "object",
  "configSpec": {
    "version": "string",
    "kafka": {
      "resources": {
        "resourcePresetId": "string",
        "diskSize": "string",
        "diskTypeId": "string"
      },

      // `configSpec.kafka` includes only one of the fields `kafkaConfig_2_1`, `kafkaConfig_2_6`, `kafkaConfig_2_8`, `kafkaConfig_3`
      "kafkaConfig_2_1": {
        "compressionType": "string",
        "logFlushIntervalMessages": "integer",
        "logFlushIntervalMs": "integer",
        "logFlushSchedulerIntervalMs": "integer",
        "logRetentionBytes": "integer",
        "logRetentionHours": "integer",
        "logRetentionMinutes": "integer",
        "logRetentionMs": "integer",
        "logSegmentBytes": "integer",
        "logPreallocate": true,
        "socketSendBufferBytes": "integer",
        "socketReceiveBufferBytes": "integer",
        "autoCreateTopicsEnable": true,
        "numPartitions": "integer",
        "defaultReplicationFactor": "integer",
        "messageMaxBytes": "integer",
        "replicaFetchMaxBytes": "integer",
        "sslCipherSuites": [
          "string"
        ],
        "offsetsRetentionMinutes": "integer"
      },
      "kafkaConfig_2_6": {
        "compressionType": "string",
        "logFlushIntervalMessages": "integer",
        "logFlushIntervalMs": "integer",
        "logFlushSchedulerIntervalMs": "integer",
        "logRetentionBytes": "integer",
        "logRetentionHours": "integer",
        "logRetentionMinutes": "integer",
        "logRetentionMs": "integer",
        "logSegmentBytes": "integer",
        "logPreallocate": true,
        "socketSendBufferBytes": "integer",
        "socketReceiveBufferBytes": "integer",
        "autoCreateTopicsEnable": true,
        "numPartitions": "integer",
        "defaultReplicationFactor": "integer",
        "messageMaxBytes": "integer",
        "replicaFetchMaxBytes": "integer",
        "sslCipherSuites": [
          "string"
        ],
        "offsetsRetentionMinutes": "integer"
      },
      "kafkaConfig_2_8": {
        "compressionType": "string",
        "logFlushIntervalMessages": "integer",
        "logFlushIntervalMs": "integer",
        "logFlushSchedulerIntervalMs": "integer",
        "logRetentionBytes": "integer",
        "logRetentionHours": "integer",
        "logRetentionMinutes": "integer",
        "logRetentionMs": "integer",
        "logSegmentBytes": "integer",
        "logPreallocate": true,
        "socketSendBufferBytes": "integer",
        "socketReceiveBufferBytes": "integer",
        "autoCreateTopicsEnable": true,
        "numPartitions": "integer",
        "defaultReplicationFactor": "integer",
        "messageMaxBytes": "integer",
        "replicaFetchMaxBytes": "integer",
        "sslCipherSuites": [
          "string"
        ],
        "offsetsRetentionMinutes": "integer",
        "saslEnabledMechanisms": [
          "string"
        ]
      },
      "kafkaConfig_3": {
        "compressionType": "string",
        "logFlushIntervalMessages": "integer",
        "logFlushIntervalMs": "integer",
        "logFlushSchedulerIntervalMs": "integer",
        "logRetentionBytes": "integer",
        "logRetentionHours": "integer",
        "logRetentionMinutes": "integer",
        "logRetentionMs": "integer",
        "logSegmentBytes": "integer",
        "logPreallocate": true,
        "socketSendBufferBytes": "integer",
        "socketReceiveBufferBytes": "integer",
        "autoCreateTopicsEnable": true,
        "numPartitions": "integer",
        "defaultReplicationFactor": "integer",
        "messageMaxBytes": "integer",
        "replicaFetchMaxBytes": "integer",
        "sslCipherSuites": [
          "string"
        ],
        "offsetsRetentionMinutes": "integer",
        "saslEnabledMechanisms": [
          "string"
        ]
      },
      // end of the list of possible fields `configSpec.kafka`

    },
    "zookeeper": {
      "resources": {
        "resourcePresetId": "string",
        "diskSize": "string",
        "diskTypeId": "string"
      }
    },
    "zoneId": [
      "string"
    ],
    "brokersCount": "integer",
    "assignPublicIp": true,
    "unmanagedTopics": true,
    "schemaRegistry": true,
    "access": {
      "dataTransfer": true
    }
  },
  "name": "string",
  "securityGroupIds": [
    "string"
  ],
  "deletionProtection": true,
  "maintenanceWindow": {

    // `maintenanceWindow` includes only one of the fields `anytime`, `weeklyMaintenanceWindow`
    "anytime": {},
    "weeklyMaintenanceWindow": {
      "day": "string",
      "hour": "string"
    },
    // end of the list of possible fields `maintenanceWindow`

  }
}
Field Description
updateMask string

A comma-separated list of the names of ALL fields to be updated. Only the specified fields will be changed; the others will be left untouched. If a field is listed in updateMask but no value for it is sent in the request, that field's value will be reset to the default. The default value for most fields is null or 0.

If updateMask is not sent in the request, all fields' values will be updated: fields specified in the request will be set to the provided values, and the rest will be reset to their defaults.
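
To make the masking behavior concrete, here is a minimal sketch that changes only the cluster description; the token and cluster ID are placeholders, and updateMask lists exactly the one field being sent:

import os

import requests

IAM_TOKEN = os.environ["IAM_TOKEN"]  # assumed placeholder
CLUSTER_ID = "c9q0example"           # hypothetical cluster ID

resp = requests.patch(
    f"https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters/{CLUSTER_ID}",
    headers={"Authorization": f"Bearer {IAM_TOKEN}"},
    json={
        "updateMask": "description",             # only this field is changed
        "description": "Production events bus",  # new value
    },
)
resp.raise_for_status()
print(resp.json()["id"])  # ID of the returned Operation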

description string

New description of the Apache Kafka® cluster.

The maximum string length in characters is 256.

labels object

Custom labels for the Apache Kafka® cluster as key:value pairs.

For example, "project": "mvp" or "source": "dictionary".

The new set of labels will completely replace the old ones. To add a label, request the current set with the get method, then send an update request with the new label added to the set.

No more than 64 per resource. The string length in characters for each key must be 1-63. Each key must match the regular expression [a-z][-_0-9a-z]*. The maximum string length in characters for each value is 63. Each value must match the regular expression [-_0-9a-z]*.
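
The read-modify-write flow described above can be sketched as follows (same placeholder token and cluster ID as in the earlier examples):

import os

import requests

IAM_TOKEN = os.environ["IAM_TOKEN"]  # assumed placeholder
CLUSTER_ID = "c9q0example"           # hypothetical cluster ID
BASE_URL = "https://mdb.api.cloud.yandex.net/managed-kafka/v1/clusters"
HEADERS = {"Authorization": f"Bearer {IAM_TOKEN}"}

# 1. Fetch the current label set with the get method.
cluster = requests.get(f"{BASE_URL}/{CLUSTER_ID}", headers=HEADERS).json()
labels = cluster.get("labels", {})

# 2. Add the new label locally.
labels["source"] = "dictionary"

# 3. Send the complete set back; it replaces the old set entirely.
resp = requests.patch(
    f"{BASE_URL}/{CLUSTER_ID}",
    headers=HEADERS,
    json={"updateMask": "labels", "labels": labels},
)
resp.raise_for_status()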

configSpec object

New configuration and resources for hosts in the Apache Kafka® cluster.

Use updateMask to prevent reverting all cluster settings that are not listed in configSpec to their default values.

configSpec.version string

Version of Apache Kafka® used in the cluster. Versions 2.1 and 2.6 are deprecated and no longer supported; see the kafkaConfig_2_8 and kafkaConfig_3 fields below for the supported 2.8 and 3.x configurations.

configSpec.kafka object

Configuration and resource allocation for Kafka brokers.

configSpec.kafka.resources object

Resources allocated to Kafka brokers.
configSpec.kafka.resources.resourcePresetId string

ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation.

configSpec.kafka.resources.diskSize string (int64)

Volume of the storage available to a host, in bytes. Must be greater than 2 * partition segment size in bytes * partitions count, so each partition can have one active segment file and one closed segment file that can be deleted.
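
For example, assuming Kafka's stock 1 GiB segment size (logSegmentBytes = 1073741824) and 30 partitions hosted on a broker, the disk would have to be larger than 2 × 1 GiB × 30 = 60 GiB.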

configSpec.kafka.resources.diskTypeId string

Type of the storage environment for the host.

configSpec.kafka.kafkaConfig_2_1 object

configSpec.kafka includes only one of the fields kafkaConfig_2_1, kafkaConfig_2_6, kafkaConfig_2_8, kafkaConfig_3

Deprecated. Kafka version 2.1 is not supported in Yandex Cloud.

configSpec.kafka.kafkaConfig_2_1.compressionType string

Cluster topics compression type.

  • COMPRESSION_TYPE_UNCOMPRESSED: no codec (uncompressed).
  • COMPRESSION_TYPE_ZSTD: Zstandard codec.
  • COMPRESSION_TYPE_LZ4: LZ4 codec.
  • COMPRESSION_TYPE_SNAPPY: Snappy codec.
  • COMPRESSION_TYPE_GZIP: GZip codec.
  • COMPRESSION_TYPE_PRODUCER: the codec to use is set by a producer (can be any of the ZSTD, LZ4, GZIP, or SNAPPY codecs).

configSpec.kafka.kafkaConfig_2_1.logFlushIntervalMessages integer (int64)

The number of messages accumulated on a log partition before messages are flushed to disk.

This is a global cluster-level setting that can be overridden at the topic level by using the flushMessages setting.

configSpec.kafka.kafkaConfig_2_1.logFlushIntervalMs integer (int64)

The maximum time (in milliseconds) that a message in any topic is kept in memory before being flushed to disk. If not set, the value of logFlushSchedulerIntervalMs is used.

This is a global cluster-level setting that can be overridden at the topic level by using the flushMs setting.

configSpec.kafka.kafkaConfig_2_1.logFlushSchedulerIntervalMs integer (int64)

The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher.

configSpec.kafka.kafkaConfig_2_1.logRetentionBytes integer (int64)

Partition size limit; Kafka will discard old log segments to free up space if the delete cleanupPolicy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space.

This is a global cluster-level setting that can be overridden at the topic level by using the retentionBytes setting.

configSpec.kafka.kafkaConfig_2_1.logRetentionHours integer (int64)

The number of hours to keep a log segment file before deleting it.

configSpec.kafka.kafkaConfig_2_1.logRetentionMinutes integer (int64)

The number of minutes to keep a log segment file before deleting it.

If not set, the value of logRetentionHours is used.

configSpec.kafka.kafkaConfig_2_1.logRetentionMs integer (int64)

The number of milliseconds to keep a log segment file before deleting it.

If not set, the value of logRetentionMinutes is used.

This is a global cluster-level setting that can be overridden at the topic level by using the retentionMs setting.

configSpec.kafka.kafkaConfig_2_1.logSegmentBytes integer (int64)

The maximum size of a single log file.

This is a global cluster-level setting that can be overridden at the topic level by using the segmentBytes setting.

configSpec.kafka.kafkaConfig_2_1.logPreallocate boolean (boolean)

Whether to preallocate the file when creating a new segment.

This is a global cluster-level setting that can be overridden at the topic level by using the preallocate setting.

configSpec.kafka.kafkaConfig_2_1.socketSendBufferBytes integer (int64)

The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.

configSpec.kafka.kafkaConfig_2_1.socketReceiveBufferBytes integer (int64)

The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.

configSpec.kafka.kafkaConfig_2_1.autoCreateTopicsEnable boolean (boolean)

Enables automatic topic creation on the server.

configSpec.kafka.kafkaConfig_2_1.numPartitions integer (int64)

Default number of partitions per topic across the whole cluster.

configSpec.kafka.kafkaConfig_2_1.defaultReplicationFactor integer (int64)

Default replication factor for topics across the whole cluster.

configSpec.kafka.kafkaConfig_2_1.messageMaxBytes integer (int64)

The largest record batch size allowed by Kafka. Default value: 1048588.

configSpec.kafka.kafkaConfig_2_1.replicaFetchMaxBytes integer (int64)

The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576.

configSpec.kafka.kafkaConfig_2_1.sslCipherSuites[] string

A list of cipher suites.

configSpec.kafka.kafkaConfig_2_1.offsetsRetentionMinutes integer (int64)

Offset storage time after a consumer group loses all its consumers. Default: 10080.

configSpec.kafka.kafkaConfig_2_6 object

configSpec.kafka includes only one of the fields kafkaConfig_2_1, kafkaConfig_2_6, kafkaConfig_2_8, kafkaConfig_3

Deprecated. Kafka version 2.6 is not supported in Yandex Cloud.

configSpec.kafka.kafkaConfig_2_6.compressionType string

Cluster topics compression type.

  • COMPRESSION_TYPE_UNCOMPRESSED: no codec (uncompressed).
  • COMPRESSION_TYPE_ZSTD: Zstandard codec.
  • COMPRESSION_TYPE_LZ4: LZ4 codec.
  • COMPRESSION_TYPE_SNAPPY: Snappy codec.
  • COMPRESSION_TYPE_GZIP: GZip codec.
  • COMPRESSION_TYPE_PRODUCER: the codec to use is set by a producer (can be any of the ZSTD, LZ4, GZIP, or SNAPPY codecs).

configSpec.kafka.kafkaConfig_2_6.logFlushIntervalMessages integer (int64)

The number of messages accumulated on a log partition before messages are flushed to disk.

This is a global cluster-level setting that can be overridden at the topic level by using the flushMessages setting.

configSpec.kafka.kafkaConfig_2_6.logFlushIntervalMs integer (int64)

The maximum time (in milliseconds) that a message in any topic is kept in memory before being flushed to disk. If not set, the value of logFlushSchedulerIntervalMs is used.

This is a global cluster-level setting that can be overridden at the topic level by using the flushMs setting.

configSpec.kafka.kafkaConfig_2_6.logFlushSchedulerIntervalMs integer (int64)

The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher.

configSpec.kafka.kafkaConfig_2_6.logRetentionBytes integer (int64)

Partition size limit; Kafka will discard old log segments to free up space if the delete cleanupPolicy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space.

This is a global cluster-level setting that can be overridden at the topic level by using the retentionBytes setting.

configSpec.kafka.kafkaConfig_2_6.logRetentionHours integer (int64)

The number of hours to keep a log segment file before deleting it.

configSpec.kafka.kafkaConfig_2_6.logRetentionMinutes integer (int64)

The number of minutes to keep a log segment file before deleting it.

If not set, the value of logRetentionHours is used.

configSpec.kafka.kafkaConfig_2_6.logRetentionMs integer (int64)

The number of milliseconds to keep a log segment file before deleting it.

If not set, the value of logRetentionMinutes is used.

This is a global cluster-level setting that can be overridden at the topic level by using the retentionMs setting.

configSpec.kafka.kafkaConfig_2_6.logSegmentBytes integer (int64)

The maximum size of a single log file.

This is a global cluster-level setting that can be overridden at the topic level by using the segmentBytes setting.

configSpec.kafka.kafkaConfig_2_6.logPreallocate boolean (boolean)

Whether to preallocate the file when creating a new segment.

This is a global cluster-level setting that can be overridden at the topic level by using the preallocate setting.

configSpec.kafka.kafkaConfig_2_6.socketSendBufferBytes integer (int64)

The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.

configSpec.kafka.kafkaConfig_2_6.socketReceiveBufferBytes integer (int64)

The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.

configSpec.kafka.kafkaConfig_2_6.autoCreateTopicsEnable boolean (boolean)

Enables automatic topic creation on the server.

configSpec.kafka.kafkaConfig_2_6.numPartitions integer (int64)

Default number of partitions per topic across the whole cluster.

configSpec.kafka.kafkaConfig_2_6.defaultReplicationFactor integer (int64)

Default replication factor for topics across the whole cluster.

configSpec.kafka.kafkaConfig_2_6.messageMaxBytes integer (int64)

The largest record batch size allowed by Kafka. Default value: 1048588.

configSpec.kafka.kafkaConfig_2_6.replicaFetchMaxBytes integer (int64)

The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576.

configSpec.kafka.kafkaConfig_2_6.sslCipherSuites[] string

A list of cipher suites.

configSpec.kafka.kafkaConfig_2_6.offsetsRetentionMinutes integer (int64)

Offset storage time after a consumer group loses all its consumers. Default: 10080.

configSpec.kafka.kafkaConfig_2_8 object

configSpec.kafka includes only one of the fields kafkaConfig_2_1, kafkaConfig_2_6, kafkaConfig_2_8, kafkaConfig_3

Kafka version 2.8 broker configuration.

configSpec.kafka.kafkaConfig_2_8.compressionType string

Cluster topics compression type.

  • COMPRESSION_TYPE_UNCOMPRESSED: no codec (uncompressed).
  • COMPRESSION_TYPE_ZSTD: Zstandard codec.
  • COMPRESSION_TYPE_LZ4: LZ4 codec.
  • COMPRESSION_TYPE_SNAPPY: Snappy codec.
  • COMPRESSION_TYPE_GZIP: GZip codec.
  • COMPRESSION_TYPE_PRODUCER: the codec to use is set by a producer (can be any of the ZSTD, LZ4, GZIP, or SNAPPY codecs).

configSpec.kafka.kafkaConfig_2_8.logFlushIntervalMessages integer (int64)

The number of messages accumulated on a log partition before messages are flushed to disk.

This is a global cluster-level setting that can be overridden at the topic level by using the flushMessages setting.

configSpec.kafka.kafkaConfig_2_8.logFlushIntervalMs integer (int64)

The maximum time (in milliseconds) that a message in any topic is kept in memory before being flushed to disk. If not set, the value of logFlushSchedulerIntervalMs is used.

This is a global cluster-level setting that can be overridden at the topic level by using the flushMs setting.

configSpec.kafka.kafkaConfig_2_8.logFlushSchedulerIntervalMs integer (int64)

The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher.

configSpec.kafka.kafkaConfig_2_8.logRetentionBytes integer (int64)

Partition size limit; Kafka will discard old log segments to free up space if the delete cleanupPolicy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space.

This is a global cluster-level setting that can be overridden at the topic level by using the retentionBytes setting.

configSpec.kafka.kafkaConfig_2_8.logRetentionHours integer (int64)

The number of hours to keep a log segment file before deleting it.

configSpec.kafka.kafkaConfig_2_8.logRetentionMinutes integer (int64)

The number of minutes to keep a log segment file before deleting it.

If not set, the value of logRetentionHours is used.

configSpec.kafka.kafkaConfig_2_8.logRetentionMs integer (int64)

The number of milliseconds to keep a log segment file before deleting it.

If not set, the value of logRetentionMinutes is used.

This is a global cluster-level setting that can be overridden at the topic level by using the retentionMs setting.

configSpec.kafka.kafkaConfig_2_8.logSegmentBytes integer (int64)

The maximum size of a single log file.

This is a global cluster-level setting that can be overridden at the topic level by using the segmentBytes setting.

configSpec.kafka.kafkaConfig_2_8.logPreallocate boolean (boolean)

Whether to preallocate the file when creating a new segment.

This is a global cluster-level setting that can be overridden at the topic level by using the preallocate setting.

configSpec.kafka.kafkaConfig_2_8.socketSendBufferBytes integer (int64)

The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.

configSpec.kafka.kafkaConfig_2_8.socketReceiveBufferBytes integer (int64)

The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.

configSpec.kafka.kafkaConfig_2_8.autoCreateTopicsEnable boolean (boolean)

Enables automatic topic creation on the server.

configSpec.kafka.kafkaConfig_2_8.numPartitions integer (int64)

Default number of partitions per topic across the whole cluster.

configSpec.kafka.kafkaConfig_2_8.defaultReplicationFactor integer (int64)

Default replication factor for topics across the whole cluster.

configSpec.kafka.kafkaConfig_2_8.messageMaxBytes integer (int64)

The largest record batch size allowed by Kafka. Default value: 1048588.

configSpec.kafka.kafkaConfig_2_8.replicaFetchMaxBytes integer (int64)

The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576.

configSpec.kafka.kafkaConfig_2_8.sslCipherSuites[] string

A list of cipher suites.

configSpec.kafka.kafkaConfig_2_8.offsetsRetentionMinutes integer (int64)

Offset storage time after a consumer group loses all its consumers. Default: 10080.

configSpec.kafka.kafkaConfig_2_8.saslEnabledMechanisms[] string

The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512].

configSpec.kafka.kafkaConfig_3 object

configSpec.kafka includes only one of the fields kafkaConfig_2_1, kafkaConfig_2_6, kafkaConfig_2_8, kafkaConfig_3

Kafka version 3.x broker configuration.

configSpec.kafka.kafkaConfig_3.compressionType string

Cluster topics compression type.

  • COMPRESSION_TYPE_UNCOMPRESSED: no codec (uncompressed).
  • COMPRESSION_TYPE_ZSTD: Zstandard codec.
  • COMPRESSION_TYPE_LZ4: LZ4 codec.
  • COMPRESSION_TYPE_SNAPPY: Snappy codec.
  • COMPRESSION_TYPE_GZIP: GZip codec.
  • COMPRESSION_TYPE_PRODUCER: the codec to use is set by a producer (can be any of the ZSTD, LZ4, GZIP, or SNAPPY codecs).

configSpec.kafka.kafkaConfig_3.logFlushIntervalMessages integer (int64)

The number of messages accumulated on a log partition before messages are flushed to disk.

This is a global cluster-level setting that can be overridden at the topic level by using the flushMessages setting.

configSpec.kafka.kafkaConfig_3.logFlushIntervalMs integer (int64)

The maximum time (in milliseconds) that a message in any topic is kept in memory before being flushed to disk. If not set, the value of logFlushSchedulerIntervalMs is used.

This is a global cluster-level setting that can be overridden at the topic level by using the flushMs setting.

configSpec.kafka.kafkaConfig_3.logFlushSchedulerIntervalMs integer (int64)

The frequency of checks (in milliseconds) for any logs that need to be flushed to disk. This check is done by the log flusher.

configSpec.kafka.kafkaConfig_3.logRetentionBytes integer (int64)

Partition size limit; Kafka will discard old log segments to free up space if the delete cleanupPolicy is in effect. This setting is helpful if you need to control the size of a log due to limited disk space.

This is a global cluster-level setting that can be overridden at the topic level by using the retentionBytes setting.

configSpec.kafka.kafkaConfig_3.logRetentionHours integer (int64)

The number of hours to keep a log segment file before deleting it.

configSpec.kafka.kafkaConfig_3.logRetentionMinutes integer (int64)

The number of minutes to keep a log segment file before deleting it.

If not set, the value of logRetentionHours is used.

configSpec.kafka.kafkaConfig_3.logRetentionMs integer (int64)

The number of milliseconds to keep a log segment file before deleting it.

If not set, the value of logRetentionMinutes is used.

This is a global cluster-level setting that can be overridden at the topic level by using the retentionMs setting.

configSpec.kafka.kafkaConfig_3.logSegmentBytes integer (int64)

The maximum size of a single log file.

This is a global cluster-level setting that can be overridden at the topic level by using the segmentBytes setting.

configSpec.kafka.kafkaConfig_3.logPreallocate boolean (boolean)

Whether to preallocate the file when creating a new segment.

This is a global cluster-level setting that can be overridden at the topic level by using the preallocate setting.

configSpec.kafka.kafkaConfig_3.socketSendBufferBytes integer (int64)

The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.

configSpec.kafka.kafkaConfig_3.socketReceiveBufferBytes integer (int64)

The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.

configSpec.kafka.kafkaConfig_3.autoCreateTopicsEnable boolean (boolean)

Enables automatic topic creation on the server.

configSpec.kafka.kafkaConfig_3.numPartitions integer (int64)

Default number of partitions per topic across the whole cluster.

configSpec.kafka.kafkaConfig_3.defaultReplicationFactor integer (int64)

Default replication factor for topics across the whole cluster.

configSpec.kafka.kafkaConfig_3.messageMaxBytes integer (int64)

The largest record batch size allowed by Kafka. Default value: 1048588.

configSpec.kafka.kafkaConfig_3.replicaFetchMaxBytes integer (int64)

The number of bytes of messages to attempt to fetch for each partition. Default value: 1048576.

configSpec.kafka.kafkaConfig_3.sslCipherSuites[] string

A list of cipher suites.

configSpec.kafka.kafkaConfig_3.offsetsRetentionMinutes integer (int64)

Offset storage time after a consumer group loses all its consumers. Default: 10080.

configSpec.kafka.kafkaConfig_3.saslEnabledMechanisms[] string

The list of SASL mechanisms enabled in the Kafka server. Default: [SCRAM_SHA_512].
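
To show how one of these settings travels in an update request, the body below (a sketch with arbitrary values) caps per-partition retention at 50 GiB (2 × 1073741824 × 25 = 53687091200 bytes) on a 3.x cluster and masks only that one field, leaving everything else untouched:

{
  "updateMask": "configSpec.kafka.kafkaConfig_3.logRetentionBytes",
  "configSpec": {
    "kafka": {
      "kafkaConfig_3": {
        "logRetentionBytes": 53687091200
      }
    }
  }
}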

configSpec.zookeeper object

Configuration and resource allocation for ZooKeeper hosts.

configSpec.zookeeper.resources object

Resources allocated to ZooKeeper hosts.

configSpec.zookeeper.resources.resourcePresetId string

ID of the preset for computational resources available to a host (CPU, memory, etc.). All available presets are listed in the documentation.

configSpec.zookeeper.resources.diskSize string (int64)

Volume of the storage available to a host, in bytes. Must be greater than 2 * partition segment size in bytes * partitions count, so each partition can have one active segment file and one closed segment file that can be deleted.

configSpec.zookeeper.resources.diskTypeId string

Type of the storage environment for the host.

configSpec.zoneId[] string

IDs of availability zones where Kafka brokers reside.

configSpec.brokersCount integer (int64)

The number of Kafka brokers deployed in each availability zone.
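
For example, with three availability zones listed in zoneId and brokersCount set to 2, the cluster runs 2 × 3 = 6 brokers in total.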

configSpec.assignPublicIp boolean (boolean)

The flag that defines whether a public IP address is assigned to the cluster. If the value is true, the Apache Kafka® cluster is available on the Internet via its public IP address.

configSpec.unmanagedTopics boolean (boolean)

Allows topic management via the Admin API.

configSpec.schemaRegistry boolean (boolean)

Enables the managed schema registry on the cluster.

configSpec.access object

Access policy for external services.

configSpec.access.dataTransfer boolean (boolean)

Allows access for Yandex Data Transfer.

name string

New name for the Apache Kafka® cluster.

The maximum string length in characters is 63. Value must match the regular expression [a-zA-Z0-9_-]*.

securityGroupIds[] string

User security groups.

deletionProtection boolean (boolean)

Deletion protection prevents the cluster from being deleted.

maintenanceWindow object

New maintenance window settings for the cluster.

maintenanceWindow.anytime object

maintenanceWindow includes only one of the fields anytime, weeklyMaintenanceWindow

maintenanceWindow.weeklyMaintenanceWindow object

maintenanceWindow includes only one of the fields anytime, weeklyMaintenanceWindow

maintenanceWindow.weeklyMaintenanceWindow.day string

maintenanceWindow.weeklyMaintenanceWindow.hour string (int64)

Hour of the day in UTC.

Acceptable values are 1 to 24, inclusive.
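
As an illustration, a weekly window that allows maintenance on Mondays during the hour ending at 03:00 UTC could look like the fragment below; the day value MON is an assumption about the enum spelling, not confirmed by this page:

{
  "maintenanceWindow": {
    "weeklyMaintenanceWindow": {
      "day": "MON",
      "hour": "3"
    }
  }
}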

Response

HTTP Code: 200 - OK

{
  "id": "string",
  "description": "string",
  "createdAt": "string",
  "createdBy": "string",
  "modifiedAt": "string",
  "done": true,
  "metadata": "object",

  // includes only one of the fields `error`, `response`
  "error": {
    "code": "integer",
    "message": "string",
    "details": [
      "object"
    ]
  },
  "response": "object",
  // end of the list of possible fields

}

An Operation resource. For more information, see Operation.

Field Description
id string

ID of the operation.

description string

Description of the operation. 0-256 characters long.

createdAt string (date-time)

Creation timestamp.

String in RFC3339 text format. The range of possible values is from 0001-01-01T00:00:00Z to 9999-12-31T23:59:59.999999999Z, i.e. from 0 to 9 digits for fractions of a second.

To work with values in this field, use the APIs described in the Protocol Buffers reference. In some languages, built-in datetime utilities do not support nanosecond precision (9 digits).

createdBy string

ID of the user or service account who initiated the operation.

modifiedAt string (date-time)

The time when the Operation resource was last modified.

String in RFC3339 text format. The range of possible values is from 0001-01-01T00:00:00Z to 9999-12-31T23:59:59.999999999Z, i.e. from 0 to 9 digits for fractions of a second.

To work with values in this field, use the APIs described in the Protocol Buffers reference. In some languages, built-in datetime utilities do not support nanosecond precision (9 digits).

done boolean (boolean)

If the value is false, it means the operation is still in progress. If true, the operation is completed, and either error or response is available.

metadata object

Service-specific metadata associated with the operation. It typically contains the ID of the target resource that the operation is performed on. Any method that returns a long-running operation should document the metadata type, if any.

error object

The error result of the operation in case of failure or cancellation.

includes only one of the fields error, response

error.code integer (int32)

Error code. An enum value of google.rpc.Code.

error.message string

An error message.

error.details[] object

A list of messages that carry the error details.

response object
includes only one of the fields error, response

The normal response of the operation in case of success. If the original method returns no data on success, such as Delete, the response is google.protobuf.Empty. If the original method is the standard Create/Update, the response should be the target resource of the operation. Any method that returns a long-running operation should document the response type, if any.
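
Since update returns a long-running Operation, a client normally polls it until done becomes true and then checks which of error or response is set. A sketch, assuming the generic https://operation.api.cloud.yandex.net/operations/{operationId} endpoint and the same placeholder token as in the earlier examples:

import os
import time

import requests

IAM_TOKEN = os.environ["IAM_TOKEN"]  # assumed placeholder
OPERATION_ID = "dqn0example"         # hypothetical: the id field from the update response

while True:
    op = requests.get(
        f"https://operation.api.cloud.yandex.net/operations/{OPERATION_ID}",
        headers={"Authorization": f"Bearer {IAM_TOKEN}"},
    ).json()
    if op.get("done"):
        break
    time.sleep(5)  # cluster reconfiguration can take a while

if "error" in op:
    raise RuntimeError(f"Cluster update failed: {op['error']['message']}")
print("Updated cluster:", op["response"]["id"])  # response holds the Cluster resource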
