"Fossies" - the Fresh Open Source Software Archive

Member "monasca-events-api-2.0.0/devstack/files/kafka/server.properties" (14 Oct 2020, 5083 Bytes) of package /linux/misc/openstack/monasca-events-api-2.0.0.tar.gz:


As a special service "Fossies" has tried to format the requested text file into HTML format (style: standard) with prefixed line numbers. Alternatively you can here view or download the uninterpreted source code file.

#
# (C) Copyright 2015 Hewlett Packard Enterprise Development Company LP
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces.
#host.name=127.0.0.1

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.host.name=<hostname routable by clients>

# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>

# The number of threads handling network requests
num.network.threads=2

# The number of threads doing disk I/O
num.io.threads=2

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=1048576

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=1048576

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/var/kafka

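# Multiple log directories can be given as a comma separated list, e.g. to spread
# partitions across disks. The paths below are hypothetical and shown commented out;
# this deployment uses the single directory configured above.
#log.dirs=/var/kafka/data1,/var/kafka/data2
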
# Disable automatic creation of topics on first use; topics must be created explicitly.
auto.create.topics.enable=false

# The number of logical partitions per topic per server. More partitions allow greater parallelism
# for consumption, but also mean more files.
num.partitions=2

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem, but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does
#       occur, as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush
#       interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria is met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=24

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes (104857600 bytes = 100 MiB).
log.retention.bytes=104857600

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=104857600

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=60000

# By default the log cleaner is disabled, and the log retention policy will default to simply
# deleting segments after their retention expires. If log.cleaner.enable=true is set, the
# cleaner will be enabled and individual logs can then be marked for log compaction.
log.cleaner.enable=false

############################# Zookeeper #############################

# ZooKeeper connection string (see the ZooKeeper docs for details).
# This is a comma separated list of host:port pairs, each corresponding to a
# ZooKeeper server, e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the URLs to specify the
# root directory for all Kafka znodes.
zookeeper.connect=127.0.0.1:2181
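
# For example, a multi-node ensemble with a chroot would keep all Kafka znodes
# under /kafka. The hostnames here are hypothetical and the line is shown
# commented out; this deployment uses the single local server configured above.
#zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka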

# Timeout in ms for connecting to ZooKeeper
zookeeper.connection.timeout.ms=1000000