"Fossies" - the Fresh Open Source Software Archive

Member "kafka-2.2.0-src/docs/streams/quickstart.html" (9 Mar 2019, 18076 Bytes) of package /linux/misc/kafka-2.2.0-src.tgz:


As a special service "Fossies" has tried to format the requested source page into HTML format using (guessed) HTML source code syntax highlighting (style: standard) with prefixed line numbers. Alternatively you can here view or download the uninterpreted source code file.

<!--
 Licensed to the Apache Software Foundation (ASF) under one or more
 contributor license agreements.  See the NOTICE file distributed with
 this work for additional information regarding copyright ownership.
 The ASF licenses this file to You under the Apache License, Version 2.0
 (the "License"); you may not use this file except in compliance with
 the License.  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
-->
<script><!--#include virtual="../js/templateData.js" --></script>

<script id="content-template" type="text/x-handlebars-template">

  <h1>Run Kafka Streams Demo Application</h1>
    <div class="sub-nav-sticky">
        <div class="sticky-top">
            <div style="height:35px">
                <a href="/{{version}}/documentation/streams/">Introduction</a>
                <a class="active-menu-item" href="/{{version}}/documentation/streams/quickstart">Run Demo App</a>
                <a href="/{{version}}/documentation/streams/tutorial">Tutorial: Write App</a>
                <a href="/{{version}}/documentation/streams/core-concepts">Concepts</a>
                <a href="/{{version}}/documentation/streams/architecture">Architecture</a>
                <a href="/{{version}}/documentation/streams/developer-guide/">Developer Guide</a>
                <a href="/{{version}}/documentation/streams/upgrade-guide">Upgrade</a>
            </div>
        </div>
    </div>
<p>
  This tutorial assumes you are starting fresh and have no existing Kafka or ZooKeeper data. However, if you have already started Kafka and
  ZooKeeper, feel free to skip the first two steps.
</p>

<p>
  Kafka Streams is a client library for building mission-critical real-time applications and microservices,
  where the input and/or output data is stored in Kafka clusters. Kafka Streams combines the simplicity of
  writing and deploying standard Java and Scala applications on the client side with the benefits of Kafka's
  server-side cluster technology to make these applications highly scalable, elastic, fault-tolerant, distributed,
  and much more.
</p>
<p>
  This quickstart example demonstrates how to run a streaming application coded with this library. Here is the gist
  of the <code><a href="https://github.com/apache/kafka/blob/{{dotVersion}}/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountDemo.java">WordCountDemo</a></code> example code (converted to use Java 8 lambda expressions for easier reading).
</p>
<pre class="brush: java;">
// Serializers/deserializers (serde) for String and Long types
final Serde&lt;String&gt; stringSerde = Serdes.String();
final Serde&lt;Long&gt; longSerde = Serdes.Long();

// Construct a `KStream` from the input topic "streams-plaintext-input", where message values
// represent lines of text (for the sake of this example, we ignore whatever may be stored
// in the message keys).
KStream&lt;String, String&gt; textLines = builder.stream("streams-plaintext-input",
    Consumed.with(stringSerde, stringSerde));

KTable&lt;String, Long&gt; wordCounts = textLines
    // Split each text line, by whitespace, into words.
    .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))

    // Group the text words as message keys.
    .groupBy((key, value) -> value)

    // Count the occurrences of each word (message key).
    .count();

// Store the running counts as a changelog stream to the output topic.
wordCounts.toStream().to("streams-wordcount-output", Produced.with(Serdes.String(), Serdes.Long()));
</pre>

<p>
It implements the WordCount
algorithm, which computes a word occurrence histogram from the input text. However, unlike other WordCount examples
you might have seen before that operate on bounded data, the WordCount demo application behaves slightly differently because it is
designed to operate on an <b>infinite, unbounded stream</b> of data. Similar to the bounded variant, it is a stateful algorithm that
tracks and updates the counts of words. However, since it must assume potentially
unbounded input data, it will periodically output its current state and results while continuing to process more data
because it cannot know when it has processed "all" the input data.
</p>
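<p>
The stateful, incremental behavior described above can be sketched without any Kafka dependency, using a plain map as the state store. This is an illustrative simulation only; the class and method names below are made up for this sketch and are not part of the Kafka Streams API.
</p>

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Dependency-free sketch of unbounded word counting: state is updated per
// incoming record, and the updated counts are emitted immediately, because
// the application can never know it has seen "all" the input.
class WordCountSketch {
    // Running count per word -- analogous to the KTable state.
    private final Map<String, Long> counts = new HashMap<>();

    // Process one record (a line of text) and return the resulting updates,
    // one per word -- analogous to the change records sent downstream.
    Map<String, Long> process(String line) {
        Map<String, Long> updates = new LinkedHashMap<>();
        for (String word : line.toLowerCase().split("\\W+")) {
            updates.put(word, counts.merge(word, 1L, Long::sum));
        }
        return updates;
    }

    public static void main(String[] args) {
        WordCountSketch app = new WordCountSketch();
        // Each line produces updated counts right away; later lines update
        // keys that were already counted (e.g. "kafka" goes from 1 to 2).
        System.out.println(app.process("all streams lead to kafka"));
        System.out.println(app.process("hello kafka streams"));
    }
}
```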
<p>
  As the first step, we will start Kafka (unless you already have it started) and then we will
  write input data to a Kafka topic, which will subsequently be processed by a Kafka Streams application.
</p>

<h4><a id="quickstart_streams_download" href="#quickstart_streams_download">Step 1: Download the code</a></h4>

<a href="https://www.apache.org/dyn/closer.cgi?path=/kafka/{{fullDotVersion}}/kafka_{{scalaVersion}}-{{fullDotVersion}}.tgz" title="Kafka downloads">Download</a> the {{fullDotVersion}} release and un-tar it.
Note that there are multiple downloadable Scala versions and we choose to use the recommended version ({{scalaVersion}}) here:

<pre class="brush: bash;">
&gt; tar -xzf kafka_{{scalaVersion}}-{{fullDotVersion}}.tgz
&gt; cd kafka_{{scalaVersion}}-{{fullDotVersion}}
</pre>

<h4><a id="quickstart_streams_startserver" href="#quickstart_streams_startserver">Step 2: Start the Kafka server</a></h4>

<p>
Kafka uses <a href="https://zookeeper.apache.org/">ZooKeeper</a>, so you need to first start a ZooKeeper server if you don't already have one. You can use the convenience script packaged with Kafka to get a quick-and-dirty single-node ZooKeeper instance.
</p>

<pre class="brush: bash;">
&gt; bin/zookeeper-server-start.sh config/zookeeper.properties
[2013-04-22 15:01:37,495] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
...
</pre>

<p>Now start the Kafka server:</p>
<pre class="brush: bash;">
&gt; bin/kafka-server-start.sh config/server.properties
[2013-04-22 15:01:47,028] INFO Verifying properties (kafka.utils.VerifiableProperties)
[2013-04-22 15:01:47,051] INFO Property socket.send.buffer.bytes is overridden to 1048576 (kafka.utils.VerifiableProperties)
...
</pre>

<h4><a id="quickstart_streams_prepare" href="#quickstart_streams_prepare">Step 3: Prepare input topic and start Kafka producer</a></h4>

Next, we create the input topic named <b>streams-plaintext-input</b> and the output topic named <b>streams-wordcount-output</b>:

<pre class="brush: bash;">
&gt; bin/kafka-topics.sh --create \
    --bootstrap-server localhost:9092 \
    --replication-factor 1 \
    --partitions 1 \
    --topic streams-plaintext-input
Created topic "streams-plaintext-input".
</pre>

Note: we create the output topic with compaction enabled because the output stream is a changelog stream
(cf. <a href="#anchor-changelog-output">explanation of application output</a> below).

<pre class="brush: bash;">
&gt; bin/kafka-topics.sh --create \
    --bootstrap-server localhost:9092 \
    --replication-factor 1 \
    --partitions 1 \
    --topic streams-wordcount-output \
    --config cleanup.policy=compact
Created topic "streams-wordcount-output".
</pre>

The created topics can be described with the same <b>kafka-topics</b> tool:

<pre class="brush: bash;">
&gt; bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe

Topic:streams-plaintext-input   PartitionCount:1    ReplicationFactor:1 Configs:
    Topic: streams-plaintext-input  Partition: 0    Leader: 0   Replicas: 0 Isr: 0
Topic:streams-wordcount-output  PartitionCount:1    ReplicationFactor:1 Configs:cleanup.policy=compact
    Topic: streams-wordcount-output Partition: 0    Leader: 0   Replicas: 0 Isr: 0
</pre>

<h4><a id="quickstart_streams_start" href="#quickstart_streams_start">Step 4: Start the WordCount Application</a></h4>

The following command starts the WordCount demo application:

<pre class="brush: bash;">
&gt; bin/kafka-run-class.sh org.apache.kafka.streams.examples.wordcount.WordCountDemo
</pre>

<p>
The demo application will read from the input topic <b>streams-plaintext-input</b>, perform the computations of the WordCount algorithm on each of the read messages,
and continuously write its current results to the output topic <b>streams-wordcount-output</b>.
Hence there won't be any STDOUT output except log entries, as the results are written back to Kafka.
</p>

Now we can start the console producer in a separate terminal to write some input data to this topic:

<pre class="brush: bash;">
&gt; bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-plaintext-input
</pre>

and inspect the output of the WordCount demo application by reading from its output topic with the console consumer in a separate terminal:

<pre class="brush: bash;">
&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic streams-wordcount-output \
    --from-beginning \
    --formatter kafka.tools.DefaultMessageFormatter \
    --property print.key=true \
    --property print.value=true \
    --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
    --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer
</pre>

<h4><a id="quickstart_streams_process" href="#quickstart_streams_process">Step 5: Process some data</a></h4>

Now let's write some messages with the console producer into the input topic <b>streams-plaintext-input</b> by entering a single line of text and then hitting &lt;RETURN&gt;.
This will send a new message to the input topic, where the message key is null and the message value is the string-encoded text line that you just entered
(in practice, input data for applications will typically be streaming continuously into Kafka, rather than being manually entered as we do in this quickstart):

<pre class="brush: bash;">
&gt; bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-plaintext-input
all streams lead to kafka
</pre>

<p>
This message will be processed by the WordCount application and the following output data will be written to the <b>streams-wordcount-output</b> topic and printed by the console consumer:
</p>

<pre class="brush: bash;">
&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic streams-wordcount-output \
    --from-beginning \
    --formatter kafka.tools.DefaultMessageFormatter \
    --property print.key=true \
    --property print.value=true \
    --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
    --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer

all     1
streams 1
lead    1
to      1
kafka   1
</pre>

<p>
Here, the first column is the Kafka message key in <code>java.lang.String</code> format, representing a word that is being counted, and the second column is the message value in <code>java.lang.Long</code> format, representing the word's latest count.
</p>

Now let's continue writing one more message with the console producer into the input topic <b>streams-plaintext-input</b>.
Enter the text line "hello kafka streams" and hit &lt;RETURN&gt;.
Your terminal should look as follows:

<pre class="brush: bash;">
&gt; bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-plaintext-input
all streams lead to kafka
hello kafka streams
</pre>

In your other terminal in which the console consumer is running, you will observe that the WordCount application wrote new output data:

<pre class="brush: bash;">
&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic streams-wordcount-output \
    --from-beginning \
    --formatter kafka.tools.DefaultMessageFormatter \
    --property print.key=true \
    --property print.value=true \
    --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
    --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer

all     1
streams 1
lead    1
to      1
kafka   1
hello   1
kafka   2
streams 2
</pre>

Here the last printed lines <b>kafka 2</b> and <b>streams 2</b> indicate updates to the keys <b>kafka</b> and <b>streams</b> whose counts have been incremented from <b>1</b> to <b>2</b>.
Whenever you write further input messages to the input topic, you will observe new messages being added to the <b>streams-wordcount-output</b> topic,
representing the most recent word counts as computed by the WordCount application.
Let's enter one final input text line "join kafka summit" and hit &lt;RETURN&gt; in the console producer to the input topic <b>streams-plaintext-input</b> before we wrap up this quickstart:

<pre class="brush: bash;">
&gt; bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-plaintext-input
all streams lead to kafka
hello kafka streams
join kafka summit
</pre>

<a name="anchor-changelog-output"></a>
The <b>streams-wordcount-output</b> topic will subsequently show the corresponding updated word counts (see last three lines):

<pre class="brush: bash;">
&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic streams-wordcount-output \
    --from-beginning \
    --formatter kafka.tools.DefaultMessageFormatter \
    --property print.key=true \
    --property print.value=true \
    --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
    --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer

all     1
streams 1
lead    1
to      1
kafka   1
hello   1
kafka   2
streams 2
join    1
kafka   3
summit  1
</pre>

As one can see, the output of the WordCount application is actually a continuous stream of updates, where each output record (i.e., each line in the original output above) is
an updated count of a single word (the record key, such as "kafka"). For multiple records with the same key, each later record is an update of the previous one.

<p>
The two diagrams below illustrate what is essentially happening behind the scenes.
The first column shows the evolution of the current state of the <code>KTable&lt;String, Long&gt;</code> that is counting word occurrences via <code>count</code>.
The second column shows the change records that result from state updates to the KTable and that are being sent to the output Kafka topic <b>streams-wordcount-output</b>.
</p>

<img src="/{{version}}/images/streams-table-updates-02.png" style="float: right; width: 25%;">
<img src="/{{version}}/images/streams-table-updates-01.png" style="float: right; width: 25%;">

<p>
First the text line "all streams lead to kafka" is being processed.
The <code>KTable</code> is being built up as each new word results in a new table entry (highlighted with a green background), and a corresponding change record is sent to the downstream <code>KStream</code>.
</p>
<p>
When the second text line "hello kafka streams" is processed, we observe, for the first time, that existing entries in the <code>KTable</code> are being updated (here: for the words "kafka" and "streams"). And again, change records are being sent to the output topic.
</p>
<p>
And so on (we skip the illustration of how the third line is being processed). This explains why the output topic has the contents we showed above, because it contains the full record of changes.
</p>

<p>
Looking beyond the scope of this concrete example, what Kafka Streams is doing here is to leverage the duality between a table and a changelog stream (here: table = the KTable, changelog stream = the downstream KStream): you can publish every change of the table to a stream, and if you consume the entire changelog stream from beginning to end, you can reconstruct the contents of the table.
</p>

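<p>
This duality can be illustrated with plain Java collections, independent of Kafka: replaying a changelog from beginning to end, applying each change record as an upsert, rebuilds the table's final state. The class and method names below are illustrative only, not part of the Kafka Streams API.
</p>

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the table/changelog duality: consuming a changelog
// stream from beginning to end and applying each change record as an upsert
// reconstructs the table's contents.
class ChangelogReplay {
    static Map<String, Long> replay(List<Map.Entry<String, Long>> changelog) {
        Map<String, Long> table = new LinkedHashMap<>();
        for (Map.Entry<String, Long> record : changelog) {
            // Later records for the same key overwrite earlier ones.
            table.put(record.getKey(), record.getValue());
        }
        return table;
    }

    public static void main(String[] args) {
        // The change records from the streams-wordcount-output example above.
        List<Map.Entry<String, Long>> changelog = List.of(
            Map.entry("all", 1L), Map.entry("streams", 1L), Map.entry("lead", 1L),
            Map.entry("to", 1L), Map.entry("kafka", 1L), Map.entry("hello", 1L),
            Map.entry("kafka", 2L), Map.entry("streams", 2L), Map.entry("join", 1L),
            Map.entry("kafka", 3L), Map.entry("summit", 1L));
        // The reconstructed table holds the latest count per word.
        System.out.println(replay(changelog));
    }
}
```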
<h4><a id="quickstart_streams_stop" href="#quickstart_streams_stop">Step 6: Teardown the application</a></h4>

<p>You can now stop the console consumer, the console producer, the WordCount application, the Kafka broker and the ZooKeeper server in order via <b>Ctrl-C</b>.</p>

 <div class="pagination">
        <a href="/{{version}}/documentation/streams" class="pagination__btn pagination__btn__prev">Previous</a>
        <a href="/{{version}}/documentation/streams/tutorial" class="pagination__btn pagination__btn__next">Next</a>
    </div>
</script>

<div class="p-quickstart-streams"></div>

<!--#include virtual="../../includes/_header.htm" -->
<!--#include virtual="../../includes/_top.htm" -->
<div class="content documentation documentation--current">
    <!--#include virtual="../../includes/_nav.htm" -->
    <div class="right">
        <!--#include virtual="../../includes/_docs_banner.htm" -->
        <ul class="breadcrumbs">
            <li><a href="/documentation">Documentation</a></li>
            <li><a href="/documentation/streams">Kafka Streams</a></li>
        </ul>
        <div class="p-content"></div>
    </div>
</div>
<!--#include virtual="../../includes/_footer.htm" -->
<script>
$(function() {
  // Show selected style on nav item
  $('.b-nav__streams').addClass('selected');

  // Sticky secondary nav
  var $navbar = $(".sub-nav-sticky"),
      y_pos = $navbar.offset().top,
      height = $navbar.height();

  $(window).scroll(function() {
      var scrollTop = $(window).scrollTop();

      if (scrollTop > y_pos - height) {
          $navbar.addClass("navbar-fixed");
      } else if (scrollTop <= y_pos) {
          $navbar.removeClass("navbar-fixed");
      }
  });

  // Display docs subnav items
  $('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
});
</script>