Tutorial: Introduction to Streaming Application Development
===========================================================

This self-paced tutorial provides exercises for developers to learn the basic principles of service-based architectures and streaming application development:

- Exercise 1: Persist events
- Exercise 2: Event-driven applications
- Exercise 3: Enriching streams with joins
- Exercise 4: Filtering and branching
- Exercise 5: Stateful operations
- Exercise 6: State stores
- Exercise 7: Enrichment with |ksqldb|


========
Overview
========

The tutorial is based on a small microservices ecosystem, showcasing an order management workflow, such as one might find in retail and online shopping.
It is built using |ak-tm|, whereby business events that describe the order management workflow propagate through this ecosystem.
The blog post `Building a Microservices Ecosystem with Kafka Streams and ksqlDB <https://www.confluent.io/blog/building-a-microservices-ecosystem-with-kafka-streams-and-ksql/>`__ outlines the approach used.

.. figure:: images/microservices-demo.png
Reading
~~~~~~~

You will get a lot more out of this tutorial if you have first learned its foundational concepts.
To learn how service-based architectures and stream processing platforms such as |ak-tm| can help you build business-critical systems, we recommend:

* If you have lots of time: `Designing Event-Driven Systems <https://www.confluent.io/designing-event-driven-systems>`__, a book by Ben Stopford.
* If you do not have lots of time: `Building a Microservices Ecosystem with Kafka Streams and ksqlDB <https://www.confluent.io/blog/building-a-microservices-ecosystem-with-kafka-streams-and-ksql/>`__ or `Build Services on a Backbone of Events <https://www.confluent.io/blog/build-services-backbone-events/>`__.
For more learning on the Kafka Streams API that you can use as a reference while working through this tutorial, see the `Kafka Streams documentation <https://docs.confluent.io/current/streams/index.html>`__.
Environment Setup
~~~~~~~~~~~~~~~~~

Make sure you have the following prerequisites, depending on whether you are running with |ccloud|, in Docker, or |cp| locally:

|ccloud|
--------

* |ccloud| account. The `Confluent Cloud <https://www.confluent.io/confluent-cloud/?utm_source=github&utm_medium=demo&utm_campaign=ch.examples_type.community_content.microservices-orders?>`__ home page can help you get set up with your own account if you do not yet have access.
* |ccloud| CLI. See :ref:`Install and Configure the Confluent Cloud CLI <cloud-cli-install>`.

.. note:: The first 20 users to sign up for |ccloud| and use promo code ``C50INTEG`` will receive an additional $50 free usage (`details <https://www.confluent.io/confluent-cloud-promo-disclaimer>`__).
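
A quick way to confirm the |ccloud| CLI is installed and on your ``PATH`` (a sanity check only; the output varies by CLI version):

.. sourcecode:: bash

   ccloud version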

Docker
------

* Docker version >= 19.00.0
* Docker Compose version >= 1.25.0 with Docker Compose file format 3
* In Docker's advanced `settings <https://docs.docker.com/docker-for-mac/#advanced>`__, increase the memory dedicated to Docker to at least 8GB (default is 2GB)
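
To confirm the Docker prerequisites, you can check the installed versions (a sanity check; adjust for your platform):

.. sourcecode:: bash

   docker --version           # expect 19.x or newer
   docker-compose --version   # expect 1.25.0 or newer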

Local
-----

* `Confluent Platform <https://www.confluent.io/download/>`__: download |cp| with commercial features to use topic management, |ksqldb| and |sr-long| integration, and streams monitoring capabilities
* Java 1.8 to run the demo application
* Maven to compile the demo application
* (optional) `Elasticsearch 5.6.5 <https://www.elastic.co/downloads/past-releases/elasticsearch-5-6-5>`__ to export data from Kafka

  * If you do not want to use Elasticsearch, comment out ``check_running_elasticsearch`` in the ``start.sh`` script

* (optional) `Kibana 5.5.2 <https://www.elastic.co/downloads/past-releases/kibana-5-5-2>`__ to visualize data

  * If you do not want to use Kibana, comment out ``check_running_kibana`` in the ``start.sh`` script
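
To confirm the local prerequisites, you can check the Java and Maven versions (a sanity check; the output varies by vendor):

.. sourcecode:: bash

   java -version   # expect 1.8
   mvn -version    # confirms Maven and which JDK it uses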

========
Tutorial
========

Set Up the Tutorial
~~~~~~~~~~~~~~~~~~~

#. Follow the "Environment Setup" instructions.

#. Clone the `examples GitHub repository <https://github.com/confluentinc/examples>`__:

   .. codewithvars:: bash

      git clone https://github.com/confluentinc/examples

* Exercise 0: Run end-to-end demo
#. Navigate to the ``examples/microservices-orders`` directory and switch to the |cp| release branch:

   .. codewithvars:: bash

      cd examples/microservices-orders
      git checkout |release_post_branch|

#. Run the full end-to-end working solution to see a customer-representative deployment of a streaming application. This requires no code development; it just provides context for each of the exercises in which you will develop pieces of the microservices.

   - Exercise 0: Run end-to-end demo

#. After you have successfully run the full solution, work through exercises 1-7 to better understand the basic principles of streaming applications:

   - Exercise 1: Persist events
   - Exercise 2: Event-driven applications
   - Exercise 3: Enriching streams with joins
   - Exercise 4: Filtering and branching
   - Exercise 5: Stateful operations
   - Exercise 6: State stores
   - Exercise 7: Enrichment with |ksqldb|

#. For each of the above exercises:

   - Read the description to understand the focus area for the exercise
   - Edit the file specified in the exercise and fill in the missing code
   - Copy the file to the project, then compile the project and run the test for the service to ensure it works (see the sketch below)
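
   As a sketch of that edit-copy-test loop, the commands look roughly like the following; the file name, destination path, and test class here are illustrative, and each exercise states the actual ones to use:

   .. sourcecode:: bash

      # Illustrative names only: each exercise specifies the real file and test
      cp exercises/OrdersService.java src/main/java/io/confluent/examples/streams/microservices/OrdersService.java
      mvn compile                          # compile the project
      mvn test -Dtest=OrdersServiceTest    # run the test for that service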

Exercise 0: Run end-to-end demo
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Running the fully working demo end-to-end provides context for each of the later exercises.

#. Ensure you've followed the appropriate prerequisites section above prior to starting.

#. Start the demo in one of three modes, depending on whether you are running with |ccloud|, in Docker, or |cp| locally:

   * |ccloud|: First log in to |ccloud| with the command ``ccloud login``, using your |ccloud| username and password. To prevent being logged out, pass the ``--save`` argument, which saves your |ccloud| user login credentials or refresh token (in the case of SSO) to the local ``netrc`` file. Then run the full solution using the provided script (this starts a new |ccloud| environment and Kafka cluster using the :devx-examples:`ccloud stack library|ccloud/ccloud-stack/README.md` of this repository):

     .. sourcecode:: bash

        ccloud login --save
        ./start-ccloud.sh

   * Docker: Run the full solution using ``docker-compose`` (this also starts a local |cp| cluster in Docker containers):

     .. sourcecode:: bash

        docker-compose up -d --build
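
     A quick way to confirm the containers came up (optional; not required by the tutorial):

     .. sourcecode:: bash

        docker-compose ps   # each service should show an "Up" state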

   * Local: Run the full solution using the provided script (this also starts a local |cp| cluster using the Confluent CLI):

     .. sourcecode:: bash

        ./start.sh

#. After starting the demo with one of the above commands, the microservices applications will be running and Kafka topics will have data in them.

   .. figure:: images/microservices-exercises-combined.png
      :alt: image

   * |ccloud|: Sample topic data by running the following command, substituting the configuration file name with the one in your ``stack-configs`` folder (for example, ``java-service-account-12345.config``):

     .. sourcecode:: bash

        source delta_configs/env.delta; CONFIG_FILE=/opt/docker/stack-configs/java-service-account-<service-account-id>.config ./read-topics-ccloud.sh

   * Docker: Sample topic data by running:

     .. sourcecode:: bash

        ./read-topics-docker.sh

   * Local: Sample topic data by running:

     .. sourcecode:: bash

        ./read-topics.sh

#. Explore the data with Elasticsearch and Kibana.


#. View and monitor the streaming applications.


   * If you are running on |ccloud|, use the |ccloud| web user interface to explore topics, consumers, Data flow, and the |ksql-cloud| application:

     * Browse to: https://confluent.cloud/login

   * If you are running with Docker or locally, use |c3| to view Kafka data, write |ksqldb| queries, manage Kafka connectors, and monitor your applications:

     * |ksqldb| tab: view |ksqldb| streams and tables, and create |ksqldb| queries. Alternatively, run the |ksqldb| CLI with ``ksql http://localhost:8088``. To get started, run the query ``SELECT * FROM ORDERS EMIT CHANGES;`` in the |ksqldb| Editor.
     * Connect tab: view the JDBC source connector and the Elasticsearch sink connector.
     * Data Streams tab: view the throughput and latency performance of the microservices.

     .. figure:: images/streams-monitoring.png
        :alt: image
        :width: 600px


#. When you are done, make sure to stop the demo before proceeding to the exercises.

   * |ccloud| (where the ``java-service-account-<service-account-id>.config`` file matches the file in your ``stack-configs`` folder):

     .. sourcecode:: bash

        ./stop-ccloud.sh stack-configs/java-service-account-12345.config

   * Docker:

     .. sourcecode:: bash

        docker-compose down

   * Local:

     .. sourcecode:: bash

        ./stop.sh


Exercise 1: Persist events
~~~~~~~~~~~~~~~~~~~~~~~~~~
These lookups can be performed at very large scale and with a low processing latency.

A stateful streaming service that joins two streams at runtime (`source <https://www.confluent.io/designing-event-driven-systems>`__)

A popular design pattern is to make the information in the databases available in Kafka through so-called change data capture (CDC), together with Kafka’s Connect API to pull in the data from the database.
Once the data is in Kafka, client applications can perform very fast and efficient joins of such tables and streams, rather than requiring the application to make a query to a remote database over the network for each record.
Read more on `an overview of distributed, real-time joins <https://www.confluent.io/blog/distributed-real-time-joins-and-aggregations-on-user-activity-events-using-kafka-streams/>`__ and `implementing joins in Kafka Streams <https://docs.confluent.io/current/streams/developer-guide/dsl-api.html#streams-developer-guide-dsl-joins>`__.
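
As an illustration of the Connect side of this pattern, a JDBC source connector can be registered with the Connect REST API roughly as follows; the connector name, database URL, and table here are placeholders, not this project's actual configuration:

.. sourcecode:: bash

   # Register an illustrative JDBC source connector (placeholder values)
   curl -s -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
     "name": "customers-jdbc-source",
     "config": {
       "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
       "connection.url": "jdbc:sqlite:/tmp/microservices.db",
       "mode": "incrementing",
       "incrementing.column.name": "id",
       "table.whitelist": "customers",
       "topic.prefix": "jdbc-"
     }
   }'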

To test your code, save off the project's working solution, copy your version of the file into the project, then compile the project and run the test for the service.

Exercise 7: Enrichment with ksqlDB
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

`Confluent ksqlDB <https://www.confluent.io/product/ksql/>`__ is the streaming SQL engine that enables real-time data processing against |ak-tm|.
It provides an easy-to-use, yet powerful interactive SQL interface for stream processing on Kafka, without requiring you to write code in a programming language such as Java or Python.
|ksqldb| is scalable, elastic, fault tolerant, and it supports a wide range of streaming operations, including data filtering, transformations, aggregations, joins, windowing, and sessionization.
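
As a taste of the interface, the following sketch enriches an orders stream with a customers table; the stream and table names are illustrative rather than this tutorial's exact schema, and it assumes a local |ksqldb| server on port 8088 with the SQL piped into the |ksqldb| CLI:

.. sourcecode:: bash

   ksql http://localhost:8088 <<'EOF'
   -- Illustrative stream-table enrichment (names are examples only)
   CREATE STREAM orders_enriched AS
     SELECT o.id, o.customerid, c.firstname, c.lastname
     FROM orders o
     LEFT JOIN customers c ON o.customerid = c.id
     EMIT CHANGES;
   EOF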

