Apache Kafka Explained

Pub/Sub

Before we dive into Kafka let’s start with a quick recap on what publish/subscribe (pub/sub) messaging is.

Publish/subscribe is a messaging pattern where the sender (or publisher) doesn’t send data directly to a specific receiver. Instead, the publisher classifies the messages without knowing if there are any subscribers interested in a particular type of message. Similarly, the receiver subscribes to receive a certain class of messages without knowing if there are any senders sending them.

Pub/sub systems usually have a broker, where all messages are published. This decouples publishers from subscribers and allows for greater flexibility in the type of data that subscribers want to receive. It also reduces the number of potential connections between publishers and subscribers.

A bulletin board is a handy analogy for the pub/sub messaging pattern: people publish information in a central place without knowing who the recipients are.

What is Kafka?

Okay, so what is Kafka then?

Apache Kafka is an open-source, publish/subscribe (pub/sub) messaging system, also very often described as a distributed event log where all the new records are immutable and appended to the end of the log.

In Kafka, messages are persisted on disk for a certain period of time known as the retention policy. This is usually the main difference between Kafka and other messaging systems and makes Kafka in some way a hybrid between a messaging system and a database.

The main concepts behind Kafka are producers producing messages to different topics and consumers consuming those messages and maintaining their position in the stream of data. You can think about producers as publishers or senders of messages. Consumers, on the other hand, are analogous to the receivers or subscribers.

Kafka aims to provide a reliable and high-throughput platform for handling real-time data streams and building data pipelines. It also provides a single place for storing and distributing events that can be fed into multiple downstream systems, which helps to fight the ever-growing problem of integration complexity. Besides all of that, Kafka can also be easily used to build modern and scalable ETL, CDC or big data ingestion systems.

Kafka is used across multiple industries, from companies like Twitter and Netflix to Goldman Sachs and PayPal. It was originally developed by LinkedIn and open sourced in 2011.

Kafka Architecture

Now let’s dive a little bit deeper into the Kafka architecture.

On a high level, a typical Kafka architecture consists of a Kafka cluster, producers and consumers.

A single Kafka server within a cluster is called a broker. A Kafka cluster usually consists of at least 3 brokers to provide a sufficient level of redundancy. The broker is responsible for receiving messages from producers, assigning offsets and committing messages to disk. It is also responsible for responding to consumers’ fetch requests and serving messages.

In Kafka, when messages are sent to a broker, they are sent to a particular topic. Topics provide a way of categorising data that is being sent and they can be further broken down into a number of partitions. For example, a system can have separate topics for processing new users and for processing metrics. Each partition acts as a separate commit log and the order of messages is guaranteed only across the same partition. Being able to split a topic into multiple partitions makes scaling easy as each partition can be read by a separate consumer. This allows for achieving high throughput as both partitions and consumers can be split across multiple servers.

Producers are usually other applications producing data. This can be, for example, our application producing metrics and sending them to our Kafka cluster.

Similarly, consumers are usually other applications consuming data from Kafka. As we mentioned before, Kafka very often acts as a central hub for all the events in the system, which means it’s a perfect place to connect to if we are interested in a particular type of data. A good example would be a database that can consume and persist messages, or an Elasticsearch cluster that can consume certain events and provide full-text search capabilities for other applications.

Messages, Batches, Schemas, Topics and Partitions

Now that we have gone through the general overview of Kafka, let’s jump into the nitty-gritty details.

In Kafka, a message is a single unit of data that can be sent or received. As far as Kafka is concerned, a message is just a byte array, so the data doesn’t have any special meaning to Kafka. A message can also have an optional key (also a byte array) that can be used to write data in a more controlled way to multiple partitions within the same topic.

As an example, let’s assume we want to write our data to multiple partitions as it will be easier to scale the system later. We then realise that certain messages, let’s say all the messages for a given user, have to be written in order. If our topic has multiple partitions, there is no guarantee which messages will be written to which partitions; most likely the new messages would be written to partitions in a round-robin fashion. To avoid that situation we can define a consistent way of choosing a partition based on a message key. One way of doing that would be as simple as using (user id) % (number of partitions), which would always assign the same partition to the same user.
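
As a rough sketch, and purely for illustration (the class and method names below are made up), the key-based assignment described above boils down to hashing the user id and taking it modulo the partition count. Kafka’s default partitioner does something very similar with the bytes of the message key:

```java
// A minimal sketch of key-based partition selection, assuming a numeric user id.
// Names here are illustrative; in practice the Kafka producer hashes the message key for you.
public class UserPartitioning {

    static int partitionFor(long userId, int numPartitions) {
        // Math.floorMod keeps the result non-negative even for negative hash values
        return Math.floorMod(Long.hashCode(userId), numPartitions);
    }

    public static void main(String[] args) {
        int numPartitions = 6;
        // The same user id always maps to the same partition, preserving per-user ordering
        System.out.println(partitionFor(42L, numPartitions));
        System.out.println(partitionFor(42L, numPartitions)); // prints the same partition again
    }
}
```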

Sending single messages over the network creates a lot of overhead, which is why messages are written into Kafka in batches. A batch is a collection of messages produced for the same topic and partition. Sending messages in batches provides a trade-off between latency and throughput and can be further controlled by adjusting a few Kafka settings. Additionally, batches can be compressed, which provides even more efficient data transfer.
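
For illustration, batching and compression are typically tuned through producer settings such as batch.size, linger.ms and compression.type. The broker address and values below are arbitrary examples rather than recommendations:

```java
import java.util.Properties;

// A sketch of producer settings that trade a little latency for higher throughput.
// Broker address and values are illustrative, not tuned recommendations.
public class BatchingConfig {

    static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("batch.size", "32768");      // maximum batch size in bytes, per partition
        props.put("linger.ms", "10");          // wait up to 10 ms for a batch to fill before sending
        props.put("compression.type", "lz4");  // compress whole batches before they go over the network
        return props;
    }
}
```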

Even though we already established that Kafka messages are just simple byte arrays, in most cases it makes sense to impose additional structure on the message content. There are multiple schema options available; the most popular ones are JSON, XML, Avro and Protobuf.

We already described what topics and partitions are, but let’s just emphasise again that there are no guarantees when it comes to message time-ordering across multiple partitions of the same topic. The only way to achieve ordering for all messages is to have only one partition. By doing that we can be sure that events are always ordered by the time they were written into Kafka.

Another important concept when it comes to partitions is the fact that each partition can be hosted on a different server which means that a single topic can be scaled horizontally across multiple servers to improve the throughput.

Producers, Consumers, Offsets, Consumer Groups and Rebalancing

A Kafka cluster wouldn’t be very useful without its clients: the producers and consumers of messages.

Producers create new messages and send them to a specific topic. If a partition is not specified and a topic has multiple partitions, messages will be written across the partitions evenly. This can be further controlled by using a consistent message key, as described earlier.
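
A minimal producer sketch might look like the following. The topic name, broker address and message contents are assumptions made for the example; the key ensures all events for the same user end up in the same partition:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

// A minimal producer sketch; topic name and broker address are illustrative.
public class UserEventProducer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keying by user id means every event for "user-42" lands on the same partition
            producer.send(new ProducerRecord<>("user-events", "user-42", "logged_in"));
        }
    }
}
```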

Consumers, on the other hand, read messages. They subscribe to one or multiple topics and read messages in the order they were produced. The consumer keeps track of its position in the stream of data by remembering which offset was already consumed. Offsets are assigned at the time a message is written to Kafka and correspond to a specific message in a specific partition. Within the same topic, different partitions can be at different offsets and it’s up to the consumer to remember what offset each partition is at. By storing offsets in ZooKeeper or in Kafka itself, a consumer can stop and restart without losing its position in the stream of data.

Consumers always belong to a specific consumer group. Consumers within a consumer group work together to consume a topic. The group makes sure that each partition is only consumed by one member of a consumer group.

This way, consumers can scale horizontally to consume topics with a large number of messages. Additionally, if a single consumer fails, the remaining members of the group will rebalance the partitions to make up for the missing member.

In case we want to consume the same messages multiple times, we have to make sure the consumers belong to different consumer groups. This can be useful if we have multiple applications that have to process the same data separately.
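
To make this more concrete, here is a minimal consumer sketch. The topic and group names are assumptions; consumers that share the same group.id split the partitions between them, while a consumer started with a different group.id receives its own full copy of the stream:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

// A minimal consumer sketch; topic and group names are illustrative.
public class UserEventConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "analytics-service"); // consumers sharing this id share the partitions
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("user-events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
                consumer.commitSync(); // persist offsets so a restart resumes where we left off
            }
        }
    }
}
```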

Clusters, Brokers and Retention

As we mentioned before, a Kafka cluster consists of multiple servers called brokers. Depending on the specific hardware, a single broker can easily handle thousands of partitions and millions of messages per second.

Kafka brokers are designed to operate as part of a cluster. Within a cluster of brokers, one broker also acts as the cluster controller (elected automatically). The controller is responsible for administrative operations, including assigning partitions to brokers and monitoring for broker failures. A partition is always owned by a single broker in the cluster, which is called the leader of the partition. A partition may be assigned to multiple brokers, which will result in the partition being replicated. This provides redundancy of messages in the partition, so that another broker can take over leadership if there is a broker failure. However, all consumers and producers operating on that partition must connect to the leader.

One of the key features of Kafka is retention, which provides durable storage of messages for some period of time. Kafka brokers are configured with a default retention setting for topics, either retaining messages for some period of time (7 days by default) or until the topic reaches a certain size in bytes (e.g. 1 GB). Once these limits are reached, the oldest messages are expired and deleted, so the retention configuration defines the minimum amount of data available at any time. Individual topics can also have their own retention settings. For example, a topic for storing metrics might have a very short retention of a few hours. On the other hand, a topic containing bank transfers might have a retention policy of a few months.
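
As an illustration of per-topic retention, a topic can be created with its own retention.ms and retention.bytes settings, for example through the admin client. The topic name, partition count and values below are assumptions made for the sketch:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

// A sketch of creating a metrics topic with short, topic-specific retention.
// Topic name, partition count and retention values are illustrative.
public class CreateMetricsTopic {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");

        Map<String, String> configs = new HashMap<>();
        configs.put("retention.ms", String.valueOf(6 * 60 * 60 * 1000L));  // keep metrics for ~6 hours
        configs.put("retention.bytes", String.valueOf(1_073_741_824L));    // or until a partition reaches ~1 GB

        NewTopic metricsTopic = new NewTopic("metrics", 6, (short) 3).configs(configs);

        try (AdminClient admin = AdminClient.create(props)) {
            admin.createTopics(Collections.singletonList(metricsTopic)).all().get();
        }
    }
}
```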

Reliability Guarantees

Reliability is often discussed in terms of guarantees. These are certain behaviours of a system that should be preserved under different circumstances. Understanding those guarantees is critical for anyone trying to build reliable applications on top of Kafka.

These are the most common Kafka reliability guarantees:

  • Kafka guarantees the order of messages in a partition. If message A was written before message B, using the same producer in the same partition, then Kafka guarantees that the offset of message B will be higher than that of message A. This means that consumers will read message A before message B.
  • Messages are considered “committed” when they are written to the leader and all in-sync replicas (the number of in-sync replicas and the number of acks can be configured)
  • Committed messages won’t be lost as long as at least one replica remains alive and the retention policy holds
  • Consumers can only read committed messages
  • Kafka provides at-least-once message delivery semantics (it doesn’t prevent duplicated messages being produced)

The important thing to note is that even though Kafka provides at-least-once delivery semantics, it does not provide exactly-once semantics on its own; to achieve that we have to either rely on an external system with some support for unique keys or use Kafka Streams (from the 0.11.0 release onwards).
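
On the producer side, these guarantees are usually paired with settings like acks and enable.idempotence. The snippet below is only a sketch; the values are examples rather than recommendations and the right choice depends on the topic’s replication setup:

```java
import java.util.Properties;

// A sketch of producer settings related to the delivery guarantees above.
// Values are illustrative; the right choice depends on the replication setup.
public class ReliableProducerConfig {

    static Properties reliableProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "all");                // a write is acknowledged only once all in-sync replicas have it
        props.put("enable.idempotence", "true"); // de-duplicates producer retries (available from 0.11 onwards)
        props.put("retries", "2147483647");      // retry transient failures instead of dropping messages
        return props;
    }
}
```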

Let’s also remember that even though these basic guarantees can be used to build a reliable system, there is much more to it. In Kafka, there are a lot of trade-offs involved in building a reliable system. The usual trade-offs are reliability and consistency versus availability, high throughput and low latency.

Why Kafka? Pros and Cons

Let’s review both the pros and cons of choosing Kafka.

Pros

  • Tackles integration complexity
  • Great tool for ETL or CDC
  • Great for big data ingestion
  • High throughput
  • Disk-based retention
  • Supports multiple producers/consumers
  • Highly scalable
  • Fault-tolerant
  • Fairly low-latency
  • Highly configurable
  • Provides backpressure

Cons

  • Requires a fair amount of time to understand and to avoid shooting yourself in the foot by accident
  • Might not be the best solution for really low-latency systems

Kafka vs other messaging systems

A lot of things in Kafka were purposely named to resemble JMS-like messaging systems. This leaves people wondering what the actual differences are between Kafka and standard JMS systems like RabbitMQ or ActiveMQ.

First of all, the main difference is that Kafka consumers pull messages from the brokers, which allows for buffering messages for as long as the retention period holds. In most other JMS systems, messages are pushed to the consumers instead. Pushing messages to the consumer makes things like backpressure really hard to achieve. Kafka also makes replaying of events easy, as messages are stored on disk and can be replayed anytime (again, within the limits of the retention period).
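
Replaying is essentially a matter of rewinding a consumer’s offsets. A rough sketch, assuming an illustrative "user-events" topic and group name, could look like this:

```java
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

// A sketch of replaying all retained messages of a topic from the beginning.
// Topic and group names are illustrative assumptions.
public class ReplayFromStart {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "replay-job");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("user-events"));
            consumer.poll(Duration.ofMillis(100));            // an initial poll lets the consumer join the group and get its assignment
            consumer.seekToBeginning(consumer.assignment());  // rewind to the oldest retained offsets
            // subsequent polls now re-read every message still within the retention period
        }
    }
}
```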

Besides that, Kafka guarantees the ordering of messages within one partition and it provides an easy way of building scalable and fault-tolerant systems.

Summary

Time for a quick summary. In the era of ever-growing data and integration complexity, having a reliable and high-throughput messaging system that can be easily scaled is a must. Kafka seems to be one of the best available options that meet those criteria. It has been battle-tested for years by some of the biggest companies in the world.

We have to remember that Kafka is a fairly complex messaging system and there is a lot to learn in order to use its full potential without shooting ourselves in the foot.

There are also multiple libraries and frameworks that make using Kafka even easier. Some of the most notable ones are Kafka Streams and Kafka Connect.

Extras

If you want to learn even more about Kafka, I can recommend the following book: “Kafka: The Definitive Guide”, which I found very useful.



Low latency in Java

What is latency?

Let’s start with what latency really means. A quick Google search gives us the following definition: “latency is a time interval between the stimulation and response”. If you’re still confused by this definition, let’s imagine you’re playing an online computer game. If your Internet connection is slow, you will experience something called ‘lag’. Let’s say you want to move your game character from one place to another: you click a mouse button and nothing happens. You notice that there was a delay between the time you clicked the mouse button and the time your character started moving. This is what latency really is. It is the delay between the time an action is initiated and the time the action actually takes place.

Low latency and Ultra Low Latency

We already know what latency is, but how do we define low latency? First of all, we need to find out what ‘low’ really means. Human perception of latency is completely different from a machine’s perception. All of our reactions are delayed by roughly 300 milliseconds (if you don’t believe me, click on this link). This means that everything below 300 ms is classified by our brains as real-time or zero delay. Of course, this is completely different for machines. For a trading system, 300 ms of latency would be a true nightmare.

The term ultra-low latency is used by many companies to describe sub-1 ms latencies. Ultra-low latencies are very often associated with trading, as this is one of the areas where speed plays the biggest role. Very often the winner takes all and the second place is worth nothing.

Where is low latency crucial?

There are many systems where low latency plays a major role. Let’s have a look at some examples:

  • trading systems (order execution, matching engines, pricing engines)
  • video/audio streaming
  • online games
  • real-time data applications

Low latency in Java

Java, because of its virtual machine and garbage collection, is very often perceived as slow. Fortunately, with the right optimisations, we can make it extremely fast.

Let’s focus on the key elements influencing low latency:

  • Garbage Collection. GC pauses can dramatically increase our latency, but they become manageable with the right optimisation. We can also use non-standard, low-latency JVMs like Zing from Azul.
  • JIT. Compiling hot code paths to machine code can seriously speed up our system. We need to take extra care to warm up our system.
  • Locking. Using lock-free algorithms and lock-free I/O becomes crucial and has a big impact on latency.
  • Classloading. All key classes should be loaded before we even start using our system.
  • Data structures. We need to know which data structures are optimal for storing our data and how to make sure they’re not slowing us down. For example, using ConcurrentHashMap for a greater degree of concurrency over a synchronizedMap (see the sketch after this list).
  • The complexity of algorithms. Our latency can suffer greatly if the time complexity of our algorithms is poor. We need to make sure that our algorithms run at the best possible speed.
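
The sketch below contrasts the two map choices mentioned in the list above. It is only an illustration; the symbol names and the price-update scenario are made up, and the right structure always depends on the access pattern:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A small sketch contrasting ConcurrentHashMap with Collections.synchronizedMap.
// Symbol names and the price-update scenario are illustrative.
public class MapChoice {

    // fine-grained internal locking: reads and writes to different buckets don't block each other
    private final Map<String, Long> prices = new ConcurrentHashMap<>();

    // a single lock around every call: simpler, but a contention point on hot paths
    private final Map<String, Long> pricesLegacy = Collections.synchronizedMap(new HashMap<>());

    void onPriceUpdate(String symbol, long price) {
        prices.put(symbol, price); // no global lock; updates to other symbols proceed in parallel
    }
}
```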

Let’s have a look at the pros and cons of using Java for low latency systems:

Pros

  • Time to market
  • Time to stability
  • Team’s productivity

Cons

  • It might still be slower than C++
  • Finding the right GC/JIT optimisation might take a while

We need to remember that optimisation of our Java code and the JVM is only the tip of the iceberg, and if we want to get to sub-50 ms latencies we need to look in other directions as well. The next section sheds some light on the other major elements of low latency systems.

Other major factors influencing latency

There are many other factors influencing latency. We have just described Java and the JVM. Now, let’s quickly go through the other elements:

  • Operating system: pinning our process to dedicated cores, page cache, power management tuning,
  • Hardware: commodity hardware, FPGA,
  • Internet connection: fibre or microwaves (for example for High-Frequency Trading) 

Summary

When it comes to developing low latency systems there are many different aspects that we need to consider. The programming language is only one of them, and Java with the right JVM configuration might be a good option, especially if we want to balance speed and time to market. In general, low latency is a huge area of study and this article only scratches the surface of this interesting topic. I will be delving into the details in future posts.
