Cloud service offerings like SaaS, PaaS, IDaaS, FaaS, IaaS and iPaaS have become very popular, and Gartner predicts cloud services revenue to grow by a further 17% in 2019. More and more companies are looking to utilize the cloud, whether public, private or hybrid, to stay competitive in a dynamically changing world of requirements. With the cloud they can adjust their costs in a more agile way and scale more efficiently, reducing the time to market of their products or services. AI, Big Data & Analytics and Blockchain are topics that companies will be evaluating over the next few years, and without the cloud these could become cumbersome and expensive to pursue.
Let's look at metrics again, this time from the perspective of how they are defined and used. It was not easy for me to understand the types of metrics and how to read them. Which metrics are useful and therefore deserve attention? This blog post focuses on Dropwizard metrics, with a sample application for a bit of practice.
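As a taste of what two common metric types measure, here is a minimal, self-contained sketch in plain Java. This is a toy illustration, not the actual Dropwizard API (Dropwizard's real `Counter` and `Gauge` live in `com.codahale.metrics` and are richer), and the class names below are made up for this example.

```java
import java.util.function.Supplier;

// Toy stand-ins for two metric types; Dropwizard's real classes follow the same idea.
class ToyCounter {
    private long count = 0;
    void inc() { count++; }            // a counter only ever accumulates events
    long getCount() { return count; }
}

class ToyGauge<T> {
    private final Supplier<T> value;   // a gauge simply reads a current value on demand
    ToyGauge(Supplier<T> value) { this.value = value; }
    T getValue() { return value.get(); }
}

public class MetricsSketch {
    public static void main(String[] args) {
        ToyCounter requests = new ToyCounter();
        requests.inc();
        requests.inc();
        // A gauge for, say, the current queue depth (hardcoded here for the sketch).
        ToyGauge<Integer> queueSize = new ToyGauge<>(() -> 5);
        System.out.println(requests.getCount() + " " + queueSize.getValue()); // 2 5
    }
}
```

The key difference to read from a report: a counter's value only grows, while a gauge is a point-in-time snapshot.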
How much does your current EC2 fleet cost? How much would it cost if EC2 Spot Instances were used?
Use Apache Spark, Scala and Jupyter Notebook to find out!
I came across Apache Spark while looking for tools for an ETL process. I am a big fan of Scala, and Spark with Scala was mentioned a few times in an ETL context, so I couldn't resist trying it as a potential option.
The first Cloud Developer Days conference ended two weeks ago. It took place on 28–29 May 2018 in Cracow, Poland. I hadn't come across a similar conference in my country before, i.e. one focused on the cloud in general rather than on a specific technology.
The main focus was on Cloud Security, Machine Learning, Artificial Intelligence, Serverless and Blockchain.
Do you use CentOS or another RPM-based Linux distribution to host Java applications or services?
In this article you will see how Gradle can be used for that purpose, in particular:
- what naming conventions can be used with RPM packages
- which tools can help generate RPMs with Gradle
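As a quick illustration of the second point, here is a minimal `build.gradle` fragment, assuming the Nebula ospackage plugin (one common tool for this job); the plugin version, package name and paths below are placeholders, not taken from the article.

```groovy
// build.gradle — sketch only; version and package details are assumptions
plugins {
    id 'com.netflix.nebula.ospackage' version '11.9.0'
}

ospackage {
    packageName = 'my-java-service'   // hypothetical service name
    version = '1.0.0'
    release = '1'

    // Install the built jar(s) under /opt
    into '/opt/my-java-service'
    from('build/libs') {
        include '*.jar'
    }
}

// ./gradlew buildRpm then produces an .rpm under build/distributions
```

The plugin maps this declarative description onto the RPM metadata (name, version, release) and payload, so no `rpmbuild` spec file is needed.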
So far the following ideas have been introduced: topic, message, partition, producer, consumer and broker. By now, you should understand how Kafka stores messages on disk using a commit log, topics and partitions. You should also know how a message is structured.
It’s time to introduce consumer groups, which are the missing piece of message distribution in Kafka.
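As a preview of the core idea, the partitions of a topic are divided among the consumers of a group, so each message is processed by exactly one group member. The sketch below is an illustrative toy in plain Java, similar in spirit to Kafka's round-robin assignor but not Kafka's actual code.

```java
import java.util.*;

public class GroupAssignmentSketch {
    // Distribute partitions across the consumers of one group, round-robin style.
    static Map<String, List<Integer>> assign(List<String> consumers, int partitions) {
        Map<String, List<Integer>> assignment = new LinkedHashMap<>();
        for (String c : consumers) assignment.put(c, new ArrayList<>());
        for (int p = 0; p < partitions; p++) {
            // each partition goes to exactly one consumer in the group
            assignment.get(consumers.get(p % consumers.size())).add(p);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // 6 partitions, 2 consumers in the same group: 3 partitions each.
        System.out.println(assign(List.of("consumer-A", "consumer-B"), 6));
        // {consumer-A=[0, 2, 4], consumer-B=[1, 3, 5]}
    }
}
```

Run two groups against the same topic and every group gets its own copy of the stream, which is how Kafka covers both queue and broadcast semantics.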
The Topic, the Message and the Partition
Traditional messaging patterns, the message queue and publish-subscribe, have some limitations that result from their design.
In the previous post, Apache Kafka Ideas – Part 1, a couple of messaging use cases were introduced. To implement those cases with Kafka, it is important to understand its core ideas. At the very heart of Kafka are topics and partitions; this post explains the basic concepts behind them.
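One concept worth previewing: a message's key determines which partition of a topic it lands in. Kafka's default partitioner uses a murmur2 hash of the key bytes; the sketch below substitutes Java's `hashCode()` purely for illustration and is not Kafka code.

```java
public class PartitionSketch {
    // Map a message key to one of N partitions. Kafka's default partitioner
    // computes murmur2(keyBytes) % numPartitions; hashCode() stands in here.
    static int partitionFor(String key, int numPartitions) {
        // floorMod keeps the result in [0, numPartitions) even for negative hashes
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        int numPartitions = 3;
        // The same key always maps to the same partition, which is what
        // preserves per-key ordering inside a topic.
        System.out.println(partitionFor("order-42", numPartitions)
                == partitionFor("order-42", numPartitions)); // true
    }
}
```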
What is Apache Kafka?
Apache Kafka can be thought of as a message broker. It has the following characteristics:
- allows sending messages between two parties
- allows one-to-one (peer-to-peer, queue) or one-to-many (broadcast, topic) message delivery
- persists messages
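The difference between the two delivery styles in the list above can be shown with a toy model (plain Java, not Kafka's API; the broker here is just two methods made up for this sketch): in queue mode exactly one consumer handles a message, while in broadcast mode every subscriber gets a copy.

```java
import java.util.*;

public class DeliverySketch {
    // One-to-one (queue): exactly one consumer receives the message.
    static String queueDelivery(List<String> consumers, String msg) {
        return consumers.get(0) + " got " + msg;
    }

    // One-to-many (broadcast/topic): every subscriber receives a copy.
    static List<String> broadcastDelivery(List<String> subscribers, String msg) {
        List<String> deliveries = new ArrayList<>();
        for (String s : subscribers) deliveries.add(s + " got " + msg);
        return deliveries;
    }

    public static void main(String[] args) {
        List<String> clients = List.of("A", "B");
        System.out.println(queueDelivery(clients, "m1"));     // A got m1
        System.out.println(broadcastDelivery(clients, "m1")); // [A got m1, B got m1]
    }
}
```

Later posts show how Kafka expresses both styles with a single mechanism, consumer groups, rather than two separate destination types.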
What ideas are behind Kafka, and how does it differ from a classical broker? In this series of posts you'll find out how Apache Kafka works and learn how to run and use a Kafka cluster.