KVM stands for Kernel-based Virtual Machine and, according to Red Hat, it is an open source virtualization technology built into Linux. Specifically, KVM lets you turn Linux into a hypervisor that allows a host machine to run multiple, isolated virtual environments called guests or virtual machines (VMs).
In today’s article, I will mainly cover the topic of backing up KVM VMs.
First, switch to the root user and list all of your available KVM virtual machines.
sudo su
virsh list --all
Next, you need to shut down the VM you want to back up.
virsh shutdown vm_name
Then run the following to verify that the above command worked well. …
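As an illustration (the article’s exact verification command is elided above, so take this as an assumption rather than the author’s own step), you can check that the VM has actually stopped, for example:
virsh list --all
# the VM you shut down should now appear with the state "shut off"
# you can also query that specific VM directly (vm_name is a placeholder)
virsh domstate vm_name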
According to IBM, DevOps has become an increasingly common approach to software delivery that development and operations teams use to build, test, deploy, and monitor applications with speed, quality, and control.
DevOps is essential for any business aspiring to be lean, agile, and capable of responding rapidly to changing marketplace demands. It is an approach on the journey to lean and agile software delivery that promotes closer collaboration between lines of business, development, and IT operations while removing barriers between stakeholders and customers.
Development teams need to design, develop, deliver, and run the software as quickly and reliably as possible. Operations teams need to identify and resolve problems as soon as possible by monitoring, predicting failures, managing the environment, and fixing issues. Combining this common approach across Dev and Ops with the ability to monitor and analyze bottlenecks and optimize as quickly as possible gives birth to DevOps: a collaborative approach across business, development, and operations teams to deliver and run reliable software as quickly and efficiently as possible. …
In my previous articles, I covered monitoring using Prometheus along with Grafana, and alerting using Alertmanager integrated with Prometheus.
In today’s article, I will mainly cover integrating the Postgres exporter with Prometheus in order to collect PostgreSQL database metrics for monitoring purposes.
First, you need to download the postgres_exporter binary tar archive and untar it as follows:
mkdir /opt/postgres_exporter
cd /opt/postgres_exporter
wget https://github.com/wrouesnel/postgres_exporter/releases/download/v0.5.1/postgres_exporter_v0.5.1_linux-amd64.tar.gz
tar -xzvf postgres_exporter_v0.5.1_linux-amd64.tar.gz
cd postgres_exporter_v0.5.1_linux-amd64
sudo cp postgres_exporter /usr/local/bin
Then, you need to prepare an env file for the postgres exporter in which you will set a variable named DATA_SOURCE_NAME. …
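For illustration only, a minimal env file might look like the following; the file path and the connection string are placeholders to adapt to your own PostgreSQL host, user, password, and database:
# /opt/postgres_exporter/postgres_exporter.env (hypothetical path)
DATA_SOURCE_NAME="postgresql://postgres:mypassword@localhost:5432/postgres?sslmode=disable"
The exporter reads this variable to know which PostgreSQL instance it should scrape metrics from.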
In today’s world we deal a lot with real-time streaming data and events that come from a bunch of different sources and from which we can derive stats, facts and information that are crucial for today’s businesses.
Processing and storing these real-time streams in a scalable and fault-tolerant manner is now possible using Kafka as an Event Streaming Platform and KsqlDB as an Event Streaming Database. These technologies have made Event Stream Processing Applications thrive.
In today’s article, I will discuss the huge benefit that the Logpickr Process Explorer 360 platform can bring to any kind of business that uses ServiceNow as its task management platform, and the contribution of the Confluent Platform in such a context. …
The Splunk Source connector provided by Confluent does not support receiving data from a Splunk Universal Forwarder or Splunk Heavy Forwarder.
Instead, this connector simply listens on a network port for data from an application that posts to Splunk.
In this article, I will be focusing on setting up the Splunk Source connector using a containerized Splunk and Confluent Platform.
For this demonstration, I will be using:
First, you need to have Docker installed. To install Docker, follow these steps for Ubuntu:
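As a hedged sketch (the article’s own installation steps are not reproduced here), one common way to install Docker on Ubuntu is through the distribution packages:
sudo apt-get update
sudo apt-get install -y docker.io
# enable and start the Docker service
sudo systemctl enable --now docker
# optionally, let your user run docker without sudo (takes effect after re-login)
sudo usermod -aG docker $USER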
In today’s world we deal a lot with real-time streaming data and events that come from a bunch of different sources and from which we can derive stats, facts and information that are crucial for today’s businesses.
Processing and storing these real-time streams in a scalable and fault-tolerant manner is now possible using Kafka as an Event Streaming Platform and KsqlDB as an Event Streaming Database. These technologies have made Event Stream Processing Applications thrive.
I have planned a series of articles that explain and demonstrate how to connect the following data sources to the Kafka cluster using Kafka Connect connectors…
With the increasing amount of company data, the task of protecting it becomes more challenging, especially in the cloud. As a result, the demand for reliable backup and recovery solutions has never been greater.
According to IBM, backup and restore refers to technologies and practices for making periodic copies of data to a separate, secondary device, and then using those copies to recover critical company data when data is lost or damaged due to events such as a power outage, cyberattack, human error, or disaster.
The aim of this article is to explain how to back up a Postgres database to S3 Object Storage using backup scripts provided by the Postgres wiki and the s3fs-fuse project explained in my previous article. It also explains how to restore your database once it is backed up. …
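As a simplified sketch only (not the Postgres wiki scripts themselves), and assuming the S3 bucket is already mounted at /mnt/s3-backup via s3fs-fuse, the backup and restore steps could look like this; database and file names are placeholders:
# dump the database in custom format straight onto the mounted bucket
pg_dump -U postgres -Fc mydb > /mnt/s3-backup/mydb_$(date +%F).dump
# later, restore the dump into an existing (or newly created) database
pg_restore -U postgres -d mydb /mnt/s3-backup/mydb_YYYY-MM-DD.dump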
In my previous articles, I explained how to mount an Object Storage bucket as a file system on Flexible Engine’s ECS instances.
In this article, I will be using the s3fs-fuse project along with the Swift S3 API to mount an OVH Object Storage bucket on an OVH instance.
Object Storage allows you to store a large amount of unstructured data in a highly scalable manner using REST APIs.
The benefits of using such a service are:
Durability: data is kept safe in the cloud.
Availability: data is available and accessible remotely through REST APIs. …
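To illustrate the mounting step itself, here is a hedged sketch; the bucket name, mount point, credentials, and endpoint URL are placeholders to adapt to your own OVH region and project:
# store the access and secret keys in the format expected by s3fs
echo "ACCESS_KEY:SECRET_KEY" | sudo tee /etc/passwd-s3fs
sudo chmod 600 /etc/passwd-s3fs
# mount the bucket on the instance
sudo mkdir -p /mnt/object-storage
sudo s3fs my-bucket /mnt/object-storage -o passwd_file=/etc/passwd-s3fs -o url=https://s3.gra.cloud.ovh.net -o use_path_request_style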
In today’s world we deal a lot with real-time streaming data and events that come from a bunch of different sources and from which we can derive stats, facts and information that are crucial for today’s businesses.
Processing and storing these real-time streams in a scalable and fault-tolerant manner is now possible using Kafka as an Event Streaming Platform and KsqlDB as an Event Streaming Database. These technologies have made Event Stream Processing Applications thrive.
I have explained the basics behind Apache Kafka and ZooKeeper in my previous article. In this article, however, I will be dealing with the concepts behind KsqlDB and how we can derive a huge benefit from it by linking it to BPM along with Process Mining as a use case. …
According to Packer’s documentation, Packer is an open source tool for creating identical machine images for multiple platforms from a single source configuration.
A machine image is a single static unit that contains a pre-configured operating system and installed software which is used to quickly create new running machines.
There are many advantages to using Packer: