Confluent Platform: Connecting Splunk to Kafka
Introduction
The Splunk Source connector provided by Confluent does not support receiving data from a Splunk Universal Forwarder or Splunk Heavy Forwarder.
Instead, the connector simply listens on a network port, imitating the Splunk HTTP Event Collector (HEC) so that applications which would normally post data to Splunk can post it to Kafka.
In this article, I will focus on setting up the Splunk Source connector using a containerized Splunk and Confluent Platform.
For this demonstration, I will be using:
- Docker, in order to run the Kafka cluster and Splunk.
- Confluent Platform, in order to have the necessary command-line tools such as confluent-hub.
Requirements
First, you need to have Docker installed. On Ubuntu, you can install it as follows:
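For example, here is a minimal installation using Docker's convenience script (see the official Docker docs if you prefer distribution packages); docker-compose is needed later in this walkthrough as well:

# install the Docker engine
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# optional: run docker without sudo (takes effect after re-login)
sudo usermod -aG docker $USER
# install docker-compose, used to run the demo stack
sudo apt-get install -y docker-compose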
Then, you need to install Confluent Platform, an enterprise event streaming platform built around Apache Kafka that lets you build real-time data streaming applications.
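You can download the archive from Confluent's package repository; the URL below follows Confluent's archive layout for version 5.5.1 built against Scala 2.12, matching the file extracted next:

curl -O https://packages.confluent.io/archive/5.5/confluent-5.5.1-2.12.tar.gz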
tar -xzf confluent-5.5.1-2.12.tar.gz
After that, add Confluent to your PATH as follows:
sudo su
nano /etc/profile
Add the following
export CONFLUENT_HOME=/home/ubuntu/confluent-5.5.1
export PATH=$PATH:$CONFLUENT_HOME/bin
source /etc/profile
The Confluent Hub client is bundled with Confluent Platform. It allows you to install the different connectors that link Kafka and ksqlDB to your data sources.
confluent-hub
For this demonstration, we will install the Splunk Source and Splunk Sink connectors. During the installation, select the second option, 2. (installed rpm/deb package structure). The connectors will be installed in /usr/share/confluent-hub-components.
Splunk Source Connector
confluent-hub install confluentinc/kafka-connect-splunk-source:1.0.2
Splunk Sink Connector
confluent-hub install splunk/kafka-connect-splunk:1.2.0
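You can verify that both connectors landed in the expected directory:

ls /usr/share/confluent-hub-components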
Confluent Platform along with Splunk using Docker
First, create the /opt/docker-compose directory and put the following docker-compose.yml file in it:
sudo mkdir /opt/docker-compose
sudo mkdir /opt/docker-compose/security
sudo mkdir /opt/docker-compose/data
sudo nano /opt/docker-compose/docker-compose.yml
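The full docker-compose.yml is not reproduced here; below is a minimal sketch of the kind of file this walkthrough assumes. The service names (broker, connect, control-center, splunk), image versions, and port mappings are assumptions inferred from the commands used later, so adjust them to your own setup:

version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.5.1
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  broker:
    image: confluentinc/cp-kafka:5.5.1
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # one listener for other containers (broker:29092), one for the host (localhost:9092)
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  connect:
    image: confluentinc/cp-kafka-connect:5.5.1
    depends_on:
      - broker
    ports:
      - "8083:8083"   # Kafka Connect REST API
    environment:
      CONNECT_BOOTSTRAP_SERVERS: broker:29092
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_GROUP_ID: connect-cluster
      CONNECT_CONFIG_STORAGE_TOPIC: _connect-configs
      CONNECT_OFFSET_STORAGE_TOPIC: _connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: _connect-status
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      # pick up the connectors installed with confluent-hub
      CONNECT_PLUGIN_PATH: /usr/share/java,/usr/share/confluent-hub-components
    volumes:
      - /usr/share/confluent-hub-components/:/usr/share/confluent-hub-components
      - ./security:/etc/kafka/secrets
      - ./data:/data
  control-center:
    image: confluentinc/cp-enterprise-control-center:5.5.1
    depends_on:
      - broker
      - connect
    ports:
      - "9021:9021"
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: broker:29092
      CONTROL_CENTER_CONNECT_CLUSTER: connect:8083
      CONTROL_CENTER_REPLICATION_FACTOR: 1
  splunk:
    image: splunk/splunk:latest
    ports:
      - "8000:8000"   # Splunk web UI
    environment:
      SPLUNK_START_ARGS: --accept-license
      SPLUNK_PASSWORD: password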
- Ensure that the installed connectors are mounted in the connect container inside docker-compose.yml as follows:
volumes:
- /usr/share/confluent-hub-components/:/usr/share/confluent-hub-components
- Ensure that the created certificates are mounted inside the connect container as follows:
volumes:
- ./security:/etc/kafka/secrets
Note that the source connector cannot be created without providing splunk.ssl.key.store.path, even though SSL is disabled. For that reason, I created a random key store and used it just so the connector could be created; since SSL is disabled, its contents don't matter and everything will work.
For more information about key stores, check out this article on Confluent Security.
Inside the security folder, create a key store named kafka1 by running the following:
keytool -genkey -noprompt \
 -alias kafka1 \
 -dname "CN=kafka1,OU=TEST,O=CONFLUENT,L=PaloAlto,S=Ca,C=US" \
 -ext "SAN=dns:kafka1,dns:localhost" \
 -keystore kafka.kafka1.keystore.jks \
 -keyalg RSA \
 -storepass confluent \
 -keypass confluent \
 -storetype pkcs12
This will give you:
kafka.kafka1.keystore.jks
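If you want to double-check the generated key store, you can list its contents (the store password is the one used above):

keytool -list -v -keystore kafka.kafka1.keystore.jks -storepass confluent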
- Ensure that a volume is associated with the connect container as follows:
volumes:
- ./data:/data
Demonstration
First, run docker-compose:
docker-compose up -d
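Before moving on, you can check that all the services came up:

docker-compose ps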
Configuring Splunk HTTP Event Collector (HEC) token
Access Splunk's UI at http://localhost:8000:
# use the username 'admin'
# use the password 'password' defined in the docker-compose.yml file
Once connected, click on the Splunk Enterprise icon and change your license group to Free license. Then save and restart Splunk.
Click on Settings then on Data inputs
Then choose the HTTP Event Collector option
Click on Global Settings and set the following configuration.
Then save. Next, create a new token as follows:
Submit it:
Once done, copy your HEC token somewhere.
Creating Splunk Source Connector
1. Access your Confluent Control Center:
http://localhost:9021
2. In the Connect section, click on Upload connector, then upload the following connector configuration:
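The connector file is not reproduced here; a minimal configuration consistent with this walkthrough might look like the following (the connector class and splunk.* property names come from Confluent's Splunk Source connector documentation; the topic, port, and key store path match the values used elsewhere in this article, while the index name is a placeholder):

{
  "name": "SplunkSource",
  "config": {
    "connector.class": "io.confluent.connect.SplunkHttpSourceConnector",
    "tasks.max": "1",
    "kafka.topic": "splunk-source",
    "splunk.port": "8889",
    "splunk.collector.index.default": "default-index",
    "splunk.ssl.key.store.path": "/etc/kafka/secrets/kafka.kafka1.keystore.jks",
    "splunk.ssl.key.store.password": "confluent",
    "confluent.topic.bootstrap.servers": "broker:29092",
    "confluent.topic.replication.factor": "1"
  }
}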
3. You can check the connector's status as follows:
curl -s -X GET http://127.0.0.1:8083/connectors/SplunkSource/status
4. Post some data to Splunk using your HEC token:
curl -u "hec-name:hec-token" -k -X POST https://splunk:8889/services/collector/event -d '{"event":"from curl"}'
You will see that your data has been successfully posted when you search for it in Splunk based on your HEC token.
However, no data has been imported into the splunk-source topic:
docker-compose exec broker kafka-console-consumer --topic splunk-source --from-beginning --bootstrap-server broker:29092
This is because the Splunk Source connector provided by Confluent does not support receiving data from a Splunk Universal Forwarder or Splunk Heavy Forwarder; it simply listens on a network port for applications that post data to Splunk.
Our connector resides in the connect container and listens on port 8889. Let's connect to that container and post some data:
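Assuming the Connect service is named connect in docker-compose.yml, open a shell inside the container first:

docker-compose exec connect bash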
curl -u "hec-name:hec-token" -k -X POST https://localhost:8889/services/collector/event -d '{"event":"from curl"}'
Now, when you run:
docker-compose exec broker kafka-console-consumer --topic splunk-source --from-beginning --bootstrap-server broker:29092
You can see that the posted data has been imported. This is because in the second POST we didn't talk to Splunk itself (https://splunk:8889); instead, we posted to the connector, which imitates Splunk (https://localhost:8889).
You can run the following command to stop and remove all Docker containers:
docker stop $(docker ps -a -q) && docker rm $(docker ps -a -q)
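Alternatively, to tear down only this demo's services rather than every container on the host, run the following from the /opt/docker-compose directory:

docker-compose down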