
Build a SOC Homelab with Docker - Elasticsearch, Cribl, and Log Simulation

Security · DevOps


Learning security operations requires hands-on practice with real tools. But setting up a full SOC environment traditionally means expensive licenses, complex infrastructure, and hours of configuration.

This guide shows how to build a complete SOC homelab using Docker: Elasticsearch for storage and search, Kibana for visualization, Cribl Stream for log routing and transformation, and simulated log generators to create realistic data. All of it runs on your laptop.

TL;DR

  • Full SOC stack in Docker Compose
  • Elasticsearch + Kibana for SIEM functionality
  • Cribl Stream (leader + worker) for log routing
  • Simulated logs: Linux syslog, firewall alerts, application JSON
  • NGINX reverse proxy for unified access
  • Single command deployment

Architecture Overview

┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   Linux VM      │     │  Firewall Logs  │     │   App Logs      │
│   (syslog)      │     │  (alerts)       │     │   (JSON)        │
└────────┬────────┘     └────────┬────────┘     └────────┬────────┘
         │                       │                       │
         └───────────────────────┼───────────────────────┘


                    ┌────────────────────────┐
                    │    Cribl Worker        │
                    │    (log ingestion)     │
                    └────────────┬───────────┘


                    ┌────────────────────────┐
                    │    Cribl Leader        │
                    │    (management)        │
                    └────────────┬───────────┘


                    ┌────────────────────────┐
                    │    Elasticsearch       │
                    │    (storage/search)    │
                    └────────────┬───────────┘


                    ┌────────────────────────┐
                    │    Kibana              │
                    │    (visualization)     │
                    └────────────────────────┘


                    ┌────────────────────────┐
                    │    NGINX Proxy         │
                    │    (unified access)    │
                    └────────────────────────┘

Components

Service          Purpose                        Port
-------          -------                        ----
Elasticsearch    Log storage and search         9200
Kibana           Visualization and dashboards   5601
Cribl Leader     Cribl Stream management UI     9000
Cribl Worker     Log ingestion and routing      -
NGINX            Reverse proxy                  8080
Log Generators   Simulated security events      -

Project Structure

soclab/
├── docker-compose.yml
└── configs/
    ├── elasticsearch/
    │   └── elasticsearch.yml
    ├── kibana/
    │   └── kibana.yml
    └── nginx/
        └── nginx.conf
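If you are building the lab by hand rather than cloning the repo, you can scaffold this layout in one go and fill in the files from the sections below:

```shell
# Create the soclab directory tree shown above.
mkdir -p soclab/configs/elasticsearch soclab/configs/kibana soclab/configs/nginx

# Create empty files to be filled in with the configs below.
touch soclab/docker-compose.yml \
      soclab/configs/elasticsearch/elasticsearch.yml \
      soclab/configs/kibana/kibana.yml \
      soclab/configs/nginx/nginx.conf
```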

Docker Compose Configuration

Full Stack Definition

services:
  # ------------------------
  # Elastic Stack
  # ------------------------
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    container_name: shl-elasticsearch
    environment:
      - node.name=shl-elasticsearch
      - cluster.name=shl-cluster
      - discovery.type=single-node
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
      - xpack.security.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
    networks:
      - shl-network
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:9200/_cluster/health || exit 1"]
      interval: 30s
      retries: 5
      start_period: 60s

  kibana:
    image: docker.elastic.co/kibana/kibana:8.11.0
    container_name: shl-kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - SERVER_NAME=shl-kibana
      - SERVER_HOST=0.0.0.0
    volumes:
      - ./configs/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml:ro
    ports:
      - "5601:5601"
    depends_on:
      elasticsearch:
        condition: service_healthy
    networks:
      - shl-network
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:5601/api/status || exit 1"]
      interval: 30s
      retries: 5

  # ------------------------
  # Cribl Stack (Leader + Workers)
  # ------------------------
  cribl-leader:
    image: cribl/cribl:latest
    container_name: shl-cribl-leader
    environment:
      - CRIBL_DIST_MODE=leader
      - CRIBL_ADMIN_PASSWORD=cribl123
    ports:
      - "9000:9000"
    networks:
      - shl-network
    volumes:
      - cribl_leader_data:/opt/cribl/local

  cribl-worker1:
    image: cribl/cribl:latest
    container_name: shl-cribl-worker1
    environment:
      - CRIBL_DIST_MODE=worker
      - CRIBL_LEADER=https://cribl-leader:9000
      - CRIBL_ADMIN_PASSWORD=cribl123
    depends_on:
      - cribl-leader
    networks:
      - shl-network
    volumes:
      - cribl_worker_data:/opt/cribl/local

  # ------------------------
  # Log Generators (Linux + Firewall + App)
  # ------------------------
  linux-vm:
    image: ubuntu:22.04
    container_name: shl-linux-vm
    # The container runs no syslog daemon, so `logger` sends straight to the
    # Cribl worker over TCP instead of the (missing) local /dev/log socket.
    command: >
      /bin/bash -c "while true; do 
        logger -n cribl-worker1 -P 514 -T 'Linux VM syslog test'; 
        sleep 5; 
      done"
    depends_on:
      - cribl-worker1
    networks:
      - shl-network

  firewall-logs:
    image: alpine
    container_name: shl-firewall-logs
    command: >
      /bin/sh -c "while true; do 
        echo 'FIREWALL ALERT: port scan detected' | nc -w 1 cribl-worker1 514; 
        sleep 10; 
      done"
    depends_on:
      - cribl-worker1
    networks:
      - shl-network

  app-logs:
    image: alpine
    container_name: shl-app-logs
    command: >
      /bin/sh -c "while true; do 
        echo '{\"level\":\"info\",\"msg\":\"app request served\"}' | nc -w 1 cribl-worker1 514; 
        sleep 7; 
      done"
    depends_on:
      - cribl-worker1
    networks:
      - shl-network

  # ------------------------
  # NGINX Proxy
  # ------------------------
  nginx:
    image: nginx:alpine
    container_name: shl-nginx
    ports:
      - "8080:80"
    volumes:
      - ./configs/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - kibana
      - cribl-leader
    networks:
      - shl-network

volumes:
  elasticsearch_data:
  cribl_leader_data:
  cribl_worker_data:

networks:
  shl-network:
    driver: bridge

Configuration Files

Kibana Configuration

# configs/kibana/kibana.yml
server.name: shl-kibana
server.host: "0.0.0.0"
server.basePath: "/kibana"
# NGINX strips the /kibana prefix before proxying, so Kibana must not rewrite it
server.rewriteBasePath: false
elasticsearch.hosts: ["http://elasticsearch:9200"]

NGINX Reverse Proxy

# configs/nginx/nginx.conf
events {}

http {
  server {
    listen 80;

    # Health check
    location = / {
      return 200 'ok\n';
      add_header Content-Type text/plain;
    }

    # Kibana reverse proxy
    location /kibana/ {
      proxy_pass http://shl-kibana:5601/;
      proxy_http_version 1.1;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_redirect off;
    }

    # Cribl reverse proxy
    location /cribl/ {
      rewrite ^/cribl/(.*)$ /$1 break;
      proxy_pass http://shl-cribl-leader:9000/;
      proxy_http_version 1.1;
      
      # WebSocket support
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
      
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_redirect / /cribl/;
    }
  }
}

Deployment

Start the Lab

# Clone the repository
git clone https://github.com/moabukar/soclab-v2.git
cd soclab-v2

# Start all services
docker compose up -d

# Watch the logs
docker compose logs -f

Access the UIs

Service          URL
-------          ---
Kibana           http://localhost:8080/kibana/app/home#/
Cribl            http://localhost:8080/cribl/
Elasticsearch    http://localhost:9200

Default Credentials

  • Cribl: admin / cribl123

What You Get

Simulated Log Sources

The lab includes three log generators that create realistic security data:

1. Linux VM (syslog)

  • Generates standard Linux syslog messages
  • Interval: every 5 seconds
  • Use case: System monitoring, authentication logs

2. Firewall Logs

  • Simulates firewall alerts (port scans, blocked connections)
  • Interval: every 10 seconds
  • Use case: Network security monitoring

3. Application Logs

  • JSON-formatted application logs
  • Interval: every 7 seconds
  • Use case: Application security, error tracking
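Stripped of their loops, one event per generator looks like this. The syslog-style timestamp prefix is illustrative only; the compose file above sends the bare strings, and a downstream pipeline typically adds the timestamp:

```shell
# Build one sample event per source (timestamp prefix added for illustration).
ts=$(date '+%b %d %H:%M:%S')

syslog_event="$ts shl-linux-vm demo: Linux VM syslog test"
fw_event="$ts shl-firewall FIREWALL ALERT: port scan detected"
app_event='{"level":"info","msg":"app request served"}'

printf '%s\n' "$syslog_event" "$fw_event" "$app_event"
```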

Cribl Stream

Cribl acts as a log router and processor:

  • Leader node: Management UI, configuration
  • Worker node: Receives logs on port 514 (syslog)
  • Can transform, filter, and route logs to multiple destinations
  • Supports data reduction and enrichment

Elastic Stack

  • Elasticsearch: Stores and indexes all log data
  • Kibana: Create dashboards, run queries, build alerts

Lab Exercises

Exercise 1: View Incoming Logs

  1. Open Kibana at http://localhost:8080/kibana/
  2. Go to Discover
  3. Create a data view (what Kibana 8 calls index patterns) for your logs
  4. Watch logs appear in real-time

Exercise 2: Configure Cribl Pipeline

  1. Open Cribl at http://localhost:8080/cribl/
  2. Navigate to Sources > Syslog
  3. Create a pipeline to:
    • Parse JSON from app-logs
    • Extract fields from firewall alerts
    • Add metadata (source, timestamp)
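Cribl stores each pipeline as a `conf.yml` file, so the steps above end up looking roughly like the sketch below. This is a rough outline based on Cribl's Parser (`serde`) and Eval functions, not an exact export; the simplest approach is to build the pipeline in the UI and export it to see the precise schema for your version:

```yaml
# Rough sketch of a pipeline conf.yml (verify against a UI export)
conf:
  functions:
    # Parse JSON app logs into top-level fields
    - id: serde
      filter: "_raw.startsWith('{')"
      conf:
        mode: extract
        type: json
        srcField: _raw
    # Tag every event with lab metadata
    - id: eval
      conf:
        add:
          - name: lab_source
            value: "'soclab'"
```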

Exercise 3: Build a Security Dashboard

  1. In Kibana, go to Dashboard
  2. Create visualizations:
    • Log volume over time
    • Top log sources
    • Firewall alert frequency
    • Error rate from applications

Exercise 4: Create Alerts

  1. In Kibana, go to Stack Management > Rules
  2. Create a rule for:
    • More than 10 firewall alerts in 1 minute
    • Any ERROR level application logs
    • Unusual log volume spikes
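The first rule maps to a simple count query. Assuming the firewall events land in an index matching `logs-*` with the raw text in a `message` field and a `@timestamp` (adjust all three to match your actual Cribl output), you can prototype it in Kibana's Dev Tools console before wiring up the rule:

```json
POST logs-*/_count
{
  "query": {
    "bool": {
      "filter": [
        { "match_phrase": { "message": "FIREWALL ALERT" } },
        { "range": { "@timestamp": { "gte": "now-1m" } } }
      ]
    }
  }
}
```

If the returned `count` exceeds 10, the rule should fire.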

Scaling the Lab

Add More Workers

cribl-worker2:
  image: cribl/cribl:latest
  container_name: shl-cribl-worker2
  environment:
    - CRIBL_DIST_MODE=worker
    - CRIBL_LEADER=https://cribl-leader:9000
    - CRIBL_ADMIN_PASSWORD=cribl123
  depends_on:
    - cribl-leader
  networks:
    - shl-network

Add More Log Sources

windows-logs:
  image: alpine
  container_name: shl-windows-logs
  command: >
    /bin/sh -c "while true; do 
      echo 'EventID=4625 Account=admin FailureReason=BadPassword' | nc -w 1 cribl-worker1 514; 
      sleep 8; 
    done"
  depends_on:
    - cribl-worker1
  networks:
    - shl-network

Enable Security

For production-like testing, enable Elasticsearch security:

elasticsearch:
  environment:
    - xpack.security.enabled=true
    # Change this before exposing the lab anywhere
    - ELASTIC_PASSWORD=changeme
    # With security on, Kibana needs its own credentials (the kibana_system
    # user; the elastic superuser is not accepted for that connection), and
    # the curl healthcheck above will also need -u elastic:<password>.

Troubleshooting

Elasticsearch Won’t Start

Check the kernel's memory-map limit, which Elasticsearch requires to be raised:

# Increase vm.max_map_count (Linux)
sudo sysctl -w vm.max_map_count=262144

# Make it permanent
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf

Kibana Can’t Connect to Elasticsearch

Wait for Elasticsearch to be healthy:

# Check Elasticsearch health
curl http://localhost:9200/_cluster/health

# Check container status
docker compose ps

Logs Not Appearing

Verify Cribl worker is receiving data:

# Check Cribl worker logs
docker compose logs cribl-worker1

# Test syslog connectivity (port 514 is not published on the host,
# so send from inside the Docker network)
docker compose exec firewall-logs sh -c 'echo "test message" | nc -w 1 cribl-worker1 514'

Cleanup

# Stop all services
docker compose down

# Remove volumes (delete all data)
docker compose down -v

# Remove everything including images
docker compose down -v --rmi all

Next Steps

  1. Add Filebeat - Collect logs from files instead of syslog
  2. Integrate with MISP - Add threat intelligence feeds
  3. Deploy Wazuh - Add endpoint detection
  4. Try Sigma Rules - Implement detection rules in Elasticsearch
  5. Add Grafana - Alternative visualization option

Repository

Full source code: github.com/moabukar/soclab-v2


A full SOC environment on your laptop. No licenses, no cloud bills, no excuses. Happy hunting.
