ELK Stack Migration: From 6.x to 8.x - The Complete Guide
Migrating an ELK stack from 6.x to 8.x isn’t a simple version bump. It’s a multi-step journey with breaking changes at every major version, index compatibility requirements, and fundamental architectural shifts - especially around security.
I recently completed this migration for a client running a 15-node production cluster with 50TB of logs. This post documents the complete process: the required upgrade path, breaking changes, migration strategies, and the exact steps we followed.
The Upgrade Path: You Can’t Skip Versions
Critical: You cannot directly upgrade from Elasticsearch 6.x to 8.x. The supported upgrade path is:
6.x → 6.8 (latest) → 7.17 (latest 7.x) → 8.x
Why? Each major version can only read indices from the previous major version:
| Elasticsearch Version | Can Read Indices Created In |
|---|---|
| 6.x | 5.x, 6.x |
| 7.x | 6.x, 7.x |
| 8.x | 7.x, 8.x |
This means:
- ES 7.x cannot read indices created in ES 5.x - must reindex first
- ES 8.x cannot read indices created in ES 6.x - must go through 7.x first
If you have any indices created in 5.x or earlier, you must reindex them before upgrading to 7.x.
Code Repository: All code from this post is available at github.com/moabukar/blog-code/elk-6-to-8-migration
Phase 1: Preparation and Assessment
Step 1: Inventory Your Cluster
Before touching anything, document your current state:
# Get cluster health
curl -X GET "localhost:9200/_cluster/health?pretty"
# List all indices with creation version
curl -X GET "localhost:9200/_cat/indices?v&h=index,creation.date.string,pri,rep,docs.count,store.size"
# Check index settings for compatibility issues
curl -s -X GET "localhost:9200/_settings" | jq 'to_entries[] | {index: .key, created: .value.settings.index.version.created}'
# Get current version
curl -X GET "localhost:9200/"
Step 2: Identify Indices Created in 5.x
ES 7.x cannot read 5.x indices. Check for them:
# version.created is a numeric version ID whose leading digit is the major
# version - anything starting with "5" (or "2"/"1") predates 6.x
curl -s "localhost:9200/_settings" | jq -r '
to_entries[] |
select(.value.settings.index.version.created | startswith("5") or startswith("2") or startswith("1")) |
.key'
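Reading those raw IDs is easier with a small helper. The encoding (major * 1,000,000 + minor * 10,000 + patch * 100 + a build suffix) is an internal Elasticsearch convention, so spot-check it against a version you know before relying on it:

```shell
# Decode an index.version.created ID into "major.minor.patch".
# The encoding (major*1000000 + minor*10000 + patch*100 + build)
# is an internal Elasticsearch convention.
decode_es_version() {
  local id=$1
  echo "$((id / 1000000)).$((id / 10000 % 100)).$((id / 100 % 100))"
}

decode_es_version 6082399   # an index created by 6.8.23
decode_es_version 5061699   # created by 5.6.16 - needs a reindex
```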
If you have 5.x indices, you must reindex them while still on 6.x:
# Reindex old index to a new one
POST _reindex
{
"source": {
"index": "old-5x-index"
},
"dest": {
"index": "old-5x-index-reindexed"
}
}
# Delete old index
DELETE old-5x-index
# Optionally rename
POST _aliases
{
"actions": [
{ "add": { "index": "old-5x-index-reindexed", "alias": "old-5x-index" }}
]
}
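For large 5.x indices, a synchronous reindex can time out at the client. One option is to run it asynchronously and poll the task API (index names here are placeholders):

```
POST _reindex?wait_for_completion=false
{
  "source": { "index": "old-5x-index" },
  "dest": { "index": "old-5x-index-reindexed" }
}

# Poll progress, either via the task ID returned above or:
GET _tasks?actions=*reindex&detailed=true
```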
Step 3: Install the Upgrade Assistant
For Kibana 6.7+, use the Upgrade Assistant:
- Open Kibana
- Go to Management → Upgrade Assistant (under Stack Management in 7.x)
- Review all deprecation warnings
- Fix all critical issues before proceeding
The Upgrade Assistant identifies:
- Deprecated index settings
- Deprecated cluster settings
- Mappings that need updating
- Deprecated API usage in Kibana saved objects
Step 4: Backup Everything
# Create a snapshot repository (if not exists)
PUT _snapshot/migration_backup
{
"type": "fs",
"settings": {
"location": "/mnt/backups/es-migration"
}
}
# Take a full snapshot
PUT _snapshot/migration_backup/pre-migration-snapshot?wait_for_completion=true
{
"indices": "*",
"include_global_state": true
}
# Verify the snapshot
GET _snapshot/migration_backup/pre-migration-snapshot
Also backup:
- Kibana saved objects (export from Management → Saved Objects)
- Logstash pipelines
- All configuration files (elasticsearch.yml, kibana.yml, logstash.yml)
- Any custom scripts or integrations
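A small script can bundle those config files before you touch anything. This is a sketch assuming the Debian/Ubuntu package paths - adjust for your layout:

```shell
#!/usr/bin/env bash
# Copy the main ELK config files into a dated tarball.
# Paths assume the Debian/Ubuntu package layout; files that
# don't exist on this host are simply skipped.
backup_dir=$(mktemp -d)
for f in /etc/elasticsearch/elasticsearch.yml \
         /etc/kibana/kibana.yml \
         /etc/logstash/logstash.yml; do
  if [ -f "$f" ]; then
    cp "$f" "$backup_dir/"
  fi
done
tar czf "elk-config-backup-$(date +%F).tar.gz" -C "$backup_dir" .
rm -rf "$backup_dir"
echo "Wrote elk-config-backup-$(date +%F).tar.gz"
```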
Phase 2: Upgrade to Latest 6.8.x
Always upgrade to the latest minor version before a major upgrade:
# On each node (rolling restart):
# 1. Disable shard allocation
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
"persistent": {
"cluster.routing.allocation.enable": "primaries"
}
}'
# 2. Stop non-essential indexing and perform a synced flush
curl -X POST "localhost:9200/_flush/synced"
# 3. Stop Elasticsearch
sudo systemctl stop elasticsearch
# 4. Upgrade the package
# Debian/Ubuntu
sudo apt-get update && sudo apt-get install elasticsearch=6.8.23
# RHEL/CentOS
sudo yum install elasticsearch-6.8.23
# 5. Start Elasticsearch
sudo systemctl start elasticsearch
# 6. Wait for node to join
curl -X GET "localhost:9200/_cat/nodes"
# 7. Re-enable shard allocation
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
"persistent": {
"cluster.routing.allocation.enable": null
}
}'
# 8. Wait for green status before proceeding to next node
curl -X GET "localhost:9200/_cluster/health?wait_for_status=green&timeout=5m"
Repeat for all nodes, one at a time.
Phase 3: Upgrade 6.8 to 7.17
This is the most significant upgrade - many breaking changes occur here.
Breaking Changes in 7.0
1. Mapping Types Removed
ES 7.x removes mapping types. The _doc type becomes the only type.
6.x:
PUT my_index
{
"mappings": {
"my_type": {
"properties": {
"title": { "type": "text" }
}
}
}
}
PUT my_index/my_type/1
{
"title": "Hello"
}
7.x:
PUT my_index
{
"mappings": {
"properties": {
"title": { "type": "text" }
}
}
}
PUT my_index/_doc/1
{
"title": "Hello"
}
Indices created in 6.x with custom types will still work in 7.x (compatibility mode), but you should plan to migrate them.
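To retire a custom type once you're on 7.x, reindex the typed index into a new one - on 7.x the reindex writes typeless (_doc) documents (index names are placeholders):

```
POST _reindex
{
  "source": { "index": "typed-6x-index" },
  "dest": { "index": "typeless-index" }
}
```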
2. Discovery Configuration Changed
The discovery.zen.* settings are removed. New settings:
6.x (old):
discovery.zen.ping.unicast.hosts: ["host1", "host2"]
discovery.zen.minimum_master_nodes: 2
7.x (new):
discovery.seed_hosts: ["host1", "host2"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
Important: cluster.initial_master_nodes is only needed for the first cluster formation. Remove it after the cluster is running.
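Put together, a minimal 7.x discovery block for a three-node cluster might look like this (host and node names are placeholders):

```yaml
# elasticsearch.yml - 7.x discovery for a three-node cluster
cluster.name: mycluster
discovery.seed_hosts: ["es-node1", "es-node2", "es-node3"]
# Only for the very first cluster formation - remove afterwards
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
```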
3. Default Shards Changed
- Primary shards default changed from 5 to 1
- This only affects new indices
4. Lucene 8 Upgrade
ES 7 uses Lucene 8, which brings:
- Better query performance
- Faster top-k query execution via block-max WAND
- Some queries may behave differently
5. Java 11+ Required
ES 7.x bundles its own JDK; if you supply your own, it must be Java 11+. ES 6.x could run on Java 8.
The 6.8 → 7.17 Upgrade Process
# Ensure you're on 6.8.x latest
curl -X GET "localhost:9200/"
# Run Upgrade Assistant one more time
# Fix any remaining deprecation warnings
# Take another snapshot
PUT _snapshot/migration_backup/pre-7x-snapshot?wait_for_completion=true
# For each node (rolling upgrade):
# 1. Disable shard allocation
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
"persistent": {
"cluster.routing.allocation.enable": "primaries"
}
}'
# 2. Stop indexing and flush
curl -X POST "localhost:9200/_flush/synced"
# 3. Stop ES
sudo systemctl stop elasticsearch
# 4. Update configuration (elasticsearch.yml)
# - Replace discovery.zen.* with discovery.seed_hosts
# - Add cluster.initial_master_nodes (first time only)
# - Remove any deprecated settings flagged by Upgrade Assistant
# 5. Install 7.17
sudo apt-get install elasticsearch=7.17.18
# 6. Start ES
sudo systemctl start elasticsearch
# 7. Verify node joined
curl -X GET "localhost:9200/_cat/nodes?v"
# 8. Re-enable allocation
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
"persistent": {
"cluster.routing.allocation.enable": null
}
}'
# 9. Wait for green
curl -X GET "localhost:9200/_cluster/health?wait_for_status=green&timeout=5m"
# Proceed to next node
After all nodes are upgraded:
# Remove cluster.initial_master_nodes from elasticsearch.yml
# It's only needed for initial cluster bootstrap
# Verify cluster
curl -X GET "localhost:9200/_cluster/health?pretty"
curl -X GET "localhost:9200/_cat/indices?v"
Upgrade Kibana 6.8 → 7.17
Kibana must match the Elasticsearch version.
# Stop Kibana
sudo systemctl stop kibana
# Update kibana.yml if needed
# - elasticsearch.url is now elasticsearch.hosts
# Install 7.17
sudo apt-get install kibana=7.17.18
# Start Kibana
sudo systemctl start kibana
# Kibana will migrate saved objects automatically
# Check logs for migration status
sudo journalctl -u kibana -f
Upgrade Logstash 6.8 → 7.17
# Stop Logstash
sudo systemctl stop logstash
# Review pipeline configurations
# - Update any deprecated plugin options
# - document_type is no longer needed
# Install 7.17
sudo apt-get install logstash=7.17.18
# Start Logstash
sudo systemctl start logstash
Phase 4: Upgrade 7.17 to 8.x
The 7.x to 8.x upgrade is significant because security is enabled by default in ES 8.
Breaking Changes in 8.0
1. Security Enabled by Default
ES 8 enables security automatically:
- TLS for HTTP and transport layers
- Built-in users (elastic, kibana_system, etc.)
- API key authentication
If you weren’t using security before, this is a major change.
2. Discovery Settings Finalized
cluster.initial_master_nodes must not be set once the cluster has already formed. Remove it before upgrading.
3. Many Deprecated Settings Removed
All settings deprecated in 7.x are removed in 8.0:
- discovery.zen.* - completely removed
- node.max_local_storage_nodes - removed
- http.tcp_no_delay - renamed to http.tcp.no_delay
- Many more
4. Java 17+ Recommended
ES 8 bundles its own JDK, but if you provide your own, use Java 17+.
5. REST API Changes
- The _type path element is removed from document APIs
- The Content-Type header is always required
- Some query DSL changes
Preparing for Security
If your 7.x cluster didn’t have security enabled, you need to prepare:
Option A: Enable Security on 7.x First (Recommended)
# elasticsearch.yml on 7.17
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.http.ssl.enabled: true
# Generate certificates
bin/elasticsearch-certutil ca
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
# Set passwords
bin/elasticsearch-setup-passwords interactive
Option B: Let ES 8 Configure Security Automatically
When you start ES 8 for the first time, it will:
- Generate certificates
- Create the elastic superuser password
- Configure TLS
But this requires cluster downtime and reconfiguration of all clients.
The 7.17 → 8.x Upgrade Process
# Take a snapshot
PUT _snapshot/migration_backup/pre-8x-snapshot?wait_for_completion=true
# Upgrade each node (rolling upgrade):
# 1. Disable allocation
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
"persistent": {
"cluster.routing.allocation.enable": "primaries"
}
}'
# 2. Flush
curl -X POST "localhost:9200/_flush"
# 3. Stop ES
sudo systemctl stop elasticsearch
# 4. Update elasticsearch.yml
# - Remove cluster.initial_master_nodes
# - Remove any deprecated settings
# - Configure security settings
# 5. Install ES 8
sudo apt-get install elasticsearch=8.12.0
# 6. Start ES
sudo systemctl start elasticsearch
# On first 8.x node start, note:
# - Auto-generated password for 'elastic' user
# - Enrollment tokens for other nodes
# Check: /var/log/elasticsearch/elasticsearch.log
# 7. Re-enable allocation (now with auth)
curl -X PUT "https://localhost:9200/_cluster/settings" \
-u elastic:YOUR_PASSWORD \
--cacert /etc/elasticsearch/certs/http_ca.crt \
-H 'Content-Type: application/json' -d'
{
"persistent": {
"cluster.routing.allocation.enable": null
}
}'
# 8. Wait for green
curl -X GET "https://localhost:9200/_cluster/health?wait_for_status=green" \
-u elastic:YOUR_PASSWORD \
--cacert /etc/elasticsearch/certs/http_ca.crt
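If you miss the auto-generated credentials in the logs, ES 8 ships CLI tools to regenerate them (paths assume the Debian package layout):

```
# Generate an enrollment token for joining another node
/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node

# Reset the elastic superuser password
/usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
```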
Upgrade Kibana to 8.x
# Stop Kibana
sudo systemctl stop kibana
# Update kibana.yml
# - elasticsearch.hosts with https://
# - elasticsearch.username: kibana_system
# - elasticsearch.password: [generated password]
# - elasticsearch.ssl.certificateAuthorities
# Install Kibana 8
sudo apt-get install kibana=8.12.0
# Reset kibana_system password
curl -X POST "https://localhost:9200/_security/user/kibana_system/_password" \
-u elastic:YOUR_PASSWORD \
--cacert /etc/elasticsearch/certs/http_ca.crt \
-H 'Content-Type: application/json' -d'
{
"password": "your_new_kibana_password"
}'
# Start Kibana
sudo systemctl start kibana
Upgrade Logstash to 8.x
Update your Logstash output configurations for HTTPS and authentication:
output {
elasticsearch {
hosts => ["https://es-node1:9200", "https://es-node2:9200"]
user => "logstash_writer"
password => "your_password"
    ssl_certificate_authorities => ["/etc/logstash/certs/http_ca.crt"]
}
}
Create a dedicated Logstash user:
curl -X POST "https://localhost:9200/_security/user/logstash_writer" \
-u elastic:YOUR_PASSWORD \
--cacert /etc/elasticsearch/certs/http_ca.crt \
-H 'Content-Type: application/json' -d'
{
"password": "logstash_password",
"roles": ["logstash_writer"],
"full_name": "Logstash Writer"
}'
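Note that logstash_writer is a custom role, not a built-in - create it before the user, with the privileges your pipelines need (the index pattern here is an assumption):

```
POST _security/role/logstash_writer
{
  "cluster": ["monitor", "manage_index_templates"],
  "indices": [
    {
      "names": ["logs-*"],
      "privileges": ["create_index", "write", "create"]
    }
  ]
}
```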
Alternative: Zero-Downtime Migration with Cluster Expansion
For production clusters where you can’t afford downtime, use the “expand then contract” method:
The Concept
Instead of in-place upgrades, you:
- Create new ES 8 nodes
- Join them to the existing cluster temporarily
- Migrate data via shard reallocation
- Remove old nodes
A mixed cluster can only span two adjacent major versions, so this covers 6.8 → 7.x in one pass; for 7.x → 8.x you'd run a second round.
Step-by-Step
# 1. Configure new ES 7 nodes to join existing ES 6.8 cluster
# In new node's elasticsearch.yml:
cluster.name: mycluster
discovery.seed_hosts: ["old-node1", "old-node2", "old-node3"]
# 2. Start new nodes - they join the cluster
# Verify
curl -X GET "localhost:9200/_cat/nodes?v"
# Should show both old and new nodes
# 3. Disable rebalancing temporarily
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
"transient": {
"cluster.routing.rebalance.enable": "none"
}
}'
# 4. Set migration rate limits (based on your benchmark)
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
"transient": {
"cluster.routing.allocation.node_concurrent_recoveries": 10,
"indices.recovery.max_bytes_per_sec": "100mb"
}
}'
# 5. Exclude old nodes one by one (starts shard migration)
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
"transient": {
"cluster.routing.allocation.exclude._name": "old-node1"
}
}'
# 6. Wait for shards to migrate off the node
watch -n 5 'curl -s localhost:9200/_cat/shards | grep old-node1 | wc -l'
# Wait until count reaches 0
# 7. Shut down old-node1
# Repeat steps 5-7 for each old node
# 8. Reset cluster settings
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
"transient": {
"cluster.routing.allocation.exclude._name": null,
"cluster.routing.rebalance.enable": null,
"cluster.routing.allocation.node_concurrent_recoveries": null,
"indices.recovery.max_bytes_per_sec": null
}
}'
Index Template Migration
ES 8 uses composable index templates. Migrate your legacy templates:
Legacy Template (6.x/7.x style)
PUT _template/logs_template
{
"index_patterns": ["logs-*"],
"settings": {
"number_of_shards": 3
},
"mappings": {
"properties": {
"@timestamp": { "type": "date" },
"message": { "type": "text" }
}
}
}
Composable Template (8.x style)
# Component template for settings
PUT _component_template/logs_settings
{
"template": {
"settings": {
"number_of_shards": 3
}
}
}
# Component template for mappings
PUT _component_template/logs_mappings
{
"template": {
"mappings": {
"properties": {
"@timestamp": { "type": "date" },
"message": { "type": "text" }
}
}
}
}
# Composable index template
PUT _index_template/logs_template
{
"index_patterns": ["logs-*"],
"composed_of": ["logs_settings", "logs_mappings"],
"priority": 200
}
Legacy templates still work in ES 8 but are deprecated.
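You can sanity-check a migrated template with the simulate API, which returns the settings and mappings a new matching index would receive (the index name is a placeholder):

```
POST _index_template/_simulate_index/logs-2024.01.01
```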
Post-Migration Verification
After completing the migration:
# 1. Verify cluster health
curl -X GET "https://localhost:9200/_cluster/health?pretty" -u elastic:password --cacert ca.crt
# 2. Check all nodes
curl -X GET "https://localhost:9200/_cat/nodes?v" -u elastic:password --cacert ca.crt
# 3. Verify indices
curl -X GET "https://localhost:9200/_cat/indices?v&health=yellow,red" -u elastic:password --cacert ca.crt
# 4. Test searches
curl -X GET "https://localhost:9200/your-index/_search?size=1" -u elastic:password --cacert ca.crt
# 5. Verify Kibana dashboards work
# 6. Verify Logstash is ingesting
curl -X GET "https://localhost:9200/_cat/indices?v&s=index:desc" -u elastic:password --cacert ca.crt | head -10
Rollback Plan
If something goes wrong:
Rollback from 7.x to 6.8
# 1. Stop all 7.x nodes
# 2. Restore 6.8 packages
sudo apt-get install elasticsearch=6.8.23
# 3. Restore elasticsearch.yml from backup
# 4. Restore snapshot if needed
# 5. Start cluster
Rollback from 8.x to 7.17
# 1. Stop all 8.x nodes
# 2. Restore 7.17 packages
sudo apt-get install elasticsearch=7.17.18
# 3. Restore elasticsearch.yml (especially security settings)
# 4. If security was newly enabled in 8, disable it or restore 7.x certs
# 5. Restore snapshot if needed
# 6. Start cluster
Critical: You cannot restore an 8.x snapshot to a 7.x cluster. Always keep 7.x snapshots until you’re confident in the 8.x cluster.
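Restoring from one of the earlier snapshots uses the same repository - close or delete the affected indices first, then:

```
POST _snapshot/migration_backup/pre-8x-snapshot/_restore
{
  "indices": "*",
  "include_global_state": true
}
```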
Timeline Estimate
For a 10-node cluster with 30TB of data:
| Phase | Duration |
|---|---|
| Preparation & Assessment | 2-4 hours |
| Backup | 2-6 hours (depends on data size) |
| Upgrade to 6.8 (rolling) | 2-3 hours |
| Upgrade to 7.17 (rolling) | 3-4 hours |
| Upgrade to 8.x (rolling) | 3-4 hours |
| Kibana/Logstash upgrades | 1-2 hours |
| Verification | 2-3 hours |
| Total | 15-26 hours |
For zero-downtime cluster expansion method, add 4-8 hours for shard migration per major version.
Checklist
## Pre-Migration
- [ ] Document current cluster state
- [ ] Check for 5.x indices (must reindex)
- [ ] Run Upgrade Assistant, fix all critical issues
- [ ] Backup all data (snapshot)
- [ ] Backup all config files
- [ ] Export Kibana saved objects
- [ ] Test upgrade process in non-prod environment
## 6.8 Upgrade
- [ ] Upgrade to latest 6.8.x
- [ ] Verify cluster health
- [ ] Re-run Upgrade Assistant
## 7.x Upgrade
- [ ] Update elasticsearch.yml (discovery settings)
- [ ] Rolling upgrade all nodes
- [ ] Upgrade Kibana
- [ ] Upgrade Logstash
- [ ] Verify cluster health
- [ ] Remove cluster.initial_master_nodes
## 8.x Upgrade
- [ ] Plan security configuration
- [ ] Update elasticsearch.yml (remove deprecated settings)
- [ ] Rolling upgrade all nodes
- [ ] Note auto-generated passwords
- [ ] Update Kibana configuration for HTTPS/auth
- [ ] Upgrade Kibana
- [ ] Update Logstash outputs for HTTPS/auth
- [ ] Upgrade Logstash
- [ ] Create service accounts for integrations
- [ ] Verify all dashboards and pipelines work
## Post-Migration
- [ ] Verify cluster health is green
- [ ] Verify all indices accessible
- [ ] Verify Kibana dashboards work
- [ ] Verify data ingestion working
- [ ] Migrate legacy index templates
- [ ] Update documentation
- [ ] Remove old snapshots (after grace period)
Key Takeaways
- You must upgrade through 7.x - no direct 6→8 path exists
- Reindex 5.x indices before upgrading to 7.x - they won’t be readable
- Security is mandatory in ES 8 - plan for HTTPS and authentication
- Take snapshots before each major upgrade - your rollback lifeline
- Use the Upgrade Assistant - it catches issues you’ll miss
- Test in non-prod first - always
- Rolling upgrades minimize downtime - but require patience
- Update all clients - Kibana, Logstash, Beats, application code
The ELK 6 to 8 migration is a significant undertaking, but with proper planning and methodical execution, it’s entirely manageable. Take your time, verify at each step, and keep those backups handy.
Questions or war stories from your own ELK migrations? Find me on LinkedIn or GitHub.