elasticdump(1)
NAME
elasticdump - Import and export tools for elasticsearch
SYNOPSIS
elasticdump --input SOURCE --output DESTINATION [OPTIONS]
DESCRIPTION
- --input
- Source location (required)
- --output
- Destination location (required)
- --limit
- How many objects to move in bulk per operation (default: 100)
- --debug
- Display the elasticsearch commands being used (default: false)
- --type
- What are we exporting? (default: data, options: [data, mapping])
- --delete
- Delete documents one-by-one from the input as they are moved. Will not delete the source index (default: false). See the move example under EXAMPLES below
- --searchBody
- Perform a partial extract based on search results (when ES is the input; default: '{"query": { "match_all": {} } }')
- --all
- Load/store documents from ALL indexes (default: false)
- --bulk
- Leverage elasticsearch Bulk API when writing documents (default: false)
- --ignore-errors
- Continue the read/write loop when a write error occurs (default: false)
- --scrollTime
- How long the nodes will keep the requested search context alive between scroll requests (default: 10m)
- --maxSockets
- How many simultaneous HTTP requests can we make? (default: 5 [node <= v0.10.x] / Infinity [node >= v0.11.x])
- --bulk-use-output-index-name
- Force use of the destination index name (from the output URL) as the destination while bulk writing to ES. This allows the Bulk API to be used when copying data inside the same elasticsearch instance (default: false). See the same-instance copy example under EXAMPLES below
- --timeout
- Integer containing the number of milliseconds to wait for a request to respond before aborting it. Passed directly to the request library. If used during bulk writing, a timeout will result in the entire batch not being written. Mostly useful when you don't care too much about losing some data when importing, but would rather have speed.
- --skip
- Integer containing the number of rows you wish to skip ahead from the input transport. When importing a large index, things can go wrong, be it connectivity, crashes, someone forgetting to `screen`, etc. This allows you to restart the dump from the last known line written (as logged by the `offset` in the output). Please be advised that since no sorting is specified when the dump is initially created, there's no real way to guarantee that the skipped rows have already been written/parsed. This is more of an option for when you want to get as much data as possible into the index without concern for losing some rows in the process, similar to the `timeout` option. See the resume example under EXAMPLES below
- --inputTransport
- Provide a custom js file to use as the input transport
- --outputTransport
- Provide a custom js file to use as the output transport
- --help
- This page
EXAMPLES
Copy an index from production to staging with mappings:
- elasticdump \
--input=http://production.es.com:9200/my_index \
--output=http://staging.es.com:9200/my_index \
--type=mapping
- elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=http://staging.es.com:9200/my_index \
  --type=data
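When copying a large index, the batch size and scroll window can be tuned with --limit and --scrollTime; the values below are only illustrative, not recommendations:
- elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=http://staging.es.com:9200/my_index \
  --type=data \
  --limit=1000 \
  --scrollTime=30m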
Backup index data to a file:
- elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=/data/my_index_mapping.json \
  --type=mapping
- elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=/data/my_index.json \
  --type=data
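To move documents rather than copy them, the same dump can be combined with --delete, which removes each document from the source index as it is written (the index itself is kept); the path and index name here are illustrative:
- elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=/data/my_index.json \
  --type=data \
  --delete=true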
Backup an index to a gzipped file using stdout:
- elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=$ \
  | gzip > /data/my_index.json.gz
Backup ALL indices, then use the Bulk API to populate another elasticsearch cluster:
- elasticdump \
  --all=true \
  --input=http://production-a.es.com:9200/ \
  --output=/data/production.json
- elasticdump \
  --bulk=true \
  --input=/data/production.json \
  --output=http://production-b.es.com:9200/
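A sketch of copying data between two indices inside the same elasticsearch instance using the Bulk API; the host and the index names my_index and my_index_copy are assumed for illustration:
- elasticdump \
  --input=http://localhost:9200/my_index \
  --output=http://localhost:9200/my_index_copy \
  --bulk=true \
  --bulk-use-output-index-name=true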
Backup the results of a query to a file:
- elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=query.json \
  --searchBody '{"query":{"term":{"username": "admin"}}}'
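If a long bulk import dies partway through, it can be restarted from the last offset logged in the output using --skip, optionally with --ignore-errors and --timeout to favor speed over completeness; the offset and timeout values below are purely illustrative:
- elasticdump \
  --bulk=true \
  --input=/data/production.json \
  --output=http://production-b.es.com:9200/ \
  --skip=100000 \
  --ignore-errors=true \
  --timeout=5000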