Moving the data from the transaction log into the Lucene index removed the need to manage another copy of the transaction log. It also allows Elasticsearch to free up disk space by removing any unnecessary generation files.
This post provides an overview of the Elasticsearch Flush API, which lets us flush one or more indices or data streams.
The snippet below shows the syntax of using the Elasticsearch Flush API.
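A minimal sketch of the request syntax, where the `<target>` placeholder stands for the index or data stream you want to flush:

```
POST /<target>/_flush
```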
The target parameter can be the name of an index, data stream, or index alias. You can also specify multiple indices or data streams as a comma-separated list. Keep in mind that Elasticsearch flushes the transaction log only for the specified targets.
If you wish to flush all the indices and data streams in the cluster, you can omit the target value, as shown in the syntax below:
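With no target, the request reduces to:

```
POST /_flush
```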
You can also use an asterisk (*) or the _all value as the target.
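For instance, both of the following requests flush all indices and data streams in the cluster:

```
POST /*/_flush
POST /_all/_flush
```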
The API supports the following parameters, allowing you to modify the request and response behavior.
- allow_no_indices – if false, the request returns an error when any wildcard expression, index alias, or _all value in the target matches only missing or closed indices.
- expand_wildcards – controls the type of indices that wildcard expressions in the target can match, such as open, closed, hidden, none, or all.
- force – forces a flush operation even if there are no changes to commit to the Lucene index.
- ignore_unavailable – if true, missing or closed targets are ignored instead of returning an error.
- wait_if_ongoing – if true, the request blocks until any other running flush operation completes; if false, the request fails when another flush is already in progress.
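These parameters are passed as query parameters on the request URL. As an illustration (here `my-index` is a hypothetical index name):

```
POST /my-index/_flush?force=true&wait_if_ongoing=true
```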
Example 1 – Elasticsearch Flush a Specific Index
The example below shows how to use the Elasticsearch Flush API to flush a target index.
curl -XPOST "http://localhost:9200/disney/_flush" -H "kbn-xsrf: reporting"
The request above flushes the index named 'disney'. The resulting output is as shown:
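A successful flush returns a `_shards` summary. The exact counts depend on the shard configuration of your index, so the values below are only illustrative:

```
{
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  }
}
```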
Example 2 – Elasticsearch Flush Multiple Indices and Data Streams
To flush multiple indices and data streams, we can specify them as a comma-separated list as shown:
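For example, combining the 'disney' index with a hypothetical data stream named 'my-data-stream':

```
curl -XPOST "http://localhost:9200/disney,my-data-stream/_flush" -H "kbn-xsrf: reporting"
```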
The resulting output:
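As with a single index, the response is a `_shards` summary aggregated across all flushed targets; the counts below are illustrative only:

```
{
  "_shards" : {
    "total" : 4,
    "successful" : 2,
    "failed" : 0
  }
}
```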
Example 3 – Elasticsearch Flush All Indices and Data Streams in the Cluster
To flush all the data streams and indices in the cluster, we can run the request as shown:
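Using the same curl style as the earlier example, this amounts to posting to the bare `_flush` endpoint:

```
curl -XPOST "http://localhost:9200/_flush" -H "kbn-xsrf: reporting"
```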
The resulting output:
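Again, the response is a `_shards` summary covering every index and data stream in the cluster; the values vary with your setup:

```
{
  "_shards" : {
    "total" : 10,
    "successful" : 5,
    "failed" : 0
  }
}
```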
In this post, you learned how to use the Elasticsearch Flush API to flush the transaction log from an index or data stream to the Lucene index.