Elasticsearch Flush

Elasticsearch provides a flush API that triggers a flush operation on one or more indices or data streams. Flushing an index or data stream ensures that any data stored in the transaction log is permanently written to the Lucene index.

Moving the data from the transaction log into the Lucene index removes the need to keep an additional copy of it in the transaction log. It also allows Elasticsearch to free up disk space by removing old transaction log generation files that are no longer needed.

This post provides an overview of the Elasticsearch Flush API, which lets us flush one or more indices or data streams.

API Syntax

The snippet below shows the syntax of the Elasticsearch Flush API.

POST /<target>/_flush

The target parameter can be the name of an index, data stream, or index alias. You can also specify multiple indices or data streams as a comma-separated list. Keep in mind that Elasticsearch flushes the transaction log of every specified target.

If you wish to flush all the indices and data streams in the cluster, you can omit the target value, as shown in the syntax below:

POST /_flush

You can also use an asterisk (*) or the _all value as the target.
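For instance, assuming a cluster listening on localhost:9200 (as in the examples later in this post), the request below uses the _all value, which is equivalent to omitting the target entirely:

curl -XPOST "http://localhost:9200/_all/_flush" -H "kbn-xsrf: reporting"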

Query Parameters

The API supports the following query parameters, which modify the request and response behavior (a sample request combining some of them follows the list).

  1. allow_no_indices – if set to false, the request returns an error when a wildcard expression, index alias, or _all value targets only missing or closed indices.
  2. expand_wildcards – controls the type of targets that wildcard patterns can match, such as open, closed, hidden, none, or all.
  3. force – forces a flush even if there are no changes to commit to the Lucene index.
  4. ignore_unavailable – if set to true, missing or closed indices in the target are ignored.
  5. wait_if_ongoing – if set to true, the flush operation blocks until any flush already running on the shard completes; if set to false, the request returns an error when another flush is in progress.
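As an illustration, the request below combines the force and wait_if_ongoing parameters. It is a minimal sketch that assumes the same local cluster on localhost:9200 and the 'disney' index used in the examples below:

curl -XPOST "http://localhost:9200/disney/_flush?force=true&wait_if_ongoing=true" -H "kbn-xsrf: reporting"

This forces a flush even when there is nothing new to commit and waits for any flush already running on the shard to finish first.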

Example 1 – Elasticsearch Flush a Specific Index

The example below shows how to use the Elasticsearch Flush API to flush a target index.

curl -XPOST "http://localhost:9200/disney/_flush" -H "kbn-xsrf: reporting"

The request above flushes the index with the name 'disney'. The resulting output is as shown:

{
  "_shards": {
    "total": 2,
    "successful": 2,
    "failed": 0
  }
}
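If you want to confirm that flushes are taking place, one option (not part of the Flush API itself) is the index stats API with the flush metric. The request below is a minimal sketch that assumes the same 'disney' index:

curl -XGET "http://localhost:9200/disney/_stats/flush" -H "kbn-xsrf: reporting"

The response includes counters such as the total number of flushes performed on the index and the time spent flushing.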

Example 2 – Elasticsearch Flush Multiple Indices and Data Streams

To flush multiple indices and data streams, we can specify them as a comma-separated list, as shown:

curl -XPOST "http://localhost:9200/disney,disney_plus/_flush" -H "kbn-xsrf: reporting"

The resulting output:

{
  "_shards": {
    "total": 4,
    "successful": 4,
    "failed": 0
  }
}

Example 3 – Elasticsearch Flush All Indices and Data Streams in the Cluster

To flush all the data streams and indices in the cluster, we can run the request as shown:

curl -XPOST "http://localhost:9200/_flush" -H "kbn-xsrf: reporting"

The resulting output:

{
  "_shards": {
    "total": 12,
    "successful": 12,
    "failed": 0
  }
}

Conclusion

In this post, you learned how to use the Elasticsearch Flush API to flush the transaction log of an index or data stream into the Lucene index.
