
Object Store

JetStream, the persistence layer of NATS, not only enables the higher qualities of service and features associated with 'streaming', but also enables functionality not typically found in messaging systems.

One such feature is the Object store functionality, which allows client applications to create buckets (corresponding to streams) that can store a set of files. Files are stored and transmitted in chunks, allowing files of arbitrary size to be transferred safely over the NATS infrastructure.

Note: Object store is not a distributed storage system. All files in a bucket will need to fit on the target file system.


Basic Capabilities

The Object Store implements a chunking mechanism, allowing you, for example, to store and retrieve files (i.e. objects) of any size by associating them with a path or file name as the key.

  • add Add a bucket to hold the files

  • put Add a file to the bucket

  • get Retrieve the file and store it to a designated location
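These basic operations map directly onto the nats CLI. A minimal sketch, assuming a local JetStream-enabled server; the bucket and file names are illustrative:

```shell
# Create a bucket to hold the files (bucket name is illustrative)
nats object add myobjbucket

# Add a file to the bucket
nats object put myobjbucket ./movie.mkv

# Retrieve the file; it is reassembled from its chunks on download
nats object get myobjbucket movie.mkv
```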

Advanced Capabilities

  • watch Subscribe to changes in the bucket. Will receive notifications on successful put and del operations.

  • del Delete a file
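With the same hypothetical bucket, the advanced operations sketch out as follows (assuming a running JetStream-enabled server):

```shell
# Watch the bucket; prints a notification on every successful put and del
nats object watch myobjbucket

# Delete a file from the bucket
nats object del myobjbucket movie.mkv
```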


    Source and Mirror Streams

    When a stream is configured with a source or mirror, it will automatically and asynchronously replicate messages from the origin stream.

    source and mirror links are designed to be robust and will recover from a loss of connection. They are suitable for geographic distribution over high-latency and unreliable connections. For example, even a leaf node starting and connecting intermittently every few days will still receive or send messages over the source/mirror link. Another use case is connecting streams across accounts.

    There are several options available when declaring the configuration.

    • Name - Name of the origin stream to source messages from.

    • StartSeq - An optional start sequence of the origin stream to start mirroring from.

    • StartTime - An optional message start time to start mirroring from. Any messages that are equal to or greater than the start time will be included.

    • FilterSubject - An optional filter subject which will include only messages that match the subject, typically including a wildcard. Note, this cannot be used with SubjectTransforms.

    • SubjectTransforms - An optional set of subject transforms (see https://docs.nats.io/nats-concepts/jetstream/streams#configuration) to apply when sourcing messages from the origin stream. Note, in this context, the Source will act as a filter on the origin stream and the Destination can optionally be provided to apply a transform. Since multiple subject transforms can be used, disjoint subjects can be sourced from the origin stream while maintaining the order of the messages. Note, this cannot be used with FilterSubject.

    • Domain - An optional JetStream domain of where the origin stream exists. This is commonly used in a hub cluster and leafnode topology.

    The stream using a source or mirror configuration can have its own retention policy, replication, and storage type.
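With the nats CLI, these configurations are declared when the receiving stream is created. A sketch, assuming an existing origin stream named ORDERS (stream names are illustrative):

```shell
# Create a mirror of the ORDERS stream; clients cannot publish to the mirror
nats stream add ORDERS_MIRROR --mirror ORDERS

# Create a stream that sources from an origin stream
# (local clients can still publish to it directly)
nats stream add ORDERS_AGGREGATE --source ORDERS
```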

    Note:
    • Changes to the stream using source or mirror, e.g. deleting messages or publishing, do not reflect back on the origin stream from which the data was received.

    • Deletes in the origin stream are NOT replicated through a source or mirror agreement.

    Note:

    Sources are a generalization of the Mirror and allow for sourcing data from one or more streams concurrently. If you require the target stream to act as a read-only replica:

    • Configure the stream without listen subjects, or

    • Temporarily disable the listen subjects through client authorizations.

    General behavior

    • All configurations are made on the receiving side. The stream from which data is sourced and mirrored does not need to be configured. No cleanup is required on the origin side if the receiver disappears.

    • A stream can be the origin (source) for multiple streams. This is useful for geographic distribution or for designing "fan out" topologies where data needs to be distributed reliably to a large number (up to millions) of client connections.

    • Leaf nodes and leaf node domains are explicitly supported through the API prefix.

    Source specific

    A stream defining Sources is a generalized replication mechanism and allows for sourcing data from one or more streams concurrently. A stream with sources can still act as a regular stream, allowing direct writes/publishes by local clients. Essentially, the source streams and local client writes are aggregated into a single interleaved stream. Combined with subject transformation and filtering, sourcing allows you to design sophisticated data distribution architectures.

    Note:

    Sourcing messages does not retain their original sequence numbers, but it does retain the order of messages within each origin stream. Between streams sourced into the same target, the relative order of messages is undefined.

    Mirror specific

    A mirror can source its messages from exactly one stream, and clients cannot directly write to the mirror. Although messages cannot be published to a mirror directly by clients, messages can be deleted on demand (beyond the retention policy), and consumers have all the capabilities available on regular streams.

    Note:
    • Mirrored messages retain the sequence numbers and timestamps of the origin stream.

    • Mirrors can be used for (geographic) load distribution with the MirrorDirect stream attribute.

    Expected behavior in edge conditions

    • Source and mirror contracts are designed with one-way (geographic) data replication in mind. Neither configuration provides a full synchronization between streams, which would include deletes or replication of other stream attributes.

    • The content of the stream from which a source or mirror is drawn needs to be reasonably stable. Quickly deleting messages after publishing them may result in inconsistent replication due to the asynchronous nature of the replication process.

    • Sources and mirrors try to be efficient in replicating messages and are lenient towards the source/mirror origin being unreachable (even for extended periods of time), e.g. when using leaf nodes, which are connected intermittently. For the sake of efficiency, the recovery interval in case of a disconnect is 10-20s.

    WorkQueue retention

    Source and mirror work with an origin stream with workqueue retention, but only in a limited context. The source/mirror will act as a consumer, removing messages from the origin stream.

    The implementation is not resilient when connecting over intermittent leaf node connections, though. Within a cluster where the target stream (with the source/mirror agreement) resides, it will generally work well.

    Warning:

    Source and mirror for workqueue based streams is only partially supported. It is not resilient against connection loss over leaf nodes.

    The consumer pulling messages from a remote stream is not durable, and other clients may be able to consume and remove messages from the workqueue while the leaf connection is down.

    Warning:

    If you try to create additional (conflicting) consumers on the origin workqueue stream, the behavior becomes undefined. A workqueue allows only one consumer per subject. If the source/mirror connection is active, local clients trying to create additional consumers will fail. Conversely, a source/mirror cannot be created when there is already a local consumer for the same subjects.

    Interest-based retention

    Warning:

    Source and mirror for interest-based streams is not supported. JetStream does not forbid this configuration, but the behavior is undefined and may change in the future.

    Mirror and source agreements do not create a visible consumer in the origin stream.

    JetStream

    NATS has a built-in persistence engine called JetStream which enables messages to be stored and replayed at a later time. Unlike NATS Core which requires you to have an active subscription to process messages as they happen, JetStream allows the NATS server to capture messages and replay them to consumers as needed. This functionality enables a different quality of service for your NATS messages, and enables fault-tolerant and high-availability configurations.

    JetStream is built into nats-server. If you have a cluster of JetStream-enabled servers you can enable data replication and thus guard against failures and service disruptions.

    JetStream was created to address the problems identified with streaming technology today - complexity, fragility, and a lack of scalability. Some technologies address these better than others, but no current streaming technology is truly multi-tenant, horizontally scalable, or supports multiple deployment models. No other technology that we are aware of can scale from edge to cloud using the same security context while having complete deployment observability for operations.

    Additional capabilities enabled by JetStream

    The JetStream persistence layer enables additional use cases typically not found in messaging systems. Being built on top of JetStream, they inherit its core capabilities: replication, security, routing, limits, and mirroring.

    • Key Value Store: a map (associative array) with atomic operations

    • Object Store: a file transfer, replication and storage API. Uses chunked transfers for scalability.

    Key/Value and file transfer are capabilities commonly found in in-memory databases or deployment tools. While NATS does not intend to compete with the feature set of such tools, our goal is to provide the developer with a reasonably complete set of data storage and replication features for use cases like microservices, edge deployments, and server management.

    Configuration

    To configure a nats-server with JetStream refer to Configuring JetStream and JetStream Clustering.

    Examples

    For runnable JetStream code examples, refer to NATS by Example.

    Goals

    JetStream was developed with the following goals in mind:

    • The system must be easy to configure and operate and be observable.

    • The system must be secure and operate well with NATS 2.0 security models.

    • The system must scale horizontally and be applicable to a high ingestion rate.

    • The system must support multiple use cases.

    • The system must self-heal.

    • The system must allow NATS messages to be part of a stream as desired.

    • The system must display payload agnostic behavior.

    • The system must not have third party dependencies.

    JetStream capabilities

    Streaming: temporal decoupling between the publishers and subscribers

    One of the tenets of basic publish/subscribe messaging is that there is a required temporal coupling between the publishers and the subscribers: subscribers only receive the messages that are published when they are actively connected to the messaging system (i.e. they do not receive messages that are published while they are not subscribing or not running or disconnected). The traditional way for messaging systems to provide temporal decoupling of the publishers and subscribers is through the 'durable subscriber' functionality or sometimes through 'queues', but neither one is perfect:

    • durable subscribers need to be created before the messages get published

    • queues are meant for workload distribution and consumption, not to be used as a mechanism for message replay.

    However, in many use cases, you do not need to 'consume exactly once' functionality but rather the ability to replay messages on demand, as many times as you want. This need has led to the popularity of some 'streaming' messaging platforms.

    JetStream provides both the ability to consume messages as they are published (i.e. 'queueing') as well as the ability to replay messages on demand (i.e. 'streaming'). See below.

    Replay policies

    JetStream consumers support multiple replay policies, depending on whether the consuming application wants to receive either:

    • all of the messages currently stored in the stream, meaning a complete 'replay' and you can select the 'replay policy' (i.e. the speed of the replay) to be either:

      • instant (meaning the messages are delivered to the consumer as fast as it can take them).

      • original (meaning the messages are delivered to the consumer at the rate they were published into the stream, which can be very useful for example for staging production traffic).

    • the last message stored in the stream, or the last message for each subject (as streams can capture more than one subject).

    • starting from a specific sequence number.

    • starting from a specific start time.
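With the nats CLI, the replay policy is selected with the --replay flag when creating a consumer (the stream and consumer names below are illustrative):

```shell
# Deliver all stored messages as fast as the consumer can take them
nats consumer add my_stream replayer_instant --pull --deliver all --ack explicit --replay instant

# Deliver all stored messages at their original publication rate
nats consumer add my_stream replayer_original --pull --deliver all --ack explicit --replay original
```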

    Retention policies and limits

    JetStream enables new functionalities and higher qualities of service on top of the base 'Core NATS' functionality. However, practically speaking, streams can't always just keep growing 'forever' and therefore JetStream supports multiple retention policies as well as the ability to impose size limits on streams.

    Limits

    You can impose the following limits on a stream:

    • Maximum message age.

    • Maximum total stream size (in bytes).

    • Maximum number of messages in the stream.

    • Maximum individual message size.

    You can also set limits on the number of consumers that can be defined for the stream at any given point in time.

    You must also select a discard policy which specifies what should happen once the stream has reached one of its limits and a new message is published:

    • discard old means that the stream will automatically delete the oldest message in the stream to make room for the new messages.

    • discard new means that the new message is discarded (and the JetStream publish call returns an error indicating that a limit was reached).
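Using the nats CLI, these limits and the discard policy are set at stream creation time. A sketch with illustrative stream name and values:

```shell
# A capped stream: at most 10,000 messages, 64MB, 24h retention;
# with 'discard old', the oldest messages are deleted to make room for new ones
nats stream add CAPPED --subjects "capped.>" --storage file --retention limits \
  --max-msgs 10000 --max-bytes 64MB --max-age 24h --max-msg-size=-1 --discard old
```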

    Retention policy

    You can choose what kind of retention you want for each stream:

    • limits (the default) is to provide a replay of messages in the stream.

    • work queue (the stream is used as a shared queue and messages are removed from it as they are consumed) is to provide the exactly-once consumption of messages in the stream.

    • interest (messages are kept in the stream for as long as there are consumers that haven't delivered the message yet) is a variation of work queue that only retains messages if there is interest (consumers currently defined on the stream) for the message's subject.

    Note that regardless of the retention policy selected, the limits (and the discard policy) always apply.
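The retention policy is likewise chosen at stream creation time. A sketch with the nats CLI (the stream name is illustrative; --defaults, which accepts the defaults for the remaining attributes, requires a recent nats CLI):

```shell
# 'limits' is the default; 'work' creates a work-queue stream,
# 'interest' an interest-based one
nats stream add JOBS --subjects "jobs.>" --retention work --defaults
```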

    Subject mapping transformations

    JetStream also enables the ability to apply subject mapping transformations to messages as they are ingested into a stream.

    Persistent and Consistent distributed storage

    You can choose the durability as well as the resilience of the message storage according to your needs.

    • Memory storage.

    • File storage.

    • Replication (1 (none), 2, 3) between NATS servers for Fault Tolerance.

    JetStream uses a NATS-optimized RAFT distributed quorum algorithm to distribute the persistence service between NATS servers in a cluster while maintaining immediate consistency (as opposed to eventual consistency) even in the face of failures.

    For writes (publications to a stream), the formal consistency model of NATS JetStream is Linearizable. On the read side (listening to or replaying messages from streams) the formal models don't really apply because JetStream does not support atomic batching of multiple operations together (so the only kind of 'transaction' is the persisting, replicating and voting of a single operation on the stream), but in essence, JetStream is serializable because messages are added to a stream in one global order (which you can control using compare and publish).

    Do note that while we do guarantee immediate consistency when it comes to monotonic writes and monotonic reads, we don't guarantee read-your-writes at this time, as reads through direct get requests may be served by followers or mirrors. More consistent results can be achieved by sending get requests to the stream leader.

    JetStream can also provide encryption at rest of the messages being stored.

    In JetStream the configuration for storing messages is defined separately from how they are consumed. Storage is defined in a Stream and consuming messages is defined by multiple Consumers.

    Stream replication factor

    A stream's replication factor (R, often referred to as the number of 'Replicas') determines how many places it is stored, allowing you to tune the balance between risk, resource usage, and performance. A stream that is easily rebuilt or temporary might be memory-based with R=1, while a stream that can tolerate some downtime might be file-based with R=1.

    A typical configuration, able to operate through common outages while balancing performance, would be a file-based stream with R=3. A highly resilient, but less performant and more expensive, configuration is R=5, the replication factor limit.

    Rather than defaulting to the maximum, we suggest selecting the best option based on the use case behind the stream. This optimizes resource usage to create a more resilient system at scale.

    • Replicas=1 - Cannot operate during an outage of the server servicing the stream. Highly performant.

    • Replicas=2 - No significant benefit at this time. We recommend using Replicas=3 instead.

    • Replicas=3 - Can tolerate the loss of one server servicing the stream. An ideal balance between risk and performance.

    • Replicas=4 - No significant benefit over Replicas=3 except marginally in a 5 node cluster.

    • Replicas=5 - Can tolerate the simultaneous loss of two servers servicing the stream. Mitigates risk at the expense of performance.

    Mirroring and Sourcing between streams

    JetStream also allows server administrators to easily mirror streams, for example between different JetStream domains in order to offer disaster recovery. You can also define a stream that 'sources' from one or more other streams.

    Syncing data to disk

    JetStream’s file-based streams persist messages to disk. However, while JetStream does flush file writes to the OS synchronously, under the default configuration it does not immediately fsync data to disk. The server uses a configurable sync_interval option, with a default value of 2 minutes, which controls how often the server will fsync its data. The data will be fsync-ed no later than this interval. This has important consequences for durability with respect to OS failures (meaning ungraceful exit of the Operating System such as a power outage, and not just ungraceful exit or killing of the nats-server process itself):

    In a non-replicated setup, an OS failure may result in data loss. A client might publish a message and receive an acknowledgment, but the data may not yet be safely stored to disk. As a result, after an OS failure recovery, a server may have lost recently acknowledged messages.

    In a replicated setup, a published message is acknowledged after it has been successfully replicated to at least a quorum of servers. However, replication alone is not enough to guarantee the strongest level of durability against multiple systemic failures.

    • If multiple servers fail simultaneously, all due to an OS failure, and before their data has been fsync-ed, the cluster may fail to recover the most recently acknowledged messages.

    • If a failed server lost data locally due to an OS failure, although extremely rare, there are some combinations of events where it may rejoin the cluster and form a new majority with nodes that have never received or persisted a given message. The cluster may then proceed with incomplete data causing acknowledged messages to be lost.

    Setting a lower sync_interval increases the frequency of disk writes, and reduces the window for potential data loss, but at the expense of performance. Additionally, setting sync_interval: always will make sure servers fsync after every message before it is acknowledged. This setting, combined with replication in different data centers or availability zones, provides the strongest durability guarantees but at the slowest performance.

    The default settings have been chosen to balance performance and risk of data loss in what we consider to be a typical production deployment scenario across multiple availability zones.

    For example, consider a stream with 3 replicas deployed across three separate availability zones. For the stream state to diverge across nodes would require that:

    • One of the 3 servers is already offline, isolated or partitioned.

    • A second server’s OS needs to fail such that it loses writes of messages that were only available on 2 out of 3 nodes due to them not being fsync-ed.

    • The stream leader that’s part of the above 2 out of 3 nodes needs to go down or become isolated/partitioned.

    • The first server of the original partition that didn’t receive the writes recovers from the partition.

    • The OS-failed server now returns and comes in contact with the first server but not with the previous stream leader.

    In the end, 2 out of 3 nodes will be available, the previous stream leader with the writes will be unavailable, one server will have lost some writes due to the OS failure, and one server will have never seen these writes due to the earlier partition. The last two servers could then form a majority and accept new writes, essentially losing some of the former writes.

    Importantly this is a failure condition where stream state could diverge, but in a system that is deployed across multiple availability zones, it would require multiple faults to align precisely in the right way.

    A potential mitigation to a failure of this kind is not automatically bringing back a server process that was OS-failed until it is known that a majority of the remaining servers have received the new writes, or by peer-removing the crashed server and admitting it as a new and wiped peer and allowing it to recover over the network from existing healthy nodes (although this could be expensive depending on the amount of data involved).

    For use cases where minimizing loss is an absolute priority, sync_interval: always can of course still be configured, but note that this will have a server-wide performance impact that may affect throughput or latencies. For production environments, operators should evaluate whether the default is correct for their use case, target environment, costs, and performance requirements.

    Alternatively, a hybrid approach can be used where existing clusters still function under their default sync_interval settings but a new cluster gets added that’s configured with sync_interval: always and utilizes server tags. The placement of a stream can then be specified to have the stream store its data on this higher durability cluster through the use of placement tags.

    Create a replicated stream that’s specifically placed in the cluster using sync_interval: always, to ensure the strongest durability only for stream writes that require this level of durability.
    # Configure a cluster that's dedicated to always sync writes.
    server_tags: ["sync:always"]

    jetstream {
        sync_interval: always
    }

    nats stream add --replicas 3 --tag sync:always

    De-coupled flow control

    JetStream provides decoupled flow control over streams. The flow control is not 'end to end', where the publisher(s) would be limited to publishing no faster than the slowest of all the consumers (i.e. the lowest common denominator) can receive; rather, it happens individually between each client application (publisher or consumer) and the NATS server.

    When using the JetStream publish calls to publish to streams there is an acknowledgment mechanism between the publisher and the NATS server, and you have the choice of making synchronous or asynchronous (i.e. 'batched') JetStream publish calls.

    On the subscriber side, the sending of messages from the NATS server to the client applications receiving or consuming messages from streams is also flow controlled.

    Exactly once semantics

    Because publications to streams using the JetStream publish calls are acknowledged by the server, the base quality of service offered by streams is 'at least once': while delivery is reliable and normally duplicate free, some specific failure scenarios could result in a publishing application believing (wrongly) that a message was not published successfully and therefore publishing it again, and other failure scenarios could result in a client application's consumption acknowledgment getting lost and therefore in the message being re-sent to the consumer by the server. Those failure scenarios, while rare and even difficult to reproduce, do exist and can result in perceived 'message duplication' at the application level.

    Therefore, JetStream also offers an 'exactly once' quality of service. For the publishing side, it relies on the publishing application attaching a unique message or publication ID in a message header and on the server keeping track of those IDs for a configurable rolling period of time in order to detect the publisher publishing the same message twice. For the subscribers a double acknowledgment mechanism is used to avoid a message being erroneously re-sent to a subscriber by the server after some kinds of failures.
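On the publishing side, this can be sketched with the nats CLI; the Nats-Msg-Id header carries the unique publication ID (the subject and ID values are illustrative, and a stream capturing the subject is assumed to exist):

```shell
# First publish is stored in the stream
nats pub ORDERS.received --header "Nats-Msg-Id:o-12345" "new order"

# Publishing again with the same ID inside the duplicate window
# is acknowledged but discarded as a duplicate
nats pub ORDERS.received --header "Nats-Msg-Id:o-12345" "new order"
```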

    Consumers

    JetStream consumers are 'views' on a stream. They are subscribed to (or pulled from) by client applications to receive copies of (or to consume, if the stream is set as a working queue) the messages stored in the stream.

    Fast push consumers

    Client applications can choose to use fast un-acknowledged push (ordered) consumers to receive messages as fast as possible (for the selected replay policy) on a specified delivery subject or to an inbox. Those consumers are meant to be used to 'replay' rather than 'consume' the messages in a stream.

    Horizontally scalable pull consumers with batching

    Client applications can also use and share pull consumers that are demand-driven, support batching and must explicitly acknowledge message reception and processing which means that they can be used to consume (i.e. use the stream as a distributed queue) as well as process the messages in a stream.

    Pull consumers can be, and are meant to be, shared between applications (just like queue groups) in order to provide easy and transparent horizontal scalability of the processing or consumption of messages in a stream, without having (for example) to define partitions or worry about fault tolerance.

    Note: using pull consumers doesn't mean that you can't get updates (new messages published into the stream) 'pushed' in real-time to your application, as you can pass a (reasonable) timeout to the consumer's Fetch call and call it in a loop.
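That fetch loop can be sketched with the nats CLI, assuming a stream ORDERS with a pull consumer named NEW (both names are illustrative):

```shell
# Fetch up to 10 messages at a time; run in a loop so new messages
# published into the stream are picked up as they arrive
while true; do
  nats consumer next ORDERS NEW --count 10
done
```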

    Consumer acknowledgments

    While you can decide to use un-acknowledged consumers, trading quality of service for the fastest possible delivery of messages, most processing is not idempotent and requires higher qualities of service (such as the ability to automatically recover from various failure scenarios that could result in some messages not being processed or being processed more than once), so you will want to use acknowledged consumers. JetStream supports more than one kind of acknowledgment:

    • Some consumers support acknowledging all the messages up to the sequence number of the message being acknowledged. Others provide the highest quality of service but require acknowledging the reception and processing of each message explicitly, along with a maximum amount of time the server will wait for an acknowledgment for a specific message before re-delivering it (to another process attached to the consumer).

    • You can also send back negative acknowledgements.

    • You can even send in progress acknowledgments (to indicate that you are still processing the message in question and need more time before acking or nacking it).

    Key Value Store

    The JetStream persistence layer enables the Key Value store: the ability to store, retrieve and delete value messages associated with a key into a bucket.
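With the nats CLI, this sketches out as follows (the bucket and key names are illustrative; a JetStream-enabled server is assumed):

```shell
# Create a bucket
nats kv add profiles

# Put a value under a key, then retrieve it
nats kv put profiles sue.color blue
nats kv get profiles sue.color

# Delete the key
nats kv del profiles sue.color
```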

    Watch and History

    You can subscribe to changes in a Key Value bucket, or in an individual key, with watch, and optionally retrieve a history of the values (and deletions) that have happened on a particular key.
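A sketch with the nats CLI (bucket and key names are illustrative; the history depth must be chosen when the bucket is created):

```shell
# Create a bucket that keeps the last 5 values per key
nats kv add profiles --history 5

# Watch the whole bucket, or a single key, for changes
nats kv watch profiles
nats kv watch profiles sue.color

# Retrieve the stored history of a key
nats kv history profiles sue.color
```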

    Atomic updates and locking

    The Key Value store supports atomic create and update operations. This enables pessimistic locks (by creating a key and holding on to it) and optimistic locks (using CAS - compare and set).
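With the nats CLI, the two locking patterns sketch out as follows (the bucket, key, values, and the expected revision number are all illustrative):

```shell
# Pessimistic lock: create succeeds only if the key does not already exist
nats kv create profiles lock.owner worker-1

# Optimistic lock (CAS): update succeeds only if the key's
# latest revision matches the one given
nats kv update profiles lock.owner worker-2 1
```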

    Object Store

    The Object Store is similar to the Key Value Store, with the key replaced by a file name and the value designed to store arbitrarily large objects (e.g. files, even very large ones) rather than 'values' that are message-sized (i.e. limited to 1MB by default). This is achieved by chunking messages.

    Legacy

    Note that JetStream completely replaces the legacy NATS streaming layer (STAN).

    Example

    Consider this architecture

    While it is an incomplete architecture, it does show a number of key points:

    • Many related subjects are stored in a Stream

    • Consumers can have different modes of operation and receive just subsets of the messages

    • Multiple acknowledgement modes are supported


  • A new order arrives on ORDERS.received, gets sent to the NEW Consumer who, on success, will create a new message on ORDERS.processed. The ORDERS.processed message again enters the Stream where a DISPATCH Consumer receives it and once processed it will create an ORDERS.completed message which will again enter the Stream. These operations are all pull based meaning they are work queues and can scale horizontally. All require acknowledged delivery ensuring no order is missed.

    All messages are delivered to a MONITOR Consumer without any acknowledgement and using Pub/Sub semantics - they are pushed to the monitor.

    As messages are acknowledged to the NEW and DISPATCH Consumers, a percentage of them are Sampled and messages indicating redelivery counts, ack delays and more, are delivered to the monitoring system.

    Example Configuration

    Additional documentation introduces the nats utility and how you can use it to create, monitor, and manage streams and consumers, but for completeness and reference this is how you'd create the ORDERS scenario. We'll configure a 1 year retention for order-related messages:

    nats stream add ORDERS --subjects "ORDERS.*" --ack --max-msgs=-1 --max-bytes=-1 --max-age=1y --storage file --retention limits --max-msg-size=-1 --discard=old
    nats consumer add ORDERS NEW --filter ORDERS.received --ack explicit --pull --deliver all --max-deliver=-1 --sample 100
    nats consumer add ORDERS DISPATCH --filter ORDERS.processed --ack explicit --pull --deliver all --max-deliver=-1 --sample 100
    nats consumer add ORDERS MONITOR --filter '' --ack none --target monitor.ORDERS --deliver last --replay instant

    JetStream Walkthrough

    The following is a small walkthrough on creating a stream and a consumer, and interacting with the stream using the nats CLI.

    hashtag
    Prerequisite: enabling JetStream

    If you are running a local nats-server, stop it and restart it with JetStream enabled using nats-server -js (if that is not already done).

    You can then check that JetStream is enabled by using

    If you see the following, then JetStream is not enabled:

    hashtag
    1. Creating a stream

    Let's start by creating a stream to capture and store the messages published on the subject "foo".

    Enter nats stream add <Stream name> (in the examples below we will name the stream "my_stream"), then enter "foo" as the subject name and hit return to use the defaults for all the other stream attributes:

    You can then check the information about the stream you just created:

    hashtag
    2. Publish some messages into the stream

    Let's now start a publisher

    As messages are being published on the subject "foo", they are also captured and stored in the stream. You can check that by using nats stream info my_stream, and even look at the messages themselves using nats stream view my_stream or nats stream get my_stream.

    hashtag
    3. Creating a consumer

    Now at this point, if you create a 'Core NATS' (i.e. non-streaming) subscriber to listen for messages on the subject 'foo', you will only receive the messages published after the subscriber was started; this is normal and expected for basic 'Core NATS' messaging. In order to receive a 'replay' of all the messages contained in the stream (including those that were published in the past), we will now create a 'consumer'.

    We can administratively create a consumer using the 'nats consumer add' command. In this example we will name the consumer "pull_consumer", and we will leave the delivery subject empty (i.e. just hit return at the prompt) because we are creating a 'pull consumer', and select 'all' for the start policy; you can then just use the defaults and hit return for all the other prompts. The stream the consumer is created on should be the stream 'my_stream' we just created above.

    You can check on the status of any consumer at any time using nats consumer info, view the messages in the stream using nats stream view my_stream or nats stream get my_stream, or even remove individual messages from the stream using nats stream rmm.

    hashtag
    4. Subscribing from the consumer

    Now that the consumer has been created and since there are messages in the stream we can now start subscribing to the consumer:

    This will print out all the messages in the stream starting with the first message (which was published in the past) and continuing with new messages as they are published until the count is reached.

    Note that in this example we are creating a pull consumer with a 'durable' name, which means that the consumer can be shared between as many consuming processes as you want. For example, instead of running a single nats consumer next with a count of 1000 messages, you could have started two instances of nats consumer next, each with a message count of 500, and you would see the consumption of the messages from the consumer distributed between those instances of nats.

    hashtag
    Replaying the messages again

    Once you have iterated over all the messages in the stream with the consumer, you can get them again by simply creating a new consumer or by deleting that consumer (nats consumer rm) and re-creating it (nats consumer add).

    hashtag
    5. Cleaning up

    You can clean up a stream, releasing the resources associated with it (e.g. the messages stored in the stream), using nats stream purge.

    You can also delete a stream (which will also automatically delete all of the consumers that may be defined on that stream) using nats stream rm.

    Key/Value Store Walkthrough

    The Key/Value Store is a JetStream feature, so we need to verify it is enabled by

    which may return

    In this case, you should enable JetStream.

    hashtag
    Prerequisite: enabling JetStream

    If you are running a local nats-server, stop it and restart it with JetStream enabled using nats-server -js (if that is not already done).

    You can then check that JetStream is enabled by using

    hashtag
    Creating a KV bucket

    A 'KV bucket' is like a stream; you need to create it before using it, as in nats kv add <KV Bucket Name>:

    hashtag
    Storing a value

    Now that we have a bucket, we can assign, or 'put', a value to a specific key:

    which should return the key's value Value1

    hashtag
    Getting a value

    We can fetch, or 'get', the value for a key "Key1":

    hashtag
    Deleting a value

    You can always delete a key and its value by using

    It is harmless to delete a non-existent key.

    hashtag
    Atomic operations

    K/V Stores can also be used in concurrent design patterns, such as semaphores, by using atomic 'create' and 'update' operations.

    E.g. a client wanting exclusive use of a file can lock it by using create to add a key whose value is the file name, and deleting this key after completing use of that file. A client can increase resilience against failure by using a timeout for the bucket containing this key. The client can use update with a revision number to keep the lock alive.

    Updates can also be used for more fine-grained concurrency control, sometimes known as optimistic locking, where multiple clients can try a task, but only one can successfully complete it.
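The semantics above can be illustrated with a plain in-memory sketch. The hypothetical Bucket class below is this document's illustration, not the NATS client API (where revision numbers come from the underlying stream sequence), but the success/failure rules mirror the bucket's per-key behavior:

```python
# In-memory sketch of a KV bucket's atomic create/update semantics.
# Illustration only; not the NATS client API.

class Bucket:
    def __init__(self):
        self._data = {}   # key -> (value, revision)
        self._rev = 0     # monotonically increasing revision counter

    def create(self, key, value):
        """Succeeds only if the key does not exist (compare-to-null-and-set)."""
        if key in self._data:
            raise KeyError(f"wrong last sequence: key exists: {key}")
        self._rev += 1
        self._data[key] = (value, self._rev)
        return self._rev

    def update(self, key, value, revision):
        """Compare-and-swap: succeeds only if `revision` matches the key's current revision."""
        _, current = self._data[key]
        if current != revision:
            raise ValueError(f"wrong last sequence: {current}")
        self._rev += 1
        self._data[key] = (value, self._rev)
        return self._rev

    def delete(self, key):
        self._data.pop(key, None)

bucket = Bucket()
rev = bucket.create("Semaphore1", "Value1")   # first caller wins the lock
# bucket.create("Semaphore1", "...")          # a concurrent create would now fail
bucket.update("Semaphore1", "Value2", rev)    # CAS with the known revision succeeds
bucket.delete("Semaphore1")                   # releasing the lock lets others create it
```

The two failure modes here correspond to the "key exists" and "wrong last sequence" CLI errors shown in the create and update sections below.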

    hashtag
    Create (aka exclusive locking)

    Create a lock/semaphore with the create operation.

    Only one create can succeed: first come, first served. All concurrent attempts will result in an error until the key is deleted.

    hashtag
    Update with CAS (aka optimistic locking)

    We can also atomically update a key, an operation sometimes known as CAS (compare-and-swap), by passing an additional parameter, revision:

    A second attempt with the same revision 13 will fail:

    hashtag
    Watching a K/V Store

    An unusual functionality of a K/V Store is being able to 'watch' a bucket, or a specific key in that bucket, and receive real-time updates to changes in the store.

    For the example above, run nats kv watch my-kv. This will start a watcher on the bucket we have just created earlier. By default, the KV bucket has a history size of one, and so it only remembers the last change. In our case, the watcher should see a delete of the value associated with the key "Key1":

    If we now concurrently change the value of the key in 'my-kv' by

    The watcher will see that change:

    hashtag
    Cleaning up

    When you are finished using a bucket, you can delete the bucket, and release its resources, by using the rm command:

    Key/Value Store

    JetStream, the persistence layer of NATS, not only allows for the higher qualities of service and features associated with 'streaming', but it also enables some functionalities not found in messaging systems.

    One such feature is the Key/Value store functionality, which allows client applications to create buckets and use them as immediately (as opposed to eventually) consistent, persistent associative arrays (or maps). Note that this is an abstraction on top of the Stream functionality: buckets are materialized as Streams (with a name starting with KV_). Everything you can do with a bucket you can do with a Stream, but you ultimately have more functionality, flexibility and control when using the Stream functionality directly.

    Do note that while we guarantee immediate consistency when it comes to monotonic writes and monotonic reads, we do not guarantee read-your-writes at this time, as reads through direct get requests may be served by followers or mirrors. More consistent results can be achieved by sending get requests to the underlying stream leader of the Key/Value store.

    hashtag
    Managing a Key Value store

    1. Create a bucket, which corresponds to a stream in the underlying storage. Define KV/Stream limits as appropriate.

    2. Use the operations below.

    hashtag
    Map style operations

    You can use KV buckets to perform the typical operations you would expect from an immediately consistent key/value store:

    • put: associate a value with a key

    • get: retrieve the value associated with a key

    • delete: clear any value associated with a key

    hashtag
    Atomic operations used for locking and concurrency control

    • create: associate the value with a key only if there is currently no value associated with that key (i.e. compare to null and set)

    • update: compare and set (aka compare and swap) the value for a key

    hashtag
    Limiting size, TTL etc.

    You can set limits for your buckets, such as:

    • the maximum size of the bucket

    • the maximum size for any single value

    • a TTL: how long the store will keep values for

    hashtag
    Treating the Key Value store as a message stream

    Finally, you can even do things that typically cannot be done with a Key/Value Store:

    • watch: watch for changes happening for a key, which is similar to subscribing (in the publish/subscribe sense) to the key: the watcher receives updates due to put or delete operations on the key pushed to it in real-time as they happen

    • watch all: watch for all the changes happening on all the keys in the bucket

    • history: retrieve a history of the values (and delete operations) associated with each key over time (by default the history of buckets is set to 1, meaning that only the latest value/operation is stored)

    hashtag
    Notes

    A valid key can contain the following characters: a-z, A-Z, 0-9, _, -, ., = and /, i.e. it can be a dot-separated list of tokens (which means that you can then use wildcards to match hierarchies of keys when watching a bucket). The value can be any byte array.
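That key syntax rule can be sketched as a small validator. The regular expression below is this document's own construction from the character list above, not code from a NATS client library (clients may apply additional checks):

```python
import re

# One run of allowed characters: a-z, A-Z, 0-9, _, -, ., = and /.
# The regex is an illustration built from the rule quoted above,
# not taken from a NATS client implementation.
VALID_KEY = re.compile(r"^[-/=.a-zA-Z0-9_]+$")

def is_valid_key(key: str) -> bool:
    # Reject empty keys and empty dot-separated tokens
    # (leading dot, trailing dot, or consecutive dots).
    if not key or key.startswith(".") or key.endswith(".") or ".." in key:
        return False
    return VALID_KEY.match(key) is not None

print(is_valid_key("config/service.A=prod"))  # True
print(is_valid_key("bad key with spaces"))    # False
```

Because keys are dot-separated tokens, a key such as config.serviceA.timeout can later be matched with subject-style wildcards when watching a bucket.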

    Example

    Streams with source and mirror configurations are best managed through a client API. If you intend to create such a configuration from the command line with the NATS CLI, you should use a JSON configuration.

    hashtag
    Example stream configuration with two sources

    Minimal example

    With additional options

    nats account info
    JetStream Account Information:
    
       JetStream is not supported in this account
    purge: clear all the values associated with all keys
  • keys: get a copy of all of the keys (with a value or operation associated with it)

    hashtag
    Example stream configuration with mirror

    Minimal example

    With additional options

    nats stream add --config stream_with_sources.json
    nats account info
    Account Information
    
                               User: 
                            Account: $G
                            Expires: never
                          Client ID: 5
                          Client IP: 127.0.0.1
                                RTT: 128µs
                  Headers Supported: true
                    Maximum Payload: 1.0 MiB
                      Connected URL: nats://127.0.0.1:4222
                  Connected Address: 127.0.0.1:4222
                Connected Server ID: NAMR7YBNZA3U2MXG2JH3FNGKBDVBG2QTMWVO6OT7XUSKRINKTRFBRZEC
           Connected Server Version: 2.11.0-dev
                     TLS Connection: no
    
    JetStream Account Information:
    
    Account Usage:
    
                            Storage: 0 B
                             Memory: 0 B
                            Streams: 0
                          Consumers: 0
    
    Account Limits:
    
                Max Message Payload: 1.0 MiB
    
      Tier: Default:
    
          Configuration Requirements:
    
            Stream Requires Max Bytes Set: false
             Consumer Maximum Ack Pending: Unlimited
    
          Stream Resource Usage Limits:
    
                                   Memory: 0 B of Unlimited
                        Memory Per Stream: Unlimited
                                  Storage: 0 B of Unlimited
                       Storage Per Stream: Unlimited
                                  Streams: 0 of Unlimited
                                Consumers: 0 of Unlimited
    JetStream Account Information:
    
       JetStream is not supported in this account
    nats stream add my_stream
    ? Subjects foo
    ? Storage file
    ? Replication 1
    ? Retention Policy Limits
    ? Discard Policy Old
    ? Stream Messages Limit -1
    ? Per Subject Messages Limit -1
    ? Total Stream Size -1
    ? Message TTL -1
    ? Max Message Size -1
    ? Duplicate tracking time window 2m0s
    ? Allow message Roll-ups No
    ? Allow message deletion Yes
    ? Allow purging subjects or the entire stream Yes
    Stream my_stream was created
    
    Information for Stream my_stream created 2024-06-07 12:29:36
    
                  Subjects: foo
                  Replicas: 1
                   Storage: File
    
    Options:
    
                 Retention: Limits
           Acknowledgments: true
            Discard Policy: Old
          Duplicate Window: 2m0s
                Direct Get: true
         Allows Msg Delete: true
              Allows Purge: true
            Allows Rollups: false
    
    Limits:
    
          Maximum Messages: unlimited
       Maximum Per Subject: unlimited
             Maximum Bytes: unlimited
               Maximum Age: unlimited
      Maximum Message Size: unlimited
         Maximum Consumers: unlimited
    
    State:
    
                  Messages: 0
                     Bytes: 0 B
            First Sequence: 0
             Last Sequence: 0
          Active Consumers: 0
    nats stream info my_stream
    Information for Stream my_stream created 2024-06-07 12:29:36
    
                  Subjects: foo
                  Replicas: 1
                   Storage: File
    
    Options:
    
                 Retention: Limits
           Acknowledgments: true
            Discard Policy: Old
          Duplicate Window: 2m0s
                Direct Get: true
         Allows Msg Delete: true
              Allows Purge: true
            Allows Rollups: false
    
    Limits:
    
          Maximum Messages: unlimited
       Maximum Per Subject: unlimited
             Maximum Bytes: unlimited
               Maximum Age: unlimited
      Maximum Message Size: unlimited
         Maximum Consumers: unlimited
    
    State:
    
                  Messages: 0
                     Bytes: 0 B
            First Sequence: 0
             Last Sequence: 0
          Active Consumers: 0
    nats pub foo --count=1000 --sleep 1s "publication #{{.Count}} @ {{.TimeStamp}}"
    nats consumer add
    ? Consumer name pull_consumer
    ? Delivery target (empty for Pull Consumers) 
    ? Start policy (all, new, last, subject, 1h, msg sequence) all
    ? Acknowledgment policy explicit
    ? Replay policy instant
    ? Filter Stream by subjects (blank for all) 
    ? Maximum Allowed Deliveries -1
    ? Maximum Acknowledgments Pending 0
    ? Deliver headers only without bodies No
    ? Add a Retry Backoff Policy No
    ? Select a Stream my_stream
    Information for Consumer my_stream > pull_consumer created 2024-06-07T12:32:09-05:00
    
    Configuration:
    
                        Name: pull_consumer
                   Pull Mode: true
              Deliver Policy: All
                  Ack Policy: Explicit
                    Ack Wait: 30.00s
               Replay Policy: Instant
             Max Ack Pending: 1,000
           Max Waiting Pulls: 512
    
    State:
    
      Last Delivered Message: Consumer sequence: 0 Stream sequence: 0
        Acknowledgment Floor: Consumer sequence: 0 Stream sequence: 0
            Outstanding Acks: 0 out of maximum 1,000
        Redelivered Messages: 0
        Unprocessed Messages: 74
               Waiting Pulls: 0 of maximum 512
    nats consumer next my_stream pull_consumer --count 1000
    nats account info
    Connection Information:
    
                   Client ID: 6
                   Client IP: 127.0.0.1
                         RTT: 64.996µs
           Headers Supported: true
             Maximum Payload: 1.0 MiB
               Connected URL: nats://127.0.0.1:4222
           Connected Address: 127.0.0.1:4222
         Connected Server ID: ND2XVDA4Q363JOIFKJTPZW3ZKZCANH7NJI4EJMFSSPTRXDBFG4M4C34K
    
    JetStream Account Information:
    
               Memory: 0 B of Unlimited
              Storage: 0 B of Unlimited
              Streams: 0 of Unlimited
            Consumers: 0 of Unlimited 
    nats kv add my-kv
    my_kv Key-Value Store Status
    
             Bucket Name: my-kv
             History Kept: 1
            Values Stored: 0
               Compressed: false
       Backing Store Kind: JetStream
              Bucket Size: 0 B
      Maximum Bucket Size: unlimited
       Maximum Value Size: unlimited
              Maximum Age: unlimited
         JetStream Stream: KV_my-kv
                  Storage: File
    nats kv put my-kv Key1 Value1
    nats kv get my-kv Key1
    my-kv > Key1 created @ 12 Oct 21 20:08 UTC
    
    Value1
    nats kv del my-kv Key1
    nats kv create my-sem Semaphore1 Value1
    nats kv create my-sem Semaphore1 Value1
    nats: error: nats: wrong last sequence: 1: key exists
    nats kv update my-sem Semaphore1 Value2 13
    nats kv update my-sem Semaphore1 Value2 13
    nats: error: nats: wrong last sequence: 14
    nats kv watch my-kv
    [2021-10-12 13:15:03] DEL my-kv > Key1
    nats kv put my-kv Key1 Value2
    [2021-10-12 13:25:14] PUT my-kv > Key1: Value2
    nats kv rm my-kv
    {
      "name": "SOURCE_TARGET",
      "subjects": [
        "foo1.ext.*",
        "foo2.ext.*"
      ],
      "discard": "old",
      "duplicate_window": 120000000000,
      "sources": [
        {
          "name": "SOURCE1_ORIGIN"
        }
      ],
      "deny_delete": false,
      "sealed": false,
      "max_msg_size": -1,
      "allow_rollup_hdrs": false,
      "max_bytes": -1,
      "storage": "file",
      "allow_direct": false,
      "max_age": 0,
      "max_consumers": -1,
      "max_msgs_per_subject": -1,
      "num_replicas": 1,
      "deny_purge": false,
      "compression": "none",
      "max_msgs": -1,
      "retention": "limits",
      "mirror_direct": false
    }
    {
      "name": "SOURCE_TARGET",
      "subjects": [
        "foo1.ext.*",
        "foo2.ext.*"
      ],
      "discard": "old",
      "duplicate_window": 120000000000,
      "sources": [
        {
          "name": "SOURCE1_ORIGIN",
          "filter_subject": "foo1.bar",
          "opt_start_seq": 42,
          "external": {
            "deliver": "",
            "api": "$JS.domainA.API"
          }
        },
        {
          "name": "SOURCE2_ORIGIN",
          "filter_subject": "foo2.bar"
        }
      ],
      "consumer_limits": {
        
      },
      "deny_delete": false,
      "sealed": false,
      "max_msg_size": -1,
      "allow_rollup_hdrs": false,
      "max_bytes": -1,
      "storage": "file",
      "allow_direct": false,
      "max_age": 0,
      "max_consumers": -1,
      "max_msgs_per_subject": -1,
      "num_replicas": 1,
      "deny_purge": false,
      "compression": "none",
      "max_msgs": -1,
      "retention": "limits",
      "mirror_direct": false
    }
    {
      "name": "MIRROR_TARGET",
      "discard": "old",
      "mirror": {
        "name": "MIRROR_ORIGIN"
      },
      "deny_delete": false,
      "sealed": false,
      "max_msg_size": -1,
      "allow_rollup_hdrs": false,
      "max_bytes": -1,
      "storage": "file",
      "allow_direct": false,
      "max_age": 0,
      "max_consumers": -1,
      "max_msgs_per_subject": -1,
      "num_replicas": 1,
      "deny_purge": false,
      "compression": "none",
      "max_msgs": -1,
      "retention": "limits",
      "mirror_direct": false
    }
    {
      "name": "MIRROR_TARGET",
      "discard": "old",
      "mirror": {
        "opt_start_time": "2024-07-11T08:57:20.4441646Z",
        "external": {
          "deliver": "",
          "api": "$JS.domainB.API"
        },
        "name": "MIRROR_ORIGIN"
      },
      "consumer_limits": {
        
      },
      "deny_delete": false,
      "sealed": false,
      "max_msg_size": -1,
      "allow_rollup_hdrs": false,
      "max_bytes": -1,
      "storage": "file",
      "allow_direct": false,
      "max_age": 0,
      "max_consumers": -1,
      "max_msgs_per_subject": -1,
      "num_replicas": 1,
      "deny_purge": false,
      "compression": "none",
      "max_msgs": -1,
      "retention": "limits",
      "mirror_direct": false
    }

    Object Store Walkthrough

    If you are running a local nats-server, stop it and restart it with JetStream enabled using nats-server -js (if that is not already done).

    You can then check that JetStream is enabled by using

    which should output something like:

    If you see the following instead, then JetStream is not enabled:

    hashtag
    Creating an Object Store bucket

    Just like you need to create streams before you can use them, you need to first create an Object Store bucket:

    which outputs

    hashtag
    Putting a file in the bucket

    hashtag
    Putting a file in the bucket by providing a name

    By default the full file path is used as the key. Provide the key explicitly (e.g. a relative path) with --name:

    hashtag
    Listing the objects in a bucket

    hashtag
    Getting an object from the bucket

    hashtag
    Getting an object from the bucket with a specific output path

    By default, the file will be stored relative to the local path under its name (not the full path). To specify an output path use --output
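That default naming rule amounts to taking the final component of the object's key. A one-line sketch (this document's illustration, not the CLI's code):

```python
import os.path

# The object's key may be a full path, but by default the file is written
# locally under just its final name component.
key = "/Users/jnmoyne/Movies/NATS-logo.mov"
print(os.path.basename(key))  # NATS-logo.mov
```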

    hashtag
    Removing an object from the bucket

    hashtag
    Getting information about the bucket

    hashtag
    Watching for changes to a bucket

    hashtag
    Sealing a bucket

    You can seal a bucket, meaning that no further changes are allowed on that bucket.

    hashtag
    Deleting a bucket

    Using nats object rm myobjbucket will delete the bucket and all the files stored in it.

    nats account info
    Connection Information:
    
                   Client ID: 6
                   Client IP: 127.0.0.1
                         RTT: 64.996µs
           Headers Supported: true
             Maximum Payload: 1.0 MiB
               Connected URL: nats://127.0.0.1:4222
           Connected Address: 127.0.0.1:4222
         Connected Server ID: ND2XVDA4Q363JOIFKJTPZW3ZKZCANH7NJI4EJMFSSPTRXDBFG4M4C34K
    
    JetStream Account Information:
    
               Memory: 0 B of Unlimited
              Storage: 0 B of Unlimited
              Streams: 0 of Unlimited
            Consumers: 0 of Unlimited
    JetStream Account Information:
    
       JetStream is not supported in this account
    nats object add myobjbucket
    myobjbucket Object Store Status
    
             Bucket Name: myobjbucket
                Replicas: 1
                      TTL: unlimited
                  Sealed: false
                    Size: 0 B
      Backing Store Kind: JetStream
        JetStream Stream: OBJ_myobjbucket
    nats object put myobjbucket ~/Movies/NATS-logo.mov
    1.5 GiB / 1.5 GiB [====================================================================================]
    
    Object information for myobjbucket > /Users/jnmoyne/Movies/NATS-logo.mov
    
                   Size: 1.5 GiB
      Modification Time: 14 Apr 22 00:34 +0000
                 Chunks: 12,656
                 Digest: sha-256 8ee0679dd1462de393d81a3032d71f43d2bc89c0c8a557687cfe2787e926
    nats object put --name /Movies/NATS-logo.mov myobjbucket ~/Movies/NATS-logo.mov
    1.5 GiB / 1.5 GiB [====================================================================================]
    
    Object information for myobjbucket > /Movies/NATS-logo.mov
    
                   Size: 1.5 GiB
      Modification Time: 14 Apr 22 00:34 +0000
                 Chunks: 12,656
                 Digest: sha-256 8ee0679dd1462de393d81a3032d71f43d2bc89c0c8a557687cfe2787e926
    nats object ls myobjbucket
    ╭───────────────────────────────────────────────────────────────────────────╮
    │                              Bucket Contents                              │
    ├─────────────────────────────────────┬─────────┬───────────────────────────┤
    │ Name                                │ Size    │ Time                      │
    ├─────────────────────────────────────┼─────────┼───────────────────────────┤
    │ /Users/jnmoyne/Movies/NATS-logo.mov │ 1.5 GiB │ 2022-04-13T17:34:55-07:00 │
    │ /Movies/NATS-logo.mov               │ 1.5 GiB │ 2022-04-13T17:35:41-07:00 │
    ╰─────────────────────────────────────┴─────────┴───────────────────────────╯
    nats object get myobjbucket ~/Movies/NATS-logo.mov
    1.5 GiB / 1.5 GiB [====================================================================================]
    
    Wrote: 1.5 GiB to /Users/jnmoyne/NATS-logo.mov in 5.68s average 279 MiB/s
    nats object get myobjbucket --output /temp/Movies/NATS-logo.mov /Movies/NATS-logo.mov
    1.5 GiB / 1.5 GiB [====================================================================================]
    
    Wrote: 1.5 GiB to /temp/Movies/NATS-logo.mov in 5.68s average 279 MiB/s
    nats object rm myobjbucket ~/Movies/NATS-logo.mov
    ? Delete 1.5 GiB byte file myobjbucket > /Users/jnmoyne/Movies/NATS-logo.mov? Yes
    Removed myobjbucket > /Users/jnmoyne/Movies/NATS-logo.mov
    myobjbucket Object Store Status
    
             Bucket Name: myobjbucket
                Replicas: 1
                      TTL: unlimited
                  Sealed: false
                    Size: 16 MiB
      Backing Store Kind: JetStream
        JetStream Stream: OBJ_myobjbucket
    nats object info myobjbucket
    myobjbucket Object Store Status
    
             Bucket Name: myobjbucket
                Replicas: 1
                      TTL: unlimited
                  Sealed: false
                    Size: 1.6 GiB
      Backing Store Kind: JetStream
        JetStream Stream: OBJ_myobjbucket
    nats object watch myobjbucket
    [2022-04-13 17:51:28] PUT myobjbucket > /Users/jnmoyne/Movies/NATS-logo.mov: 1.5 GiB bytes in 12,656 chunks
    [2022-04-13 17:53:27] DEL myobjbucket > /Users/jnmoyne/Movies/NATS-logo.mov
    nats object seal myobjbucket
    ? Really seal Bucket myobjbucket, sealed buckets can not be unsealed or modified Yes
    myobjbucket has been sealed
    myobjbucket Object Store Status
    
             Bucket Name: myobjbucket
                Replicas: 1
                      TTL: unlimited
                  Sealed: true
                    Size: 1.6 GiB
      Backing Store Kind: JetStream
        JetStream Stream: OBJ_myobjbucket

    Consumers

    A consumer is a stateful view of a stream. It acts as an interface for clients to consume a subset of messages stored in a stream and will keep track of which messages were delivered and acknowledged by clients.

    Unlike Core NATS, which provides an at-most-once delivery guarantee, a consumer in JetStream can provide an at-least-once delivery guarantee.

    While Streams are responsible for storing the published messages, the consumer is responsible for tracking the delivery and acknowledgments. This tracking ensures that if a message is not acknowledged (un-acked or 'nacked'), the consumer will automatically attempt to re-deliver it. JetStream consumers support various acknowledgment types and policies. If a message is not acknowledged within a user-specified number of delivery attempts, an advisory notification is emitted.
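The delivery-and-acknowledgment tracking described above can be sketched as a toy loop. This is an illustration of the at-least-once idea only, with hypothetical names; it is not the nats-server implementation:

```python
# Toy illustration of at-least-once delivery with redelivery tracking.
# Names (deliver, MAX_DELIVER) are hypothetical, not nats-server internals.

MAX_DELIVER = 3  # user-specified number of delivery attempts

def deliver(messages, handler):
    """Redeliver each message until acked or MAX_DELIVER is reached.
    Returns the messages for which an advisory would be emitted."""
    advisories = []
    for msg in messages:
        for attempt in range(1, MAX_DELIVER + 1):
            if handler(msg, attempt):      # handler returns True to ack
                break                      # acked: stop redelivering
        else:
            advisories.append(msg)         # never acked within MAX_DELIVER
    return advisories

# A flaky handler that only acks on the second delivery attempt:
flaky = lambda msg, attempt: attempt >= 2
assert deliver(["m1", "m2"], flaky) == []   # every message eventually acked
never = lambda msg, attempt: False
assert deliver(["m1"], never) == ["m1"]     # advisory for the unacked message
```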

    hashtag
    Dispatch type - Pull / Push

    Consumers can be push-based, where messages are delivered to a specified subject, or pull-based, which allows clients to request batches of messages on demand. The choice of which kind of consumer to use depends on the use case.

    If there is a need to process messages in an application-controlled manner and easily scale horizontally, you would use a 'pull consumer'. For a simple client application that wants a sequential replay of the messages from a stream, you would use an 'ordered push consumer'. An application that wants to benefit from load balancing or acknowledge messages individually would use a regular push consumer.

    circle-info

    We recommend pull consumers for new projects, in particular when scalability, detailed flow control or error handling are a concern.

    hashtag
    Ordered Consumers

    Ordered consumers are the convenient default type of push and pull consumers, designed for applications that want to efficiently consume a stream for data inspection or analysis.

    • Always ephemeral

    • No acknowledgements (if a gap is detected, the consumer is recreated)

    • Automatic flow control/pull processing

    hashtag
    Persistence - Durable / Ephemeral

    In addition to the choice of being push or pull, a consumer can also be ephemeral or durable. A consumer is considered durable when an explicit name is set on the Durable field when creating the consumer, or when InactiveThreshold is set.

    Durable and ephemeral consumers have the same message delivery semantics, but an ephemeral consumer will not have persisted state or fault tolerance (server memory only) and will be automatically cleaned up (deleted) after a period of inactivity, when no subscriptions are bound to the consumer.

    By default, consumers will have the same replication factor as the stream they consume, and will remain even when there are periods of inactivity (unless InactiveThreshold is set explicitly). Consumers can recover from server and client failure.

    hashtag
    Configuration

    Below is the set of consumer configuration options that can be defined. The Version column indicates the version of nats-server in which the option was introduced. The Editable column indicates whether the option can be edited after the consumer is created.

    hashtag
    General

    Field
    Description
    Version
    Editable

    hashtag
    AckPolicy

    The policy choices include:

    • AckExplicit: The default policy. Each individual message must be acknowledged. Recommended for most reliability and functionality.

    • AckNone: No acknowledgment needed; the server assumes acknowledgment on delivery.

    • AckAll: Acknowledge only the last message received in a series; all previous messages are automatically acknowledged. For a pull consumer, this acknowledges all pending messages across all subscribers.

    If an acknowledgment is required but not received within the AckWait window, the message will be redelivered.

    Warning: The server may accept an acknowledgment that arrives outside the window. For instance, in a queue situation, if a first process fails to acknowledge within the window and the message has been redelivered to another consumer, an acknowledgment from the first consumer will still be considered.

    hashtag
    DeliverPolicy

    The policy choices include:

    • DeliverAll: Default policy. Start receiving from the earliest available message in the stream.

    • DeliverLast: Start with the last message added to the stream, or the last message matching the consumer's filter subject if defined.

    • DeliverLastPerSubject: Start with the latest message for each subject matched by the consumer's filter.

    hashtag
    MaxAckPending

    The MaxAckPending capability provides flow control and applies to both push and pull consumers. For push consumers, MaxAckPending is the only form of flow control. For pull consumers, client-driven message delivery creates implicit one-to-one flow control with subscribers.

    For high throughput, set MaxAckPending to a high value. For applications with high latency due to external services, use a lower value and adjust AckWait to avoid re-deliveries.
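The effect of the pending-ack window can be seen in a small simulation. This is an illustrative model of the flow control, with hypothetical names, not server behavior measured from NATS:

```python
def simulate(total_msgs, max_ack_pending, acks_per_tick):
    """Deliver messages only while the outstanding (unacked) count is
    below max_ack_pending; the consumer acks a fixed number per tick.
    Returns the number of ticks needed to deliver and ack everything."""
    delivered = acked = ticks = pending = 0
    while acked < total_msgs:
        ticks += 1
        # Deliver as many messages as the window allows this tick.
        while delivered < total_msgs and pending < max_ack_pending:
            delivered += 1
            pending += 1
        # The consumer acknowledges up to acks_per_tick messages.
        done = min(acks_per_tick, pending)
        acked += done
        pending -= done
    return ticks

# A window larger than the consumer's ack rate keeps work in flight
# without stalling delivery; a tiny window becomes the bottleneck.
print(simulate(100, max_ack_pending=10, acks_per_tick=5))  # 20
print(simulate(100, max_ack_pending=2, acks_per_tick=5))   # 50
```

In the second run the window (2) is smaller than the consumer's ack rate (5), so the window, not the consumer, limits throughput.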

    hashtag
    FilterSubjects

    A filter subject provides server-side filtering of messages before delivery to clients.

    For example, a stream factory-events with subject factory-events.*.* can have a consumer factory-A with a filter factory-events.A.* to deliver only events for factory A.

    A consumer can have a singular FilterSubject or plural FilterSubjects. Multiple filters can be applied, such as [factory-events.A.*, factory-events.B.*] or specific event types [factory-events.*.item_produced, factory-events.*.item_packaged].

    Warning: For granular consumer permissions, a single filter uses $JS.API.CONSUMER.CREATE.{stream}.{consumer}.{filter} to restrict users to specific filters. Multiple filters use the general $JS.API.CONSUMER.DURABLE.CREATE.{stream}.{consumer}, which does not include the {filter} token. Use a different strategy for granular permissions.
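Filter subjects follow core NATS wildcard rules. A simplified sketch of the matching logic (Python, illustrative only; this is not how the server is implemented):

```python
def subject_matches(filter_subject, subject):
    """NATS-style matching: '*' matches exactly one token and '>' matches
    one or more trailing tokens."""
    f = filter_subject.split(".")
    s = subject.split(".")
    for i, tok in enumerate(f):
        if tok == ">":
            return len(s) >= i + 1  # '>' must match at least one token
        if i >= len(s) or (tok != "*" and tok != s[i]):
            return False
    return len(f) == len(s)

def matches_any(filters, subject):
    """A consumer with multiple FilterSubjects delivers a message when any
    filter matches."""
    return any(subject_matches(f, subject) for f in filters)
```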

    hashtag
    Pull-specific

    These options apply only to pull consumers. For configuration examples, see NATS by Example.

    Field
    Description
    Version
    Editable

    hashtag
    Push-specific

    These options apply only to push consumers. For configuration examples, see NATS by Example.

    Field
    Description
    Version
    Editable

    • Single-threaded dispatching

    • No load balancing

    AckWait

    The duration that the server will wait for an acknowledgment for any individual message once it has been delivered to a consumer. If an acknowledgment is not received in time, the message will be redelivered. This setting is only effective when BackOff is not configured.

    2.2.0

    Yes

    DeliverPolicy

    The point in the stream from which to receive messages: DeliverAll, DeliverLast, DeliverNew, DeliverByStartSequence, DeliverByStartTime, or DeliverLastPerSubject.

    2.2.0

    No

    OptStartSeq

    Used with the DeliverByStartSequence deliver policy.

    2.2.0

    No

    OptStartTime

    Used with the DeliverByStartTime deliver policy.

    2.2.0

    No

    Description

    A description of the consumer. This can be particularly useful for ephemeral consumers to indicate their purpose since a durable name cannot be provided.

    2.3.3

    Yes

    InactiveThreshold

    The duration after which the server will clean up consumers that have been inactive. Prior to 2.9, this only applied to ephemeral consumers.

    2.2.0

    Yes

    MaxAckPending

    Defines the maximum number of messages, without acknowledgment, that can be outstanding. Once this limit is reached, message delivery will be suspended. This limit applies across all of the consumer's bound subscriptions. A value of -1 means there can be any number of pending acknowledgments (i.e., no flow control). The default is 1000.

    2.2.0

    Yes

    MaxDeliver

    The maximum number of times a specific message delivery will be attempted. Applies to any message that is re-sent due to acknowledgment policy (i.e., due to a negative acknowledgment or no acknowledgment sent by the client). The default is -1 (redeliver until acknowledged). Messages that have reached the maximum delivery count will stay in the stream.

    2.2.0

    Yes

    Backoff

    A sequence of delays controlling the re-delivery of messages on acknowledgment timeout (but not on nak). The sequence length must be less than or equal to MaxDeliver. If backoff is not set, a timeout will result in immediate re-delivery. E.g., MaxDeliver=5 backoff=[5s, 30s, 300s, 3600s, 84000s] will re-deliver a message 5 times over one day. When MaxDeliver is larger than the backoff list, the last delay in the list will apply for the remaining deliveries. Note that backoff is NOT applied to nak'ed messages: a nak will result in immediate re-delivery unless nakWithDelay is used to set the re-delivery delay explicitly. When BackOff is set, it overrides AckWait entirely; the first value in the BackOff list determines the AckWait value.

    2.7.1

    Yes
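The resulting redelivery schedule can be sketched as follows (Python, illustrative; assumes the documented behavior that the last delay repeats when MaxDeliver exceeds the list length):

```python
def redelivery_delays(backoff, max_deliver):
    """Delay (in seconds) applied before each delivery attempt after the
    first failure; the last backoff value repeats for remaining attempts."""
    assert len(backoff) <= max_deliver  # required by the server
    return [backoff[min(i, len(backoff) - 1)] for i in range(max_deliver)]

# MaxDeliver=5 with backoff [5, 30, 300, 3600, 84000] spreads the
# deliveries over roughly one day (87935 seconds in total).
```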

    ReplayPolicy

    If the policy is ReplayOriginal, the messages in the stream will be pushed to the client at the same rate they were originally received, simulating the original timing. If the policy is ReplayInstant (default), the messages will be pushed to the client as fast as possible while adhering to the acknowledgment policy, Max Ack Pending, and the client's ability to consume those messages.

    2.2.0

    No

    Replicas

    Sets the number of replicas for the consumer's state. By default, when the value is set to zero, consumers inherit the number of replicas from the stream.

    2.8.3

    Yes

    MemoryStorage

    If set, forces the consumer state to be kept in memory rather than inherit the storage type of the stream (default is file storage). This reduces I/O from acknowledgments, useful for ephemeral consumers.

    2.8.3

    No

    SampleFrequency

    Sets the percentage of acknowledgments that should be sampled for observability, 0-100. This value is a string and allows both 30 and 30% as valid values.

    2.2.0

    Yes

    Metadata

    A set of application-defined key-value pairs for associating metadata with the consumer.

    2.10.0

    Yes

    FilterSubjects

    A set of subjects that overlap with the subjects bound to the stream to filter delivery to subscribers. Note: This cannot be used with the FilterSubject field.

    2.10.0

    Yes

    HeadersOnly

    Delivers only the headers of messages in the stream, adding a Nats-Msg-Size header indicating the size of the removed payload.

    2.6.2

    Yes


    MaxRequestMaxBytes

    The maximum total bytes that can be requested in a given batch. When set with MaxRequestBatch, the batch size will be constrained by whichever limit is hit first.

    2.8.3

    Yes

    IdleHeartbeat

    If set, the server will regularly send a status message to the client during inactivity, indicating that the JetStream service is up and running. The status message will have a code of 100 and no reply address. Note: This mechanism is handled transparently by supported clients.

    2.2.0

    Yes

    RateLimit

    Throttles the delivery of messages to the consumer, in bits per second.

    2.2.0

    Yes

    Durable

    If set, clients can have subscriptions bind to the consumer and resume until the consumer is explicitly deleted. A durable name cannot contain whitespace, ., *, >, path separators (forward or backward slash), or non-printable characters.

    2.2.0

    No

    FilterSubject

    A subject that overlaps with the subjects bound to the stream to filter delivery to subscribers. Note: This cannot be used with the FilterSubjects field.

    2.2.0

    Yes

    AckPolicy

    The requirement of client acknowledgments, either AckExplicit, AckNone, or AckAll.

    2.2.0

    No

    MaxWaiting

    The maximum number of waiting pull requests.

    2.2.0

    No

    MaxRequestExpires

    The maximum duration a single pull request will wait for messages to be available to pull.

    2.7.0

    Yes

    MaxRequestBatch

    The maximum batch size a single pull request can make. When set with MaxRequestMaxBytes, the batch size will be constrained by whichever limit is hit first.

    2.7.0

    Yes

    DeliverSubject

    The subject to deliver messages to. Setting this field decides whether the consumer is push or pull-based. With a deliver subject, the server will push messages to clients subscribed to this subject.

    2.2.0

    No

    DeliverGroup

    The queue group name used to distribute messages among subscribers. Analogous to a queue group in core NATS.

    2.2.0

    Yes

    FlowControl

    Enables per-subscription flow control using a sliding-window protocol. This protocol relies on the server and client exchanging messages to regulate when and how many messages are pushed to the client. This one-to-one flow control mechanism works in tandem with the one-to-many flow control imposed by MaxAckPending across all subscriptions bound to a consumer.

    2.2.0

    Yes



    Streams

    Streams are message stores; each stream defines how messages are stored and what the limits (duration, size, interest) of the retention are. Streams consume normal NATS subjects: any message published on those subjects will be captured in the defined storage system. You can do a normal publish to the subject for unacknowledged delivery, though it's better to use the JetStream publish calls, as the JetStream server will reply with an acknowledgement that the message was successfully stored.

    (Diagram: an ORDERS stream capturing all ORDERS.* subjects)

    The diagram above shows the concept of storing all ORDERS.* in the Stream even though there are many types of order-related messages. We'll show how you can selectively consume subsets of messages later. Relatively speaking, the Stream is the most resource-consuming component, so being able to combine related data in this manner is important to consider.

    Streams can consume many subjects. Here we have ORDERS.* but we could also consume SHIPPING.state into the same Stream should that make sense.

    hashtag
    Stream limits and message retention

    Streams support various retention policies which define when messages in the stream can be automatically deleted, such as when stream limits are hit (like max count, size or age of messages), as well as more novel options that apply on top of the limits, such as interest-based retention or work-queue semantics (see Retention Policy).

    Upon reaching message limits, the server will automatically discard messages either by removing the oldest messages to make room for new ones (DiscardOld) or by refusing to store new messages (DiscardNew). For more details, see Discard Policy.

    Streams support deduplication using a Nats-Msg-Id header and a sliding window within which to track duplicate messages. See the Message Deduplication section.
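The sliding-window semantics can be modeled like this (Python sketch; `DedupWindow` is a hypothetical helper, times in seconds):

```python
class DedupWindow:
    """Model of Nats-Msg-Id de-duplication: a message is dropped when the
    same ID was already seen within the duplicate window."""

    def __init__(self, window):
        self.window = window
        self.seen = {}  # msg_id -> time first seen

    def accept(self, msg_id, now):
        last = self.seen.get(msg_id)
        if last is not None and now - last < self.window:
            return False  # duplicate within the window: rejected
        self.seen[msg_id] = now
        return True
```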

    For examples of how to configure streams with your preferred NATS client, see NATS by Example.

    hashtag
    Configuration

    Below are the set of stream configuration options that can be defined. The Version column indicates the version of the server in which the option was introduced. The Editable column indicates whether the option can be edited after the stream is created. See client-specific examples here.

    Field
    Description
    Version
    Editable

    hashtag
    StorageType

    The storage types include:

    • File (default) - Uses file-based storage for stream data.

    • Memory - Uses memory-based storage for stream data.

    hashtag
    Subjects

    Note: a stream configured as a mirror cannot be configured with a set of subjects. A mirror implicitly sources a subset of the origin stream (optionally with a filter), but does not subscribe to additional subjects.

    If no explicit subject is specified, the default subject will be the same as the stream name. Multiple subjects can be specified and edited over time. Note: if messages are stored by a stream on a subject that is subsequently removed from the stream config, consumers will still observe those messages if their subject filter overlaps.

    hashtag
    RetentionPolicy

    The retention options include:

    • LimitsPolicy (default) - Retention based on the various limits that are set including: MaxMsgs, MaxBytes, MaxAge, and MaxMsgsPerSubject. If any of these limits are set, whichever limit is hit first will cause the automatic deletion of the respective message(s). See a full code example.

    • WorkQueuePolicy - Retention with the typical behavior of a FIFO queue. Each message can be consumed only once. This is enforced by only allowing one consumer to be created per subject for a work-queue stream (i.e. the consumers' subject filter(s) must not overlap). Once a given message is ack'ed, it will be deleted from the stream. See a full code example.

    • InterestPolicy - Retention based on the consumer interest in the stream and messages. The base case is that there are zero consumers defined for a stream; if messages are published to the stream, they will be immediately deleted since there is no interest. This implies that consumers need to be bound to the stream ahead of messages being published to the stream. Once a given message is ack'ed by all consumers filtering on the subject, the message is deleted (same behavior as WorkQueuePolicy). See a full code example.

    circle-exclamation

    If InterestPolicy or WorkQueuePolicy is chosen for a stream, note that any limits, if defined, will still be enforced. For example, given a work-queue stream, if MaxMsgs is set with the default discard policy of DiscardOld, messages will be automatically deleted even if no consumer received them.

    circle-info

    If the InterestPolicy is chosen for a stream, note that when a consumer is deleted, either manually or as part of a server cleanup when the consumer's InactiveThreshold is reached, and that consumer was the last one that marked interest in a set of subjects, the server may not immediately delete all messages that belong to that set of subjects, for performance reasons. Instead, deletion may be deferred until those messages reach the start of the stream.

    circle-info

    WorkQueuePolicy streams will only delete messages enforced by limits or when a message has been successfully Ack’d by its consumer. Messages that have attempted redelivery and have reached MaxDeliver attempts for the consumer will remain in the stream and must be manually deleted via the JetStream API.

    hashtag
    DiscardPolicy

    The discard behavior applies only for streams that have at least one limit defined. The options include:

    • DiscardOld (default) - This policy will delete the oldest messages in order to maintain the limit. For example, if MaxAge is set to one minute, the server will automatically delete messages older than one minute with this policy.

    • DiscardNew - This policy will reject new messages from being appended to the stream if it would exceed one of the limits. An extension to this policy is DiscardNewPerSubject which will apply this policy on a per-subject basis within the stream.
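A minimal model of the two policies against a MaxMsgs limit (Python sketch, illustrative only):

```python
def publish(stream, msg, max_msgs, policy):
    """Apply MaxMsgs with DiscardOld (drop oldest) or DiscardNew (reject).
    Returns True if the message was stored."""
    if len(stream) < max_msgs:
        stream.append(msg)
        return True
    if policy == "DiscardOld":
        stream.pop(0)  # make room by deleting the oldest message
        stream.append(msg)
        return True
    return False  # DiscardNew rejects the write
```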

    hashtag
    Placement

    Refers to the placement of the stream assets (data) within a NATS deployment, be it a single cluster or a supercluster. A given stream, including all of its replicas (not mirrors), is bound to a single cluster. So when creating or moving a stream, a cluster will be chosen to host the assets.

    Without declaring explicit placement for a stream, by default, the stream will be created within the cluster that the client is connected to assuming it has sufficient storage available.

    By declaring stream placement, where these assets are located can be controlled explicitly. This is generally useful to co-locate with the most active clients (publishers or consumers) or may be required for data sovereignty reasons.

    Placement is supported in all client SDKs as well as the CLI. For example, adding a stream via the CLI and placing it in a specific cluster looks like this:
    nats stream add --cluster aws-us-east1-c1

    For this to work, all servers in a given cluster must define the name field within the server configuration block.
    cluster {
      name: aws-us-east1-c1
      # etc..
    }

    If you have multiple clusters that form a supercluster, then each is required to have a different name.

    Another placement option is tags. Each server can have its own set of tags, defined in configuration, typically describing properties such as geography, hosting provider, or sizing tier. In addition, tags are often used in conjunction with the jetstream.unique_tag config option to ensure that replicas are placed on servers having different values for the tag.

    For example, servers A, B, and C in the above cluster might all have the same configuration except for the availability zone they are deployed to.
    // Server A
    server_tags: ["cloud:aws", "region:us-east1", "az:a"]

    jetstream: {
      unique_tag: "az"
    }

    // Server B
    server_tags: ["cloud:aws", "region:us-east1", "az:b"]

    jetstream: {
      unique_tag: "az"
    }

    // Server C
    server_tags: ["cloud:aws", "region:us-east1", "az:c"]

    jetstream: {
      unique_tag: "az"
    }

    Now we can create a stream by using tags, for example indicating we want a stream in us-east1.
    nats stream add --tag region:us-east1

    If we had a second cluster in Google Cloud with the same region tag, the stream could be placed in either the AWS or GCP cluster. However, the unique_tag constraint ensures each replica will be placed in a different AZ in the cluster that was selected implicitly by the placement tags.
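The effect of jetstream.unique_tag can be sketched as a placement check (Python; `pick_replica_servers` is a hypothetical helper that ignores capacity and the other criteria real placement uses):

```python
def pick_replica_servers(servers, replicas, unique_tag):
    """Choose servers for replicas such that no two chosen servers share the
    same value for `unique_tag` (models jetstream.unique_tag)."""
    chosen, used = [], set()
    for name, tags in servers.items():
        value = tags.get(unique_tag)
        if value is not None and value not in used:
            chosen.append(name)
            used.add(value)
        if len(chosen) == replicas:
            return chosen
    raise ValueError("not enough servers with distinct tag values")
```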

    Although less common, note that both the cluster and tags can be used for placement. This would be used if a single cluster contains servers with different properties.

    hashtag
    Sources and Mirrors

    When a stream is configured with a source or mirror, it will automatically and asynchronously replicate messages from the origin stream. There are several options when declaring the configuration.

    A source or mirror stream can have its own retention policy, replication, and storage type. Changes to the source or mirror, e.g. deleting messages or publishing, are not reflected on the origin stream.

    circle-info

    Sources is a generalization of Mirror and allows for sourcing data from one or more streams concurrently. We suggest using Sources in new configurations. If you require the target stream to act as a read-only replica, either:

    • Configure the stream without listen subjects, or

    • Temporarily disable the listen subjects through client authorizations.

    hashtag
    Stream sources

    A stream defining Sources is a generalized replication mechanism and allows for sourcing data from one or more streams concurrently as well as allowing direct write/publish by clients. Essentially the source streams and client writes are aggregated into a single interleaved stream. Subject transformation and filtering allow for powerful data distribution architectures.

    hashtag
    Mirrors

    A mirror can source its messages from exactly one stream, and clients cannot directly write to the mirror. Although messages cannot be published to a mirror directly by clients, messages can be deleted on demand (beyond the retention policy), and consumers have all capabilities available on regular streams.

    For details see:

    hashtag
    AllowRollup

    If enabled, the AllowRollup stream option allows a published message to carry a Nats-Rollup header indicating that all prior messages should be purged. The scope of the purge is defined by the header value, either all or sub.

    The Nats-Rollup: all header will purge all prior messages in the stream. Whereas the sub value will purge all prior messages for a given subject.

    A common use case for rollup is for state snapshots, where the message being published has accumulated all the necessary state from the prior messages, relative to the stream or a particular subject.
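The purge semantics of the two header values can be modeled as follows (Python sketch; messages are represented as (subject, data, headers) tuples purely for illustration):

```python
def apply_rollup(messages, rollup_msg):
    """Apply Nats-Rollup semantics: 'all' purges every prior message in the
    stream; 'sub' purges prior messages on the rollup message's subject.
    The rollup message itself is appended and stays in the stream."""
    subject, _, headers = rollup_msg
    scope = headers.get("Nats-Rollup")
    if scope == "all":
        kept = []
    elif scope == "sub":
        kept = [m for m in messages if m[0] != subject]
    else:
        kept = list(messages)
    return kept + [rollup_msg]
```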

    hashtag
    RePublish

    If enabled, the RePublish stream option will result in the server re-publishing messages received into a stream automatically and immediately after a successful write, to a distinct destination subject.

    For high-scale needs where a dedicated consumer may currently add too much overhead, clients can establish a core NATS subscription to the destination subject and receive messages that were appended to the stream in real time.

    The fields for configuring republish include:

    • Source - An optional subject pattern which is a subset of the subjects bound to the stream. It defaults to all messages in the stream, e.g. >.

    • Destination - The destination subject messages will be re-published to. The source and destination must be a valid subject mapping.

    • HeadersOnly - If true, the message data will not be included in the re-published message, only an additional header Nats-Msg-Size indicating the size of the message in bytes.

    For each message that is republished, a set of headers are automatically added.

    Note: The reply subject is removed on republished messages, even if no-ack is set on the stream.

    hashtag
    SubjectTransform

    If configured, SubjectTransform applies a subject transform to matching messages received by the stream, rewriting the subject before the message is stored. The transform configuration specifies a Source and Destination field, following the rules of subject transforms.
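A simplified model of the transform (Python; supports only the `*` wildcard with `$n` back-references, as seen in the Nats-Stream-Source header example elsewhere in this document, not the full mapping syntax):

```python
def transform_subject(source, destination, subject):
    """Map a subject per a Source/Destination transform: wildcard tokens
    captured by Source are referenced as $1, $2, ... in Destination.
    Returns None when the subject does not match Source."""
    src, sub = source.split("."), subject.split(".")
    if len(src) != len(sub):
        return None
    captures = []
    for s_tok, tok in zip(src, sub):
        if s_tok == "*":
            captures.append(tok)
        elif s_tok != tok:
            return None
    out = []
    for tok in destination.split("."):
        if tok.startswith("$"):
            out.append(captures[int(tok[1:]) - 1])  # $1 is the first capture
        else:
            out.append(tok)
    return ".".join(out)
```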

    Replicas

    How many replicas to keep for each message in a clustered JetStream, maximum 5.

    2.2.0

    Yes

    MaxAge

    Maximum age of any message in the Stream, expressed in nanoseconds.

    2.2.0

    Yes

    MaxBytes

    Maximum number of bytes stored in the stream. Adheres to Discard Policy, removing oldest or refusing new messages if the Stream exceeds this size.

    2.2.0

    Yes

    MaxMsgs

    Maximum number of messages stored in the stream. Adheres to Discard Policy, removing oldest or refusing new messages if the Stream exceeds this number of messages.

    2.2.0

    Yes

    MaxMsgSize

    The largest message that will be accepted by the Stream. The size of a message is a sum of payload and headers.

    2.2.0

    Yes

    MaxConsumers

    Maximum number of Consumers that can be defined for a given Stream, -1 for unlimited.

    2.2.0

    No

    NoAck

    Default false. Disables acknowledging messages that are received by the Stream. This is mandatory when archiving messages which have a reply subject set, e.g. requests in a Request/Reply exchange. By default, JetStream will acknowledge each message with an empty reply on the reply subject.

    2.2.0

    Yes

    Declares the retention policy for the stream.

    2.2.0

    No

    The behavior for discarding messages when any of the stream's limits have been reached.

    2.2.0

    Yes

    DuplicateWindow

    The window within which to track duplicate messages, expressed in nanoseconds.

    2.2.0

    Yes

    Used to declare where the stream should be placed via tags and/or an explicit cluster name.

    2.2.0

    Yes

    If set, indicates this stream is a mirror of another stream.

    2.2.0

    Yes (since 2.12.0)

    If defined, declares one or more streams this stream will source messages from.

    2.2.0

    Yes

    MaxMsgsPerSubject

    Limits maximum number of messages in the stream to retain per subject.

    2.3.0

    Yes

    Description

    A verbose description of the stream.

    2.3.3

    Yes

    Sealed

    Sealed streams do not allow messages to be deleted via limits or the API, and sealed streams cannot be unsealed via a configuration update. This can only be set on an already created stream via the Update API.

    2.6.2

    Yes (once)

    DenyDelete

    Restricts the ability to delete messages from a stream via the API.

    2.6.2

    No

    DenyPurge

    Restricts the ability to purge messages from a stream via the API.

    2.6.2

    No

    Allows the use of the Nats-Rollup header to replace all contents of a stream, or subject in a stream, with a single new message.

    2.6.2

    Yes

    If set, messages stored to the stream will be immediately republished to the configured subject.

    2.8.3

    Yes

    AllowDirect

    If true, and the stream has more than one replica, each replica will respond to direct get requests for individual messages, not only the leader.

    2.9.0

    Yes

    MirrorDirect

    If true, and the stream is a mirror, the mirror will participate in serving direct get requests for individual messages from the origin stream.

    2.9.0

    Yes

    DiscardNewPerSubject

    If true, applies discard-new semantics on a per-subject basis. Requires DiscardPolicy to be DiscardNew and MaxMsgsPerSubject to be set.

    2.9.0

    Yes

    Metadata

    A set of application-defined key-value pairs for associating metadata on the stream.

    2.10.0

    Yes

    Compression

    If file-based and a compression algorithm is specified, the stream data will be compressed on disk. Valid options are nothing (empty string) or s2 for Snappy compression.

    2.10.0

    Yes

    FirstSeq

    If specified, a new stream will be created with its initial sequence set to this value.

    2.10.0

    No

    Applies a subject transform (to matching messages) before storing the message.

    2.10.0

    Yes

    ConsumerLimits

    Sets default limits for consumers created for a stream. Those can be overridden per consumer.

    2.10.0

    Yes

    AllowMsgTTL

    If set, allows header initiated per-message TTLs, instead of relying solely on MaxAge.

    2.11.0

    No (can only enable)

    SubjectDeleteMarkerTTL

    If set, a subject delete marker will be placed after the last message of a subject ages out. This defines the TTL of the delete marker that's left behind.

    2.11.0

    Yes

    AllowAtomicPublish

    If set, allows atomically writing a batch of N messages into the stream.

    2.12.0

    Yes

    AllowMsgCounter

    If set, the stream will function as a counter stream, hosting distributed counter CRDTs.

    2.12.0

    No

    AllowMsgSchedules

    If set, allows message scheduling in the stream.

    2.12.0

    No (can only enable)


    Name

    Identifies the stream and has to be unique within the JetStream account. Names cannot contain whitespace, ., *, >, path separators (forward or backward slash), or non-printable characters.

    2.2.0

    No

    Storage

    The storage type for stream data.

    2.2.0

    No

    Subjects

    A list of subjects to bind. Wildcards are supported. Cannot be set for mirror streams.

    2.2.0

    Yes


    Headers

    Message headers are used in a variety of JetStream contexts, such as de-duplication, auto-purging of messages, metadata from republished messages, and more.

    Nats- is a reserved prefix namespace; please use a different prefix for your own headers. This list may not be complete: additional headers may be used for API-internal messages or messages used for monitoring and control.

    hashtag
    Publish

    Headers that can be set by a client when a message is being published. These headers are recognized by the server.

    Name
    Description
    Example
    Version

    hashtag
    RePublish or direct get

    When messages are re-published from a stream (must be configured in the stream settings) or retrieved from a stream with a direct get operation, these headers are set.

    Do not set these headers on client published messages.

    Name
    Description
    Example
    Version

    hashtag
    Sources

    Headers that are implicitly added to messages sourced from other streams.

    The format of the header content may change in the future. Please parse conservatively and assume that additional fields may be added or that older nats-server versions may have fewer fields.

    Name
    Description
    Example
    Version

    hashtag
    Tracing

    When tracing is activated, every subsystem that touches a message will produce Trace Events. These events are aggregated per server and published to a destination subject.

    Note that two variants exist. traceparent as per the trace context standard and ad hoc tracing through Nats-Trace-Dest.

    Introduced in version 2.11.

    Name
    Description
    Example
    Version

    hashtag
    Scheduler

    Message scheduling in streams. Needs to be enabled on the stream with the "allow schedules" flag.

    Introduced in version 2.11.

    Name
    Description
    Example
    Version

    The final scheduled message will contain the following headers.

    Name
    Description
    Example
    Version

    hashtag
    Batch send

    Introduced in version 2.12, with optimizations in 2.14.

    Atomic batch sends will use the following headers. Batches are atomic on send only, but a client may reconstruct a batch using the headers below.

    Name
    Description
    Example
    Version

    hashtag
    Internal

    Headers used internally by API clients and the server. These should not be set by users.

    This list is not exhaustive. Headers used in error replies may not be documented.

    Name
    Description
    Example
    Version


    hashtag
    Mirror

    Headers used for internal flow-control messages for a mirror.

    This is for information only and may change without notice.

    Name
    Description
    Example
    Version

    Nats-Expected-Last-Sequence

    Used to apply optimistic concurrency control at the stream-level. The value is the last expected sequence and the server will reject a publish if the current sequence does not match.

    328

    2.2.0
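The check can be sketched as (Python, illustrative; `publish_with_occ` is a hypothetical helper returning the new last sequence):

```python
def publish_with_occ(last_seq, expected_last):
    """Accept a publish only when the stream's current last sequence matches
    the Nats-Expected-Last-Sequence value; None means no header was set."""
    if expected_last is not None and last_seq != expected_last:
        raise ValueError("wrong last sequence: %d" % last_seq)
    return last_seq + 1
```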

    Nats-Expected-Last-Subject-Sequence

    Used to apply optimistic concurrency control at the subject-level. The value is the last expected sequence and the server will reject a publish if the current sequence does not match for the message's subject.

    38

    2.3.1

    Nats-Expected-Last-Subject-Sequence-Subject

    A subject which may include wildcards. Used with Nats-Expected-Last-Subject-Sequence. Server will enforce last sequence against the given subject rather than the one being published.

    events.orders.1.>

    2.11.0

    Nats-Rollup

    Used to apply a purge of all prior messages in a stream or at the subject level. The rollup message will stay in the stream. Requires allow rollups to be set on the stream.

    all purges the full stream; sub purges the subject on which this message was sent. Wildcard subjects are not allowed and will result in undefined behavior.

    2.6.2

    Nats-TTL

    Used to set a per message TTL. Requires the per message ttl flag to be set on the stream.

    1h, 10s (go duration string format)

    2.11

    Nats-Last-Sequence

    The last sequence of the message having the same subject, otherwise zero if this is the first message for the subject.

    190

    2.8.3

    Nats-Time-Stamp

    The original timestamp of the message.

    2023-08-23T19:53:05.762416Z

    2.10.0

    Nats-Num-Pending

    Number of messages pending in the multi/batched get response. For details see:

    5

    2.10.0

    Nats-UpTo-Sequence

    On the last messages of multi/batched get response. The up-to-seq value of the original request. Helps the client to continue incomplete batch requests. For details see:

    2.11.0

    Accept-Encoding

    Optional. Enables compression of the payload of the trace messages.

    gzip, snappy

    2.11

    Nats-Trace-Hop

    Internal. Do not set. Set by the server to count hops.

    <hop count>

    2.11

    Nats-Trace-Origin-Account

    Internal. Do not set. Set by the server when an account boundary is crossed.

    <account name>

    2.11

    Nats-Schedule-Source

    Optional. Instructs the schedule to read the last message on the given subject and publish it. If the subject is empty, nothing is published; wildcards are not supported.

    orders.customer_acme

    2.11

    Nats-Schedule-Time-Zone

    Optional. The time zone used for the Cron schedule. If not specified, the Cron schedule runs in UTC. May only be used with Cron schedules.

    CET

    2.11

    Nats-Incr

    Used in KV stores to atomically increment a counter. Any valid integer (including 0) prefixed with a sign. See ADR-49.

    +1, +42, -1, +0

    2.12
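Note that the sign prefix is mandatory: an unsigned "1" is not a valid Nats-Incr value. A minimal, illustrative validity check:

```python
import re

# Nats-Incr values must be a signed integer, e.g. "+1", "-1", or "+0";
# an unsigned "1" is not valid. Minimal check for illustration only.
def valid_incr(value: str) -> bool:
    return re.fullmatch(r"[+-]\d+", value) is not None

for candidate in ["+1", "+42", "-1", "+0", "1"]:
    print(candidate, valid_incr(candidate))
```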

    Nats-Counter-Sources

    Tracks Nats-Incr values when messages are sourced. For details see ADR-49.

    2.12

    Nats-Response-Type

    2.6.4

    Nats-Msg-Id

    Client-defined unique identifier for a message that will be used by the server to apply de-duplication within the configured Duplicate Window.

    9f01ccf0-8c34-4789-8688-231a2538a98b

    2.2.0
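The deduplication behavior can be sketched as follows. The in-memory set below is a simplification of the server's Duplicate Window for illustration only; the real window also expires entries after the configured duration.

```python
import uuid

# Simplified sketch of server-side deduplication: a publish whose
# Nats-Msg-Id was already seen inside the Duplicate Window is acked
# but not stored again. (Real windows also expire entries over time.)
seen = set()

def accept(msg_id: str) -> bool:
    if msg_id in seen:
        return False  # duplicate: dropped by the server
    seen.add(msg_id)
    return True

msg_id = str(uuid.uuid4())
print(accept(msg_id))  # first delivery is stored
print(accept(msg_id))  # a retry with the same id is deduplicated
```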

    Nats-Expected-Stream

    Used to assert the published message is received by some expected stream.

    my-stream

    2.2.0

    Nats-Expected-Last-Msg-Id

    Used to apply optimistic concurrency control at the stream level. The value is the last expected Nats-Msg-Id; the server will reject a publish if the current ID does not match.

    9f01ccf0-8c34-4789-8688-231a2538a98b

    Nats-Stream

    Name of the stream the message was republished from.

    Nats-Stream: my-stream

    2.8.3

    Nats-Subject

    The original subject of the message.

    events.mouse_clicked

    2.8.3

    Nats-Sequence

    The original sequence of the message.

    193

    Nats-Stream-Source

    Contains space-delimited fields: the origin stream name (disambiguated with a domain hash if sourced across domains), the original sequence number, the list of subject filters, the list of destination transforms, and the original subject.

    ORDERS:vSF0ECo6 17 foo.* bar.$1 foo.abc

    2.2.0
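The example value from this row can be split back into its parts. The helper below is illustrative and assumes a single subject filter and a single transform, as in the example; real values may carry lists.

```python
# Parse the space-delimited Nats-Stream-Source value into named parts.
# Illustrative helper: assumes one subject filter and one transform,
# as in the table's example value.
def parse_stream_source(value: str) -> dict:
    fields = value.split(" ")
    return {
        "stream": fields[0],
        "sequence": int(fields[1]),
        "subject_filter": fields[2],
        "destination_transform": fields[3],
        "original_subject": fields[4],
    }

info = parse_stream_source("ORDERS:vSF0ECo6 17 foo.* bar.$1 foo.abc")
print(info["stream"], info["sequence"], info["original_subject"])
```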

    traceparent

    Triggers tracing as per the W3C Trace Context standard (https://www.w3.org/TR/trace-context/). Requires the msg-trace section to be configured at the account level.

    N/A

    2.11

    Nats-Trace-Dest

    The subject that will receive the Trace messages

    trace.receiver.all

    2.11

    Nats-Trace-Only

    Optional. Defaults to false. Set to true to skip message delivery. If true, only traces are produced; the message is not sent to subscribing clients or stored in JetStream.

    true

    Nats-Schedule

    When the message will be sent to the target subject. Several formats are supported: a timestamp for sending a message once, crontab format for repeated messages, and simple aliases for common crontab schedules.

    0 30 14 * * *, @hourly, daily, @at 2009-11-10T23:00:00Z (RFC3339 format)

    2.11
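The "@at <RFC3339>" form schedules a single delivery, while cron strings and aliases repeat. A minimal, illustrative classifier that also validates the one-shot timestamp:

```python
from datetime import datetime

# Distinguish one-shot "@at <RFC3339>" schedules from repeating cron
# expressions or aliases. Illustrative only; the server does the real
# schedule parsing.
def schedule_kind(value: str) -> str:
    if value.startswith("@at "):
        # one-shot: validate the RFC3339 timestamp
        datetime.fromisoformat(value[4:].replace("Z", "+00:00"))
        return "once"
    return "repeating"

print(schedule_kind("@at 2009-11-10T23:00:00Z"))
print(schedule_kind("@hourly"))
print(schedule_kind("0 30 14 * * *"))
```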

    Nats-Schedule-TTL

    Optional. The TTL to be set on the final message on the target subject.

    1h, 10s (valid go duration string)

    2.11

    Nats-Schedule-Target

    The target subject the final message will be sent to. Note that this must be distinct from the scheduling subject the message arrived on.

    orders

    Nats-Scheduler

    The subject holding the schedule

    orders.schedule.1234

    2.11

    Nats-Schedule-Next

    Timestamp of the next invocation for cron-scheduled messages, or of the purge for delayed messages.

    2009-11-10T23:00:00Z

    2.11

    Nats-TTL

    The TTL value when Nats-Schedule-TTL was set

    1h, 10s

    Nats-Batch-Id

    Unique identifier for the batch.

    <uuid> (<=64 characters)

    2.12

    Nats-Batch-Sequence

    Monotonically increasing sequence within the batch, starting at 1.

    1, 2

    2.12

    Nats-Batch-Commit

    Only set on the last message. 1 commits the batch including this message; eob commits the batch excluding this message. Any other value terminates the batch.

    1, eob
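Putting the three batch headers together, here is an illustrative sketch of a three-message batch: the same Nats-Batch-Id throughout, Nats-Batch-Sequence counting up from 1, and Nats-Batch-Commit only on the final message.

```python
import uuid

# Sketch of the headers for a three-message batch (illustrative):
# one shared batch id, sequence starting at 1, commit on the last.
batch_id = str(uuid.uuid4())
payloads = [b"a", b"b", b"c"]
messages = []
for i, payload in enumerate(payloads, start=1):
    headers = {"Nats-Batch-Id": batch_id, "Nats-Batch-Sequence": str(i)}
    if i == len(payloads):
        headers["Nats-Batch-Commit"] = "1"  # commit including this message
    messages.append((headers, payload))

print(messages[-1][0]["Nats-Batch-Sequence"])
```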

    Nats-Required-Api-Level

    Optional. The required API level for the JetStream request. Servers from version 2.11 will return an error if it is larger than the supported API level.

    2 (Integer value)

    2.11

    Nats-Request-Info

    When messages cross account boundaries, a header with origin information (account, user, etc.) may be added.

    2.2.0

    Nats-Marker-Reason

    When messages are removed from a KV bucket where subject delete markers are supported, a delete marker is placed and notifications are sent to interested watchers. The message payload is empty and the removal reason is indicated through this header. See ADR-48.

    MaxAge, Remove, Purge

    Nats-Last-Consumer

    Set by the server on idle heartbeat status messages; contains the last delivered consumer sequence.

    2.2.1

    Nats-Last-Stream

    Set by the server on idle heartbeat status messages; contains the last delivered stream sequence.

    2.2.1

    Nats-Consumer-Stalled

    Set by the server on heartbeat messages when a push consumer's flow control is stalled; the value is a subject the client should respond on to resume delivery.
