Job Outputs

Every job writes to exactly one output. Outputs stream the event payloads that actions produced, optionally batching or wrapping them before delivery. The table below summarizes the built-in connectors; follow the links for full DSL options.

Supported outputs

| Output | Delivery style | Ideal for | Notes |
| --- | --- | --- | --- |
| azure-blob | Batched upload | Landing data in Azure storage. | Supports append or replace strategies and server-side encryption settings. |
| discard | Null sink | Measuring upstream performance without delivery. | Useful for load testing actions. |
| file | Streaming | Writing to local files on the worker. | Combine with volume mounts for on-prem delivery. |
| file-store | Batched upload | Managed FileStore buckets. | Generates stable object names and handles deduplication metadata. |
| gcs | Batched upload | Google Cloud Storage targets. | Mirrors the batching semantics of s3. |
| http-get | Request/response | Triggering downstream HTTP endpoints that expect GETs. | Rarely used; most jobs prefer http-post. |
| http-post | Batched POST | REST APIs and webhooks. | Configure headers, authentication, and templated bodies. |
| message | Control-plane | Broadcasting structured messages. | Downstream jobs consume them via the internal-messages input. |
| print | Streaming | Writing to STDOUT/STDERR. | Handy for development and demos. |
| s3 | Batched upload | Amazon S3 sinks. | Supports server-side encryption, storage classes, and multi-part uploads. |
| splunk-hec | Batched POST | Sending events to Splunk HTTP Event Collector. | Automatically wraps batches according to HEC expectations. |
| worker-channel | Streaming | Chaining jobs in-memory. | Feeds the worker-channel input of downstream jobs. |

If you need multiple destinations, emit to a worker channel and fan out with additional jobs.

Batching strategies

Outputs send events individually unless you enable batching. Choose a mode:

  • fixed: Flush after fixed_size events or after the optional timeout expires. This is ideal for APIs that accept arrays or bulk uploads.
  • document: Preserve the grouping generated by the input (for example, all records pulled from one file or API response). This mode keeps document metadata intact for downstream consumers.

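The fixed mode described above can be sketched conceptually as follows. This is an illustrative Python model, not the product's implementation; the class and method names (`FixedBatcher`, `add`, `tick`, `flush`) are invented for the example:

```python
import time

class FixedBatcher:
    """Sketch of the `fixed` batching mode: flush after `fixed_size`
    events, or when the optional timeout expires with a partial batch."""

    def __init__(self, fixed_size, timeout, deliver):
        self.fixed_size = fixed_size
        self.timeout = timeout
        self.deliver = deliver          # callable that sends one batch
        self.buffer = []
        self.last_flush = time.monotonic()

    def add(self, event):
        self.buffer.append(event)
        if len(self.buffer) >= self.fixed_size:
            self.flush()

    def tick(self):
        # Called periodically; flushes a partial batch once the timeout expires.
        if self.buffer and time.monotonic() - self.last_flush >= self.timeout:
            self.flush()

    def flush(self):
        if self.buffer:
            self.deliver(list(self.buffer))
            self.buffer.clear()
        self.last_flush = time.monotonic()
```

Feeding seven events through a batcher with `fixed_size=3` would deliver two full batches of three, with the final event flushed on timeout or shutdown.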
Set Header and Footer strings to wrap each batch. With runtime variable expansion you can insert counters, timestamps, or job metadata such as @{job} and ${stat|_BATCH_NUMBER}.

Enable Wrap as JSON when the receiver expects a valid JSON array. The runtime adds brackets and commas automatically, so you can focus on formatting headers and footers.
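Putting headers, footers, and JSON wrapping together, a batch might be rendered roughly like this. The helper below is a hypothetical sketch for illustration; `render_batch` and its parameters are not the real DSL options:

```python
import json

def render_batch(events, header="", footer="", wrap_as_json=False):
    """Sketch of batch framing: optional header/footer strings, plus the
    `Wrap as JSON` behaviour where the runtime adds brackets and commas."""
    if wrap_as_json:
        body = "[" + ",".join(json.dumps(e) for e in events) + "]"
    else:
        body = "\n".join(json.dumps(e) for e in events)
    return header + body + footer
```

With `wrap_as_json=True`, two events render as a single valid JSON array sandwiched between the header and footer strings.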

Reliability and retries

Networked outputs (http-post, s3, azure-blob, gcs, splunk-hec) expose retry settings. Configure maximum attempts, exponential backoff, and optional dead-letter behaviour where supported. Pair these with monitoring alerts so operators know when downstream systems are slow or rejecting payloads.
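The retry behaviour can be modelled as exponential backoff with an optional dead-letter handler, sketched below under the assumption that a failed delivery raises an exception. The function and parameter names (`deliver_with_retries`, `base_delay`, `dead_letter`) are illustrative, not product settings:

```python
import random
import time

def deliver_with_retries(send, batch, max_attempts=5, base_delay=1.0,
                         dead_letter=None):
    """Sketch of networked-output retries: exponential backoff between
    attempts, with an optional dead-letter handler for batches that
    exhaust their attempts. Returns True on success, False otherwise."""
    for attempt in range(1, max_attempts + 1):
        try:
            send(batch)
            return True
        except Exception:
            if attempt == max_attempts:
                if dead_letter is not None:
                    dead_letter(batch)   # hand off instead of dropping
                return False
            # Backoff doubles each attempt (1s, 2s, 4s, ...) plus jitter
            # so many workers don't retry in lockstep.
            time.sleep(base_delay * 2 ** (attempt - 1) + random.random() * 0.1)
    return False
```

A transient failure that clears after two attempts succeeds on the third; a persistently failing endpoint hands the batch to the dead-letter handler after the final attempt.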

Testing outputs

Before promoting a job, run it in staging with the print output to inspect the exact payloads and any batch headers. Once satisfied, swap the production connector back in and stage the job. Keep the Deploying jobs guide handy for promotion workflows.

For exhaustive field documentation, see the DSL output reference.