Job Outputs
Every job writes to exactly one output. Outputs stream the event payloads that actions produced, optionally batching or wrapping them before delivery. The table below summarizes the built-in connectors; follow the links for full DSL options.
Supported outputs
| Output | Delivery style | Ideal for | Notes |
|---|---|---|---|
| azure-blob | Batched upload | Landing data in Azure storage. | Supports append or replace strategies and server-side encryption settings. |
| discard | Null sink | Measuring upstream performance without delivery. | Useful for load testing actions. |
| file | Streaming | Writing to local files on the worker. | Combine with volume mounts for on-prem delivery. |
| file-store | Batched upload | Managed FileStore buckets. | Generates stable object names and handles deduplication metadata. |
| gcs | Batched upload | Google Cloud Storage targets. | Mirrors the batching semantics of S3. |
| http-get | Request/response | Triggering downstream HTTP endpoints that expect GETs. | Rarely used; most jobs prefer http-post. |
| http-post | Batched POST | REST APIs and webhooks. | Configure headers, authentication, and templated bodies. |
| message | Control-plane | Broadcasting structured messages. | Downstream jobs consume them via the internal-messages input. |
| print | Streaming | Writing to STDOUT/STDERR. | Handy for development and demos. |
| s3 | Batched upload | Amazon S3 sinks. | Supports server-side encryption, storage classes, and multi-part uploads. |
| splunk-hec | Batched POST | Sending events to Splunk HTTP Event Collector. | Automatically wraps batches according to HEC expectations. |
| worker-channel | Streaming | Chaining jobs in-memory. | Feeds the worker-channel input of downstream jobs. |
If you need multiple destinations, emit to a worker channel and fan out with additional jobs.
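The worker-channel plumbing is internal to the runtime, but the pattern is easy to picture. Below is a minimal Python sketch of the same fan-out idea, under the assumption that each downstream job gets its own copy of every event: a producer broadcasts to one queue per consumer, the way the worker channel feeds each subscribed job. The destination functions and names are illustrative, not part of the DSL.

```python
import queue
import threading

# Hypothetical stand-ins for two real output connectors.
def send_to_s3(event):
    print(f"s3 <- {event}")

def send_to_splunk(event):
    print(f"splunk-hec <- {event}")

def downstream_job(inbox, deliver):
    """One downstream job draining its own copy of the channel."""
    while (event := inbox.get()) is not None:  # None signals channel closed
        deliver(event)

# One queue per subscriber: every event is duplicated to all of them,
# mirroring how a worker channel feeds each downstream job.
subscribers = [queue.Queue(), queue.Queue()]
jobs = [
    threading.Thread(target=downstream_job, args=(subscribers[0], send_to_s3)),
    threading.Thread(target=downstream_job, args=(subscribers[1], send_to_splunk)),
]
for job in jobs:
    job.start()

for event in ({"id": 1}, {"id": 2}):
    for inbox in subscribers:
        inbox.put(event)

for inbox in subscribers:
    inbox.put(None)  # close the channel
for job in jobs:
    job.join()
```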
Batching strategies
Outputs send events individually unless you enable batching. Choose a mode:
- fixed: Flush after fixed_size events or after the optional timeout expires. Ideal for APIs that accept arrays or bulk uploads (see the sketch after this list).
- document: Preserve the grouping generated by the input (for example, all records pulled from one file or API response). This mode keeps document metadata intact for downstream consumers.
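As a mental model for the fixed mode, here is a short Python sketch. FixedBatcher and its parameter names are illustrative, not DSL fields, and it only checks the timeout when a new event arrives, whereas the real runtime flushes timed-out batches on its own schedule.

```python
import time

class FixedBatcher:
    """Illustrative model of 'fixed' batching: flush after fixed_size
    events, or when the optional timeout has elapsed since the oldest
    buffered event arrived."""

    def __init__(self, fixed_size, timeout=None, flush=print):
        self.fixed_size = fixed_size
        self.timeout = timeout
        self.flush = flush      # delivery callback, standing in for the output
        self.buffer = []
        self.first_at = None    # arrival time of the oldest buffered event

    def add(self, event):
        if not self.buffer:
            self.first_at = time.monotonic()
        self.buffer.append(event)
        if len(self.buffer) >= self.fixed_size or self._timed_out():
            self.close()

    def _timed_out(self):
        return (self.timeout is not None
                and time.monotonic() - self.first_at >= self.timeout)

    def close(self):
        """Flush whatever is buffered, e.g. at end of stream."""
        if self.buffer:
            self.flush(self.buffer)
            self.buffer, self.first_at = [], None

batcher = FixedBatcher(fixed_size=3, flush=lambda b: print(f"flush: {b}"))
for i in range(7):
    batcher.add({"n": i})
batcher.close()  # drain the final partial batch
# flush: [{'n': 0}, {'n': 1}, {'n': 2}]
# flush: [{'n': 3}, {'n': 4}, {'n': 5}]
# flush: [{'n': 6}]
```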
Set Header and Footer strings to wrap each batch. With runtime variable expansion you can insert counters, timestamps, or job metadata such as @{job} and ${stat|_BATCH_NUMBER}.
Enable Wrap as JSON when the receiver expects a valid JSON array. The runtime adds brackets and commas automatically, so you can focus on formatting headers and footers.
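Concretely, rendering one batch looks something like the following Python sketch (render_batch is a hypothetical helper, and variable expansion such as @{job} is omitted). It shows why Wrap as JSON removes the need to hand-write brackets and commas in your header and footer.

```python
import json

def render_batch(events, header="", footer="", wrap_as_json=False):
    """Illustrative rendering of one batch: header and footer strings
    around the payload, with brackets and commas added automatically
    when wrap_as_json is enabled."""
    if wrap_as_json:
        body = "[" + ",".join(json.dumps(e) for e in events) + "]"
    else:
        body = "\n".join(json.dumps(e) for e in events)
    return header + body + footer

batch = [{"id": 1}, {"id": 2}]
print(render_batch(batch, header='{"records":', footer="}", wrap_as_json=True))
# -> {"records":[{"id": 1},{"id": 2}]}
```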
Reliability and retries
Networked outputs (http-post, s3, azure-blob, gcs, splunk-hec) expose retry settings. Configure maximum attempts, exponential backoff, and optional dead-letter behavior where supported. Pair these with monitoring alerts so operators know when downstream systems are slow or rejecting payloads.
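The exact retry fields vary by connector, but the behavior they describe is standard capped exponential backoff. A minimal Python sketch, with illustrative names (send_with_retries, dead_letter, and the parameter names are not actual connector settings):

```python
import random
import time

def send_with_retries(deliver, payload, max_attempts=5, base_delay=0.5,
                      max_delay=30.0, dead_letter=None):
    """Retry a delivery callable with capped exponential backoff plus
    jitter; hand the payload to an optional dead-letter hook once the
    attempt budget is exhausted."""
    for attempt in range(1, max_attempts + 1):
        try:
            return deliver(payload)
        except Exception as exc:
            if attempt == max_attempts:
                if dead_letter is not None:
                    dead_letter(payload, exc)
                    return None
                raise
            delay = min(base_delay * 2 ** (attempt - 1), max_delay)
            time.sleep(delay + random.uniform(0, delay / 2))  # jitter

attempts = 0
def flaky_post(payload):
    global attempts
    attempts += 1
    if attempts < 3:
        raise ConnectionError("downstream returned 503")
    return "accepted"

print(send_with_retries(flaky_post, {"id": 1}))  # accepted, on the third try
```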
Testing outputs
Before promoting a job, run it in staging with the print output to inspect the exact payload and headers. Once satisfied, swap the production connector back in and stage the job. Keep the Deploying jobs guide handy for promotion workflows.
For exhaustive field documentation, see the DSL output reference.