Chaining Jobs With Channels (Advanced)
This end-to-end tutorial builds a four-job pipeline that fans events out to two parallel jobs and then fans them back in for aggregation. It highlights how to design channel names, deploy jobs in the correct order, and verify flow end to end.
Scenario overview
We implement the following flow:
- Job A ingests events from an HTTP input and publishes each record to channel `alpha`.
- Job B (enrichment) listens on channel `alpha` and writes enriched events to channel `beta`.
- Job C (alerting) also listens on channel `alpha` and writes to channel `gamma`.
- Job D consumes both `beta` and `gamma` and emits final records to an output (for example Elasticsearch).
Channels let you decouple workloads while keeping jobs simple—each job still has a single input and output, but the pipeline scales horizontally by adding workers subscribing to the same channel.
Prerequisites
- Server and at least two workers online (built-in worker plus one external worker is sufficient).
- Admin access to the UI and CLI to create API keys for external workers.
- Familiarity with the visual editor and staging/deployment workflow (see the build overview).
Step 1 – design the channel contract
Before building jobs, document the schema each channel carries. This avoids downstream validation errors.
| Channel | Producer | Consumers | Fields |
|---|---|---|---|
| alpha | Job A | Jobs B, C | event_id, ts, raw_payload |
| beta | Job B | Job D | event_id, ts, geo_country, enrichment_score |
| gamma | Job C | Job D | event_id, ts, alert_level |
Store this table in your runbook or a shared document so future contributors know which fields are available.
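If you want the contract to be machine-readable as well as human-readable, you can keep it as a small YAML file next to the runbook. The file below is purely a documentation convention of ours, not something the product consumes; job names match the jobs built in Steps 2–4:

```yaml
# channel-contract.yaml - team documentation only; the product does not
# read this file. It restates the table above in a diffable format.
channels:
  alpha:
    producer: channel-producer            # Job A
    consumers: [channel-enrich, channel-alert]
    fields: [event_id, ts, raw_payload]
  beta:
    producer: channel-enrich              # Job B
    consumers: [channel-aggregate]
    fields: [event_id, ts, geo_country, enrichment_score]
  gamma:
    producer: channel-alert               # Job C
    consumers: [channel-aggregate]
    fields: [event_id, ts, alert_level]
```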
Step 2 – build Job A (producer)
- Create a new job named channel-producer.
- Choose an HTTP input (or another source) and configure authentication.
- Add any necessary actions (for example, parse JSON).
- Set the output to Worker Channel with channel ID `alpha`.
- Save the job, close the editor, and stage it.
You can verify the payload shape by using the Run Output tab—confirm the fields match the alpha contract.
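For orientation, here is a rough YAML sketch of Job A's endpoints. The http stanza is a placeholder for whatever source you configured, and the worker-channel output stanza is assumed to mirror the input syntax shown in Step 4:

```yaml
# Sketch of Job A (channel-producer); not copy-paste configuration.
# The http settings are placeholders, and the worker-channel output
# stanza is assumed to mirror the documented input syntax.
input:
  http:
    # listener, path, and authentication settings go here
output:
  worker-channel:
    worker-channel-name: alpha
```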
Step 3 – build Jobs B and C (parallel consumers)
Create two jobs based on the channel contract (a combined YAML sketch follows the list):
- `channel-enrich`
  - Input: Worker Channel `alpha`.
  - Actions: add geographic enrichment, compute scores, or call external APIs.
  - Output: Worker Channel `beta`.
- `channel-alert`
  - Input: Worker Channel `alpha`.
  - Actions: filter on severity, map to alert levels.
  - Output: Worker Channel `gamma`.
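Taken together, the two consumers differ only in their actions and output channel. This is a sketch with the actions omitted; both worker-channel stanzas assume the syntax of the Step 4 sample, with the output stanza mirroring the input:

```yaml
# channel-enrich (Job B) - sketch only; enrichment actions omitted
input:
  worker-channel:
    worker-channel-name: alpha
output:
  worker-channel:
    worker-channel-name: beta
---
# channel-alert (Job C) - sketch only; filter/map actions omitted
input:
  worker-channel:
    worker-channel-name: alpha
output:
  worker-channel:
    worker-channel-name: gamma
```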
Use the Preview tab for each action to ensure Job B and Job C emit the expected fields. When both jobs stage successfully, deploy them to workers that have capacity for the new workload.
Step 4 – build Job D (fan-in)
Job D combines the outputs from Jobs B and C. Create a job named channel-aggregate with a Worker Channel input set to beta. Worker-channel inputs accept one channel per job definition; if you need to incorporate events from both beta and gamma, publish them into a shared channel, for example by having Job C emit to beta or by adding a lightweight republisher job (sketched below).
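A republisher is just a pass-through job with no actions. A minimal sketch, again assuming the worker-channel output stanza mirrors the documented input syntax:

```yaml
# Hypothetical republisher job: reads gamma and re-emits to beta so that
# channel-aggregate sees both streams on one channel. The worker-channel
# output stanza is assumed, not documented syntax.
input:
  worker-channel:
    worker-channel-name: gamma
output:
  worker-channel:
    worker-channel-name: beta
```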
Finish configuration (a fuller end-to-end sketch follows the input snippet below):
- Use a Join action (or Merge Variables) to align events by `event_id`.
- Add business logic or scoring based on enriched and alert data.
- Set the output to your destination—e.g., Elasticsearch, Splunk, or S3.
Sample YAML for the worker-channel input:

```yaml
input:
  worker-channel:
    worker-channel-name: beta
```

Stage Job D but wait to deploy until Jobs A–C are running to avoid empty channel warnings.
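Putting the step together, Job D might look like the following sketch. Only the worker-channel input stanza follows the documented syntax above; the join action and Elasticsearch output stanzas are invented here for illustration, so check your product's actual action and output syntax before relying on them:

```yaml
# Hypothetical sketch of Job D (channel-aggregate). The actions and
# output keys below are illustrative inventions, not documented syntax.
input:
  worker-channel:
    worker-channel-name: beta       # beta carries both streams (see above)
actions:
  - join:                           # align enrichment and alert records
      key: event_id
output:
  elasticsearch:
    index: pipeline-final           # hypothetical destination settings
```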
Step 5 – deploy in order
- Deploy channel-producer and confirm the worker shows the job as running.
- Deploy channel-enrich and channel-alert; watch worker logs for channel subscription messages.
- Deploy channel-aggregate once beta/gamma show new events.
- Trigger sample traffic (curl, replay file, etc.) and verify the final output destination receives enriched events.
Monitor the Issues panel and worker logs throughout deployment. Channel mismatch errors typically indicate schema drift between jobs.
Troubleshooting
- Job D never receives events: confirm Jobs B and C publish to the exact channel IDs listed in the contract. Channel IDs are case sensitive.
- Validation errors in Jobs B/C: use Preview for the last action in Job A to confirm the payload contains the fields referenced downstream.
- Worker backpressure: add an additional worker subscribed to `alpha` and redeploy Jobs B and C across the fleet.
- Lost events after restart: channels are in-memory; for guaranteed delivery, persist to a queue or storage service before fan-out (see the sketch after this list).
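One shape the durable variant can take is to write to persistent storage first and feed the channel from a small replay job. Everything here except the worker-channel stanza is hypothetical; substitute whatever queue or storage integrations your deployment actually provides:

```yaml
# Hypothetical durable hand-off ahead of the fan-out. The kafka stanzas
# are placeholders for a real queue/storage integration.
# Job A's output writes to the durable queue instead of the channel:
output:
  kafka:
    topic: events-durable
---
# A small replay job then feeds channel alpha from the queue:
input:
  kafka:
    topic: events-durable
output:
  worker-channel:
    worker-channel-name: alpha
```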
Validate and monitor
- Use Run & Trace on channel-producer to capture a live payload and confirm the channel contract before deployment.
- Stage and deploy Jobs A–D in order, then confirm they remain Running with expected event throughput under Operate > Job status.
- Generate synthetic traffic (curl, replay, or QA fixtures) and monitor worker logs for channel subscription updates and aggregate outputs.
After validation, you can:
- Add automated tests or synthetic traffic to continuously verify the multi-job pipeline.
- Extend the pattern with dedicated workers per channel to isolate resource-intensive actions.
- Pair these checks with the reference/troubleshooting guides when new failure modes appear.
Fold the channel topology into Operate daily operations and configure health alerts using Operate monitoring.