How to receive webhooks in Shell (bash): minimal receiver → production-ready processing
Start with a minimal “native” receiver, but don’t stop there.
In production, reliable webhook handling means verification, retries, idempotency, and backpressure — which is where Hooque simplifies everything.
Building for a specific provider? Browse provider webhook APIs.
TL;DR
- Treat “receive webhooks in Shell (bash)” as an ops problem, not just a route handler.
- Verify the request before parsing/side effects (use a verifySignature(...) stub, then implement provider verification).
- Return 2xx quickly; move work to a worker/queue to avoid timeouts and retries.
- Assume retries and design idempotency (dedupe by event id + unique constraints).
- Log + store raw payloads for replayable debugging.
- If you need one workflow across many providers, centralize ingest + standardize consumption.
Deep dives: security, retries, queue migration.
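The dedupe idea from the TL;DR can be sketched in shell. This is a minimal sketch, assuming a `SEEN_FILE` path and a payload whose event id lives at `.id` (both illustrative); in a real system, a database unique constraint on the event id is the sturdier option.

```shell
#!/usr/bin/env bash
# Sketch: file-based dedupe for at-least-once delivery.
# SEEN_FILE and the jq path ".id" are illustrative assumptions.
SEEN_FILE="${SEEN_FILE:-/var/tmp/webhook_seen_ids}"

already_seen() {
  # shared lock so readers don't race a concurrent append
  { flock -s 9; grep -qxF "$1" "$SEEN_FILE" 2>/dev/null; } 9>>"$SEEN_FILE.lock"
}

mark_seen() {
  { flock -x 9; printf '%s\n' "$1" >>"$SEEN_FILE"; } 9>>"$SEEN_FILE.lock"
}

handle_event() {
  local body="$1" event_id
  event_id="$(printf '%s' "$body" | jq -r '.id // empty')"
  [[ -n "$event_id" ]] || return 1      # no id: cannot dedupe safely
  if already_seen "$event_id"; then
    echo "duplicate $event_id, skipping"
    return 0                            # ack duplicates, never reprocess
  fi
  # ... real side effects go here ...
  mark_seen "$event_id"
}
```

Marking the event as seen only after the side effect keeps the failure mode on the safe side: a crash mid-processing leads to a retry, not a silently dropped event.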
Anti-patterns
- Doing business logic inline in the Shell (bash) request handler.
- Parsing/transforming the body before verification (breaks signing inputs).
- Returning 2xx before authenticity is proven.
- Skipping idempotency (retries become double side effects).
If you’re triaging a live incident, use the debugging playbook.
Table of contents
Why it's hard in production
A route handler is the easy part. Supporting multiple senders means multiple security models, spikes, and retry semantics.
Verify authenticity + stop replays
Use a verifySignature(...) stub here, then implement real verification + replay defense for each provider.
Assume retries (duplicates are inevitable)
Treat every delivery as at-least-once and make side effects idempotent (DB constraints, dedupe keys).
Don’t do work in the request path
Ack fast, process async. Otherwise timeouts, deploys, and spikes turn into missed webhooks.
Debug with real payloads
Save the exact body + headers so you can replay deterministically after a fix.
Add monitoring + alerts early
Track delivered vs rejected, processing latency, queue depth, and error rates.
Iterate locally without losing events
Tunnels help, but durable capture + replay removes the “my laptop was asleep” problem.
Why “pure shell” receiving is awkward (and what to do instead)
This is a minimal starting point. Keep verifySignature(...) as a stub here, then implement provider-specific verification and replay defense in the security guide.
# Shell isn't a great HTTP server:
# - parsing HTTP correctly is annoying
# - TLS is non-trivial
# - concurrency/backpressure is hard
# - signature verification + replay defense is easy to get wrong
#
# Practical options:
# 1) Use a tiny webhook runner binary to execute scripts on HTTP hits,
# for example https://github.com/adnanh/webhook
# or https://github.com/ncarlier/webhookd
# 2) Wrap a minimal Node/Python server and call a script.
# 3) Use Hooque ingest + a run-forever consumer loop to run scripts reliably. Hooque turns any webhook into a reliable queue.
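Option 1 above can be sketched with adnanh/webhook, which maps an HTTP endpoint to a script. The hook id `run-script`, the `./on_webhook.sh` path, and the port are illustrative placeholders.

```shell
#!/usr/bin/env bash
# Sketch of option 1: adnanh/webhook executes a script per delivery.
# The hook id "run-script" and ./on_webhook.sh are illustrative placeholders.
cat > hooks.json <<'EOF'
[
  {
    "id": "run-script",
    "execute-command": "./on_webhook.sh",
    "command-working-directory": ".",
    "include-command-output-in-response": true
  }
]
EOF

# Launch (long-lived process), serving POST /hooks/run-script:
#   webhook -hooks hooks.json -port 9000 -verbose
```

This gets you a working endpoint quickly, but verification, retries, and idempotency are still yours to build around the script.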
Non-obvious scenario: run scripts, not servers
Sometimes the “language” is really “a script”: rotate a feature flag, kick a deploy, sync a ticket, or run a one-off job. The hard part is safely exposing an inbound endpoint and correctly handling retries and security. Hooque lets you keep the “run a script” workflow while avoiding inbound HTTP plumbing.
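For illustration, such a script can be as small as this hypothetical handler that reads the event body on stdin; the `.ref` field and the deploy step are assumptions, not a real API.

```shell
#!/usr/bin/env bash
# Hypothetical handler: the event body arrives on stdin; the exit status
# tells the caller whether to ack (0) or nack (non-zero).
on_webhook() {
  local body ref
  body="$(cat)"
  ref="$(printf '%s' "$body" | jq -r '.ref // empty')"
  if [[ -z "$ref" ]]; then
    echo "missing .ref in payload" >&2
    return 1
  fi
  echo "deploying $ref"
  # ./deploy.sh "$ref"   # the real side effect would go here
}
```

Keeping the contract to "body on stdin, status out" means the same script works behind a webhook runner, a wrapper server, or a Hooque consumer loop.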
The easy path: receive with Hooque + consume forever
Hooque turns inbound webhooks into a durable queue. Your code becomes a run-forever worker that pulls or streams events and acks/nacks/rejects explicitly.
- No need to run a public webhook endpoint in every environment (especially for local dev).
- Durable capture + replay/inspection so “we missed the webhook” becomes debuggable.
- Explicit Ack / Nack / Reject lifecycle so retries are under your control.
- Backpressure and spike absorption: buffer now, process at your pace.
- One consumption pattern across many senders (even if their security/retry rules differ).
Flow
- Provider delivers → Hooque ingest endpoint
- Hooque persists payload immediately
- Your worker pulls (REST) or streams (SSE)
- Your worker ack/nack/rejects explicitly
Next steps
- Security: verification + replay defense
- Reliability: retries + idempotency
- Ops: metrics + alerting
Hooque REST polling loop (runs forever)
Polling is a good default when you want a simple worker loop. It also works in environments where long-lived connections are unreliable.
#!/usr/bin/env bash
set -euo pipefail
# Runs forever: poll /next, ack/nack/reject explicitly.
NEXT_URL="${HOOQUE_QUEUE_NEXT_URL:-https://app.hooque.io/queues/<consumerId>/next}"
TOKEN="${HOOQUE_TOKEN:-hq_tok_replace_me}"
sleep_s() { sleep "$1"; }
main() {
while true; do
local msg=""
msg="$(get_next_message)" || continue
if process_data "$msg"; then
ack "$msg"
else
nack "$msg" "script_failed"
fi
done
}
get_next_message() {
# gracefully handle intermittent network errors
local tmp_headers="$(mktemp)"
local body=""
body="$(curl -sS -D "$tmp_headers" -H "Authorization: Bearer $TOKEN" "$NEXT_URL" || true)"
local status="$(awk 'NR==1 {print $2}' "$tmp_headers" | tr -d '\r')"
if [[ "$status" == "204" ]]; then
rm -f "$tmp_headers"
sleep_s 1
return 1
fi
if [[ -z "$status" || "$status" -ge 400 ]]; then
echo "next() failed: ${status:-unknown}" >&2
rm -f "$tmp_headers"
sleep_s 2
return 1
fi
local meta="$(awk -F': ' 'tolower($1)=="x-hooque-meta" {sub(/\r$/,"",$2); print $2}' "$tmp_headers")"
rm -f "$tmp_headers"
# Return JSON object with payload and meta
# default meta to null when the header is missing, then coalesce to {} in jq
jq -n --arg body "$body" --argjson meta "${meta:-null}" '{payload: $body, meta: ($meta // {})}'
}
process_data() {
local msg="$1"
local body="$(printf '%s' "$msg" | jq -r '.payload // empty')"
# Process event and handle errors
./on_webhook.sh <<<"$body"
}
ack() {
local msg="$1"
local ack_url="$(printf '%s' "$msg" | jq -r '.meta.ackUrl // empty')"
if [[ -n "$ack_url" ]]; then
# ack the message so it is not redelivered
curl -sS -X POST -H "Authorization: Bearer $TOKEN" "$ack_url" >/dev/null
fi
}
nack() {
local msg="$1"
local reason="$2"
local err_url="$(printf '%s' "$msg" | jq -r '.meta.nackUrl // .meta.rejectUrl // empty')"
if [[ -n "$err_url" ]]; then
curl -sS -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
-d "{\"reason\":\"$reason\"}" "$err_url" >/dev/null || true
fi
}
main
Hooque SSE stream consumer (runs forever)
SSE is great for low-latency processing: keep a connection open, process events as they arrive, and reconnect on disconnects.
#!/usr/bin/env bash
set -euo pipefail
# Runs forever: connect to /stream and ack each "message" event.
# Requires: curl, jq
STREAM_URL="${HOOQUE_QUEUE_STREAM_URL:-https://app.hooque.io/queues/<consumerId>/stream}"
TOKEN="${HOOQUE_TOKEN:-hq_tok_replace_me}"
sleep_s() { sleep "$1"; }
main() {
while true; do
get_message_stream | while read -r msg; do
if process_data "$msg"; then
ack "$msg"
else
nack "$msg" "script_failed"
fi
done
sleep_s 2
done
}
get_message_stream() {
# connect to the stream; the outer loop in main() reconnects after drops
# -N disables output buffering (needed for SSE)
# skip comments and non-data lines so the loop stays alive on keep-alives
curl -sS -N -H "Authorization: Bearer $TOKEN" -H "Accept: text/event-stream" "$STREAM_URL" | while IFS= read -r line; do
line="${line%$'\r'}"
[[ "$line" == :* ]] && continue
[[ "$line" == data:* ]] || continue
data="${line#data:}"
data="${data# }"   # SSE allows "data:" with or without a leading space
# print parsed msg as json line
printf '%s\n' "$data"
done
}
process_data() {
local msg="$1"
local body="$(printf '%s' "$msg" | jq -r '.payload // empty')"
# hand "$body" to your real handler, e.g. ./on_webhook.sh <<<"$body"
echo "event: $(printf '%s' "$msg" | jq -r '.meta.messageId // empty')"
}
ack() {
local msg="$1"
local ack_url="$(printf '%s' "$msg" | jq -r '.meta.ackUrl // empty')"
if [[ -n "$ack_url" ]]; then
# ack the message so it is not redelivered
curl -sS -X POST -H "Authorization: Bearer $TOKEN" "$ack_url" >/dev/null
fi
}
nack() {
local msg="$1"
local reason="$2"
local err_url="$(printf '%s' "$msg" | jq -r '.meta.nackUrl // .meta.rejectUrl // empty')"
if [[ -n "$err_url" ]]; then
curl -sS -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
-d "{\"reason\":\"$reason\"}" "$err_url" >/dev/null || true
fi
}
main
FAQ
Quick answers for the questions that come up right before you ship.
What status code should I return for webhooks in Shell (bash)?
General: Usually return a fast 2xx after validating authenticity and basic schema. Timeouts and 5xx commonly trigger retries.
How Hooque helps: Hooque acknowledges ingest immediately and persists the payload. Your worker acks/nacks/rejects explicitly after processing.
Do I need signature verification in Shell (bash)?
General: Yes, unless the sender is fully trusted and on a private network. A public endpoint without verification is easy to forge and easy to replay.
How Hooque helps: Hooque can verify at ingest for supported providers or using generic strategies. Either way, your worker receives a normalized meta object and can stay focused on processing.
Why do I see duplicate webhook events in Shell (bash)?
General: Retries are normal: timeouts, transient network failures, and 5xx responses all produce duplicates. Design idempotency around event ids and side-effect boundaries.
How Hooque helps: Hooque makes delivery outcomes explicit (ack/nack/reject) and provides replay/inspection so you can fix issues without guessing what was received.
How do I test webhooks locally in Shell (bash)?
General: You can use a tunnel, but local dev still breaks on sleep, VPNs, clock skew, and signature-byte mismatches.
How Hooque helps: With Hooque you can avoid inbound locally: receive events into a durable queue and pull/stream to your laptop, then replay from the UI after changes.
Should I use REST polling or SSE streaming for webhook processing?
General: Use REST polling for simple batch workers and environments without long-lived connections. Use SSE for low-latency “process as it arrives” flows.
How Hooque helps: Hooque supports both: `GET /next` for polling and `GET /stream` for streaming. Both include meta with ready-to-call ack/nack/reject URLs.
Start processing webhooks reliably
Create a webhook endpoint, receive events, then run your worker forever using REST polling or SSE streaming — with explicit ack/nack/reject control.
No credit card required