Just Fucking Use Cron.
Stop building a distributed scheduler for a problem that isn’t distributed. You need a schedule and a command. Calm down.
✅ boring
✅ cheap
✅ debuggable
✅ older than your framework
Golden rule:
If your requirement is “run this thing every X minutes,” you don’t need a workflow engine.
You need a schedule, a command, logs, retries (maybe), and an alert if it breaks.
Yes, even if you call it “orchestration” to feel important.
What cron is great at
- Run something at a specific time (daily, hourly, every 5 minutes).
- Kick off maintenance tasks: cleanup, rollups, exports, backups.
- Send a report, run a script, ping an endpoint, rotate logs.
- Do boring automation without inventing an architecture diagram.
What you’re probably doing wrong
You built “a scheduler” inside your app.
Now deployments, crashes, and scaling change your schedule. Congrats: you reinvented cron badly.
You added a workflow platform.
Because you needed “run this daily” and accidentally bought a new religion.
You run jobs on every instance.
Now “daily email” sends 6 times. Use a single runner or a lock. Or… cron on one box.
You have no alerts.
The job failed three weeks ago and you only noticed when finance asked where the report went.
Examples
Pick one approach. Keep it boring.
1) Classic crontab
# Edit your crontab
crontab -e
# Run every day at 02:15
15 2 * * * /usr/local/bin/my-report --yesterday >> /var/log/my-report.log 2>&1
# Run every 5 minutes
*/5 * * * * /usr/local/bin/ping-healthcheck >> /var/log/ping.log 2>&1
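And because the "no alerts" failure mode above is the most common one: a sketch of the cheap fix, bolting an alert onto the cron line itself. The webhook URL is made up; point it at whatever actually pages you.
# Same daily report, but if it exits non-zero, hit an alerting webhook (URL is illustrative)
15 2 * * * /usr/local/bin/my-report --yesterday >> /var/log/my-report.log 2>&1 || curl -fsS https://alerts.example.com/hooks/my-report-failed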
2) systemd timers (cron, but modern)
# Great if you want journal logs, dependencies, and easy status checks.
# Create:
# - /etc/systemd/system/my-job.service
# - /etc/systemd/system/my-job.timer
#
# Then:
systemctl enable --now my-job.timer
systemctl status my-job.timer
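A minimal sketch of the two unit files, mirroring the 02:15 report from the crontab example (the job name and path are illustrative):
# /etc/systemd/system/my-job.service
[Unit]
Description=Daily report

[Service]
Type=oneshot
ExecStart=/usr/local/bin/my-report --yesterday

# /etc/systemd/system/my-job.timer
[Unit]
Description=Run my-job daily at 02:15

[Timer]
OnCalendar=*-*-* 02:15:00
Persistent=true

[Install]
WantedBy=timers.target
Persistent=true means a run missed while the box was off fires on the next boot, which is one thing plain cron won't do for you.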
3) One-liner: curl a webhook
# Trigger an endpoint every hour
0 * * * * curl -fsS https://example.com/jobs/hourly >> /var/log/hourly.log 2>&1
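If the endpoint is flaky, plain curl already does timeouts and retries; the numbers below are arbitrary:
# Same hourly trigger, with a 30-second cap and up to 3 retries on transient failures
0 * * * * curl -fsS --max-time 30 --retry 3 https://example.com/jobs/hourly >> /var/log/hourly.log 2>&1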
“But we use X cloud scheduler.” Cool. That’s cron with marketing.
The point is: stop building a cathedral to run a script.
Sanity checklist (read this before shipping a “scheduler”)
- Can this job run twice safely? If not, make it idempotent or add a lock (one-liner after this list).
- Where do logs go? File, syslog, journal—pick one and keep it consistent.
- What happens on failure? Retry? Backoff? Alert? Don’t silently fail.
- Who gets paged? If nobody gets notified, it doesn’t exist.
- Is there exactly one runner? One host, or one “job runner” container, or a leader lock.
- Do you have a timeout? Jobs that run forever are just services with commitment issues.
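For the lock and the timeout, you don't need a platform either: flock(1) and timeout(1) ship with most Linux boxes. A sketch, where the lock path, the 10-minute cap, and the job name are all illustrative:
# Skip this run if the previous one is still going, and kill it after 10 minutes
*/5 * * * * flock -n /var/lock/my-job.lock timeout 600 /usr/local/bin/my-job >> /var/log/my-job.log 2>&1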
When cron is not enough
Be honest. Cron isn’t magic. Use something heavier only when you actually need it:
- Long-running workflows with human steps, approvals, or multi-stage state machines.
- Massive fan-out where you truly need distributed orchestration and backpressure.
- Exactly-once semantics across multiple systems (rare, expensive, and usually a lie).
- Per-tenant scheduling at scale (thousands/millions of individual schedules).
Translation:
If you can describe your “workflow” as “run this command on a schedule,”
you’re not building a workflow. You’re scheduling a job.
Just fucking use cron.
FAQ (for the bikeshedders)
- “Cron isn’t reliable.” Your infrastructure isn’t reliable. Cron is fine. Add monitoring.
- “But we’re serverless.” Great. Schedule one function. You still don’t need a platform.
- “What about retries?” Make the job idempotent, then retry responsibly. Or alert a human.
- “What about locking?” File locks, DB locks, leader election—pick the simplest thing that works.
Final reminder:
If your job needs a UI, an SDK, a control plane, and a blog post to explain it, you are not “scaling.”
You are avoiding a crontab.