What was broken
I was starting every morning the same way: fifteen minutes scrolling Twitter, ten more on LinkedIn, another five on RSS readers, looking for the three or four things that actually mattered for that day. Most mornings I gave up before finding them and started work feeling vaguely unsettled rather than informed. The signal-to-noise ratio of a morning news scroll is miserable, and I was paying for it with my best hour.
What I built
A Python script that runs every morning at 7:00 CET via GitHub Actions. It pulls from RSS feeds, NewsAPI keyword searches, and YouTube creator channels, sends everything to Claude with a summarisation prompt, and delivers a short, punchy briefing to my phone via Telegram before I'm out of bed.
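One gotcha worth knowing if you copy this setup: GitHub Actions cron schedules run on UTC, so a 7:00 CET job needs a 6:00 UTC entry (and fires at 8:00 local during CEST unless you add a second schedule). A minimal workflow sketch, with file name, entrypoint, and secret names as assumptions, not the actual repo layout:

```yaml
# .github/workflows/digest.yml — hypothetical names throughout
name: morning-digest
on:
  schedule:
    # GitHub Actions cron is UTC: 06:00 UTC ≈ 07:00 CET
    - cron: "0 6 * * *"
  workflow_dispatch: {}  # manual trigger, handy for testing
jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: python run_digest.py  # assumed entrypoint
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          TELEGRAM_BOT_TOKEN: ${{ secrets.TELEGRAM_BOT_TOKEN }}
```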
No database. No state. Stateless by design. It runs once, sends once, and forgets. The next day's digest doesn't know what yesterday's said, and that's deliberate — it forces the summariser to surface what's actually noteworthy today, not what's a continuation of a thread it's already chewing on.
How the 3-layer pattern shows up here
This is the smallest, cleanest example of the pattern in my toolkit. Three directive files in markdown — fetch_news.md, summarize_news.md, deliver_digest.md — describe what each stage should do in plain English, like SOPs I'd hand to a junior. An LLM reads the directives and acts as the orchestration layer, deciding what to call and when. Deterministic Python scripts in execution/ do the actual work: fetching feeds, calling APIs, formatting Telegram messages.
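The split is easier to see in code than in prose. Here's a sketch of the shape, with the LLM call injected as a plain callable so the deterministic layer stays testable on its own; the function and registry names are mine, not the repo's:

```python
from pathlib import Path
from typing import Callable

# Layer 3: deterministic execution scripts, registered as plain callables.
# (In the real layout these live in execution/.)
TOOLS: dict[str, Callable] = {}

def tool(name: str):
    """Decorator that registers a deterministic script under a name."""
    def register(fn: Callable) -> Callable:
        TOOLS[name] = fn
        return fn
    return register

def run_stage(directive_path: Path, decide: Callable[[str, list[str]], str], payload):
    """Layer 2: read a plain-English directive (layer 1) and ask the LLM
    (`decide`) which registered tool should handle `payload`.
    The chosen tool does the actual work; the LLM only makes the call."""
    directive = directive_path.read_text()
    choice = decide(directive, sorted(TOOLS))
    if choice not in TOOLS:
        raise ValueError(f"directive resolved to unknown tool: {choice!r}")
    return TOOLS[choice](payload)
```

In the real system `decide` wraps an Anthropic API call; because it's injected, you can swap in a stub and exercise every script without burning tokens.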
The scripts don't make judgement calls. The LLM doesn't fetch RSS. The directives describe intent, not implementation. When I hit a NewsAPI rate limit in week two — free tier is 100 requests a day, I learned the hard way — I updated the directive, rewrote the fetcher to batch keywords, and the system was permanently stronger. That's the loop the pattern is built to support: errors become directive updates become more reliable runs.
Tech
Python 3.12, GitHub Actions cron, feedparser, NewsAPI, Anthropic Python SDK, Telegram Bot API. Total runtime cost: zero, because the GitHub Actions free tier covers a daily 7am job indefinitely.
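One delivery detail that bites people: Telegram's sendMessage caps message text at 4096 characters, so a long digest has to be split before sending. A minimal splitter that prefers newline boundaries so bullet points don't get cut mid-line (helper name is mine):

```python
TELEGRAM_MAX = 4096  # Bot API sendMessage text limit

def split_for_telegram(digest: str, limit: int = TELEGRAM_MAX) -> list[str]:
    """Split a digest into chunks of at most `limit` characters,
    breaking on the last newline before the limit when one exists."""
    chunks: list[str] = []
    while len(digest) > limit:
        cut = digest.rfind("\n", 0, limit)
        if cut <= 0:       # no newline to break on: hard cut
            cut = limit
        chunks.append(digest[:cut])
        digest = digest[cut:].lstrip("\n")
    if digest:
        chunks.append(digest)
    return chunks
```

Each chunk then goes out as its own sendMessage call; for a typical morning digest that's one message, with the splitter as insurance.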
Why it matters for an ops team
This is what reliable LLM automation actually looks like. Most "AI workflow" pitches I see in ops are a single prompt asking the model to do everything — fetch, decide, format, deliver — in one shot. Then they're surprised when reliability craters at scale.
The pattern that works is the boring one: pull the deterministic work into scripts, let the LLM do judgement, write the SOPs down so the system can be reasoned about and improved. This is the same shape an automated lead scoring or churn prediction workflow needs. Different inputs, same architecture.
What I'd do differently
Built it as a tool first instead of a script. Right now it's a single-purpose digest. The fetcher, summariser, and Telegram sender would all be more useful as standalone tools the orchestration layer could compose into different workflows — weekly review, conference talk research, anything that takes inputs and turns them into a delivered summary. Same parts, different recipes. That's where this is heading next.