I applied to over 500 jobs and heard nothing back. That is when I built Life OS.
I am the CTO at Sports Excitement, a founding-stage company with a fifteen-engineer team. Founding-stage means founding-stage compensation, the trade founders make for ownership and a real hand on the wheel. I am also the sole engineer at StrictlySurf on contract, finishing an M.Sc. in Computer Science (AI Optimization) at WGU, and prepping for an AWS Machine Learning Associate exam two weeks out. I am running daily standups, sprint reviews, and backlog refinements at SPEX. The job search runs in parallel because the math of one founding-stage role plus three other commitments does not close on its own.
Four commitments, twenty-four hours, one head. I had every signal a hiring manager would want, and the inbox was silent.
I did not have time to fix that silence by sending more applications. The compounding leak, as one of the rare humans who replied put it, was that I built real things and told nobody. The tooling I had been using to keep my own day on track was not built to fix that leak. So I built one that was.
The pressure was simple to name. The cognitive load was not. Every morning I woke up to the same question, "what is the most important thing to do right now," and the answer cost forty minutes to find by hand because it lived in seven different places.
Life OS is the answer to that question. It is a personal operating system built on Obsidian, Claude Code, scheduled cron tasks, and a vault of structured markdown. It runs as a set of routines (R0 through R37 at last count) that read state, surface decisions, and write back. The point is not to automate my life. The point is to compress the time between waking up and acting from forty minutes to three.
This article walks through what it does, how it is built, and what it has taught me about applied AI in production systems where the user is one person.
The shape
The vault is the source of truth. Everything else is a view.
Domains live as folders in 2_domains/:
- jobs/, every application, follow-up, sub-status, cooldown tracker.
- income/, the career engine. A daily five-slot ritual (capture, engage, outreach, inbox, build), weekly rotation, pipeline triggers, presence channels, articles index, outreach log.
- projects/, every active codebase with state, blockers, next unblocks, recent records.
- people/, every meaningful interaction, threaded by person.
- learning/, topics in flight and captures.
- mental/, decision records and a pattern register.
- me/, capability graph, experiences, evidence, gaps, education.
- fitness/, workouts and nutrition.
Each domain has an index.md, the file the AI reads first when the user mentions that domain. Records are dated, frontmatter-typed, and one event per file.
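A dated, frontmatter-typed, one-event-per-file record can be sketched roughly as below. The exact frontmatter keys and the sample record are illustrative, not the vault's real schema:

```python
import re

def parse_record(text: str) -> dict:
    """Split a markdown record into frontmatter fields and body.

    Assumes the one-event-per-file convention: a YAML-ish frontmatter
    block delimited by '---' lines, then the body.
    """
    match = re.match(r"^---\n(.*?)\n---\n(.*)$", text, re.DOTALL)
    if not match:
        raise ValueError("record missing frontmatter block")
    fields = {}
    for line in match.group(1).splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return {"meta": fields, "body": match.group(2).strip()}

# Hypothetical record, shaped like a jobs/ entry might be.
record = """---
type: application
date: 2025-01-15
status: waiting
---
Applied to Acme, founding engineer. Follow up in 7 days."""

parsed = parse_record(record)
```

The point of the convention is that a routine can read `meta` alone to decide whether a file is relevant, without paying for the body.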
Cross-cutting indexes live in 0_index/:
- task_board.md, the board (Today, Waiting, Doing, Done) refreshed by the board-refresh routine.
- goals.md, two strategic goals; the goals routine owns it.
- pending_inputs.md, the two-phase queue. When a routine fires and needs the user, it parks the question here.
- command_aliases.md, every verb (log, capture, morning, ask, save, add, done, wait, drop, board, help) routes to a routine.
- daily_recurring.md, sleep, energy, project pulses. The single source of truth for daily-state checks.
- system_index.md, controller state, cache, last-run timestamps.
The vault auto-syncs to GitHub every five minutes. Two devices stay aligned. If the SSD is unmounted, GitHub Mode fetches files via raw URLs.
The routines
Each routine has a number, a name, a Pipeline section, and a written invariant. They are markdown files at 4_agents/prompts/. When a verb fires (typed by me, or scheduled by cron), the matching routine prompt loads and runs. The routine reads what it needs, writes what it must, and surfaces a Followup block per the writing contract.
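The verb-to-routine dispatch can be sketched as below. The alias-table format and the routine filenames are hypothetical stand-ins for the real command_aliases.md:

```python
from pathlib import Path

# Hypothetical alias table: one "verb -> routine file" line each,
# standing in for the real command_aliases.md format.
ALIASES = """
morning -> R01_morning_brief.md
save -> R12_save.md
board -> R07_board_refresh.md
"""

def build_router(alias_text: str) -> dict:
    """Parse the alias table into a verb -> routine-file map."""
    routes = {}
    for line in alias_text.strip().splitlines():
        verb, _, routine = line.partition("->")
        routes[verb.strip()] = routine.strip()
    return routes

def resolve(verb: str, routes: dict, prompt_dir: str = "4_agents/prompts") -> Path:
    """Map a typed (or cron-fired) verb to the routine prompt to load."""
    if verb not in routes:
        raise KeyError(f"unknown verb: {verb}")
    return Path(prompt_dir) / routes[verb]

routes = build_router(ALIASES)
```

Whether typed by hand or fired by cron, both paths converge on the same lookup, which is what keeps the routine prompts the single place behavior lives.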
Four worth describing in detail.
The morning brief, fires daily at 06:00. Reads the task board, the daily recurring file, every domain index relevant to today, the income engine surface, and recent evidence. Outputs a single comprehensive morning brief. Today's ranked board, daily-recurring status, project pulse, inbox highlights, week peek, goal tether, and three reflection prompts (sleep, intention, energy). The brief is the first thing on screen. The user types one of done, wait, drop, defer, rolling, -, or free text per row. State updates at the bottom.
The evening close, fires daily at 20:00. Reverse of the morning brief. The user walks the day's still-open items one by one. Each row gets the same terse syntax. The evening close also captures sleep and energy from the user. The Followup block at the bottom is the catch-up handoff if the routine missed (machine off at trigger time).
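The per-row reply syntax both briefs share is terse by design. A minimal sketch of how those replies could be normalized (the verb set is from the article; the row model is mine):

```python
# The terse per-row verbs the morning brief and evening close accept.
KNOWN_VERBS = {"done", "wait", "drop", "defer", "rolling", "-"}

def parse_row_reply(reply: str) -> dict:
    """Normalize one per-row reply.

    A bare known verb becomes a state change; '-' means skip the row;
    anything else is treated as a free-text note on the row.
    """
    token = reply.strip()
    if token in KNOWN_VERBS:
        action = "skip" if token == "-" else token
        return {"action": action, "note": None}
    return {"action": "note", "note": token}
```

The asymmetry matters: six verbs cover the common case in one keystroke-sized token, and anything longer falls through to free text without a syntax error.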
Save, mode-driven. When the user says save <something>, the routine routes by signal. A decision record goes to 2_domains/mental/decisions/. A behavioral pattern goes to patterns.md ## Candidate Patterns. Evidence goes to me/evidence/<date>__<slug>.md and bumps the linked capability's evidence_count. A concept consolidation goes to 2_domains/concepts/. Save is the input verb. Without it the capability graph never grows.
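The routing step can be sketched as below. The signal detection itself (deciding whether an input is a decision, pattern, evidence, or concept) is the interesting part and is out of scope here; the destination paths follow the article where it names them and are assumptions where it does not (the pattern-register path in particular):

```python
from datetime import date

def route_save(kind: str, slug: str, today: date) -> str:
    """Return the destination path for a `save` input.

    `kind` stands in for the signal the real routine infers from the
    input itself. Paths not named in the article are guesses.
    """
    routes = {
        "decision": f"2_domains/mental/decisions/{today}__{slug}.md",
        "pattern": "2_domains/mental/patterns.md",  # assumed path; appended under ## Candidate Patterns
        "evidence": f"2_domains/me/evidence/{today}__{slug}.md",  # also bumps the capability's evidence_count
        "concept": f"2_domains/concepts/{slug}.md",
    }
    return routes[kind]
```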
Capability health, fires Sunday inside the weekly review. Reads the capability files in me/capabilities/. For each capability, reads actual_level, evidence_count, last_evidence_date, resume_claim, and gap_note. Outputs a delta report into me_index.md ## Capability Health. Over-claimed (resume claim outpaces evidence). Under-claimed (evidence outpaces claim). Stale (no evidence in 180 days). Growth-target gaps not covered by goals. The capability-health sweep is the honesty layer. It tells me when my resume is starting to lie.
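The delta rules reduce to a few comparisons per capability. In this sketch I compress the textual resume_claim into a numeric claimed_level so the over/under comparison is mechanical; the real routine reads the prose fields, so treat this as a simplified stand-in:

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=180)

def health_flags(cap: dict, today: date) -> list:
    """Flag drift for one capability record.

    claimed_level is a numeric stand-in for the prose resume_claim;
    the field names otherwise follow the capability files.
    """
    flags = []
    if cap["claimed_level"] > cap["actual_level"]:
        flags.append("over-claimed")   # resume claim outpaces evidence
    elif cap["actual_level"] > cap["claimed_level"]:
        flags.append("under-claimed")  # evidence outpaces claim
    if today - cap["last_evidence_date"] > STALE_AFTER:
        flags.append("stale")          # no evidence in 180 days
    return flags
```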
There are thirty-plus more routines: capture, ask, weekly digest, board refresh, verb router, architecture audit, consolidate, inbox triage, week architect, and the rest. The pattern is the same. Read narrowly. Write authoritatively. Surface a Followup block. Never duplicate state across routines.
The AI patterns that earned their keep
Three patterns moved this from a static markdown vault into something that compresses cognitive load.
Smart retrieval, never bulk loading. Every routine prompt names exactly what it reads. When I ask "what should I do today," the routine fetches task_board.md, daily_recurring.md, the income engine surface, and the matching domain indexes. It does not load 2_domains/me/capabilities/* (sixteen files), or every dated daily log, or every job record. The cost of context is the cost of cognitive load. Loading less means the AI ranks better.
The vault enforces this with parent-to-child linking and per-routine load lists. The MAP.md file at the top of 0_index/ is the bootstrap router. It tells the model what to read for each task type. Scheduled routine? This file plus the routine prompt. Ad-hoc question? Plus one domain index. Vault restructure? Plus AGENTS.md. The discipline is the system.
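A per-routine load list reduces to an explicit allowlist resolved against the vault. The list contents below are hypothetical; the mechanism is the point:

```python
from pathlib import Path

# Hypothetical load list, standing in for the "reads" a routine
# prompt declares. The discipline: context is an explicit allowlist,
# never a glob over the vault.
MORNING_BRIEF_READS = [
    "0_index/task_board.md",
    "0_index/daily_recurring.md",
    "2_domains/income/index.md",
]

def load_context(vault: Path, read_list: list) -> dict:
    """Fetch exactly the files a routine names, nothing else."""
    context = {}
    for rel in read_list:
        path = vault / rel
        if path.exists():
            context[rel] = path.read_text()
    return context
```

Missing files are skipped rather than treated as errors, on the assumption that a routine should run with whatever subset of its declared context exists.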
The two-phase contract. Long-running routines need user input mid-flight. The board can have a thirty-row Today list at 06:00, but I do not want to walk thirty rows at 06:00. The morning brief fires Phase A at 06:00. The matching Phase B, the per-row walk, lands on demand when I open the laptop. The link is pending_inputs.md. Each Phase A appends a row. Each Phase B reads that row, runs the prompt verbatim, and closes it.
The phase split is what makes scheduled AI usable. Without it, every cron-fired routine is either a full blocking walk-through (impossible at 06:00) or a notification with no follow-up (information without action).
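The queue contract itself is small. A sketch with an in-memory list standing in for the rows of pending_inputs.md:

```python
from datetime import datetime
from typing import Optional

def phase_a_enqueue(queue: list, routine: str, prompt: str) -> None:
    """Phase A: the scheduled run parks its follow-up question.

    The real queue is rows in pending_inputs.md; a list of dicts
    stands in for it here.
    """
    queue.append({
        "routine": routine,
        "prompt": prompt,
        "opened": datetime.now().isoformat(timespec="seconds"),
        "status": "open",
    })

def phase_b_next(queue: list) -> Optional[dict]:
    """Phase B: on demand, take the oldest open row and close it.

    The caller runs the parked prompt verbatim, per the contract.
    """
    for row in queue:
        if row["status"] == "open":
            row["status"] = "closed"
            return row
    return None
```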
The honest-delta layer. The capability graph in me/capabilities/ has, per capability, both a resume_claim (the public claim on the resume) and a gap_note (the private softening). When save runs on new evidence, the capability bumps. When capability health audits, it reads both fields and flags drift. When the outreach grounding routine runs, it reads both fields and never lets the outreach exceed the gap_note.
The AI does not author the gap_note. I write it once, by hand, after each capability lands. The gap_note is the place where I name what the resume claim does not actually prove. "Forty-plus REST APIs" is accurate; complexity skews toward CRUD plus auth and payments. "Lead architect across a fifteen-engineer team" is organizationally true; how much senior depth rests on me alone versus collaborative versus inherited is a nuance the AI must not overclaim.
The honesty layer is what keeps the system from compounding hype. I have seen the failure mode in agent-built personal sites. Every iteration nudges the claim up. By the third draft the resume reads like a Series-A founder. The gap_note is the brake.
What I learned about applied AI as a sole engineer
Three takeaways I would carry into a founding-engineer role.
Structured input wins over big context. A focused two-thousand-token prompt with the right twelve files in scope outranks a fifty-thousand-token prompt with everything, every time. The smart-retrieval discipline in Life OS is not a cost optimization. It is a quality optimization. A model with a clean context makes better decisions than a model with the entire vault in its mouth.
LLMs are state machines, not oracles. The morning brief does not "decide what is most important today." It reads the board, applies the ranking rules in its own prompt, and outputs the result. The ranking rules are written by me. The model executes them. When I want to change the ranking, I edit the prompt, not the model. This is the pattern that scales without runaway agentic behavior.
The user is the source of truth, until proven otherwise. Every routine ends with a Followup block. The block surfaces what changed, what is pending, and what needs the user's input. The user can override anything. The user can correct anything. The system trusts itself only as far as the last verified state. When a routine writes something controversial (a capability level change, a sub-status flip on a job, a goal revision), it writes a draft and surfaces it for the user to confirm.
This is the design choice that makes Life OS livable. The system never feels like it is acting on me. It feels like an assistant who has read everything, ranked it, and is asking me to weigh in. The agency stays mine.
What it does not do
Honest list.
It does not auto-decide. Save captures and capability health flags, but goals.md changes only when I run goals revise.
It does not handle ambiguous natural language at scale. The verb router maps about thirty-five bare verbs to routines. Free-form questions hit a fallback that loads the relevant domain index, but there is no general-purpose dialogue agent. The vault is the brain; the routines are the limbs.
It does not work without me. If I do not run the evening close, the next morning's brief is stale. If I do not save evidence, the capability graph drifts. The system is a force multiplier on my discipline, not a substitute for it.
It does not solve the actual problem. The problem is one engineer trying to land a founding role from a silent inbox. Life OS does not write the outreach DMs (it drafts them, I edit). It does not run the AWS exam (it reminds me to study). It does not turn evidence into reach on its own (it flags the captures that could become content I publish, so the work surfaces where hiring managers actually look).
What it does is compress the overhead around all of that. That compression is the difference between a thirty-minute morning routine and a six-hour rabbit hole. Some days it saves me five hours. On those days, those five hours go into the things that matter, the AWS coursework, the StrictlySurf marketplace, the warm-network DMs, the engineering work that grounds the next capability claim.
Why I am writing this
Two reasons. First, the income engine inside Life OS surfaces "build slot" sessions where I move one piece of content forward thirty minutes a day. This article is one of those pieces. Second, when I describe the system in interviews, eyes light up. Founders especially, because the system is the kind of thing you build when you cannot afford to lose to context-switching. I have built it as the engineer I want to hire. If you are reading this and you are a founder, that is the engineer I am offering.
The vault, the routines, and the architecture are all real. The silent inbox was real. The pressure was real. The system is what I do because of it. If a founding-engineer role is the next chapter, this is what I bring on day one. Production AI sensibility, distributed systems instincts under pressure, and the discipline to ship the boring parts well.
If you are an engineer reading this, pick one routine, write the prompt, fire it manually for a week, and see if it earns its keep. That is how Life OS started. The evening close was the first routine, a single evening-log workflow. The morning brief came two months later, after the evening close had paid off enough times that the morning-brief idea felt obvious.
Life OS itself is private. It is built around my specific workflows, capabilities, and constraints, so the repo would not be useful to anyone else as is. If a clean, generalizable subset is worth releasing later, that will be its own project.