Active · in production daily

My AI Operating System

A simple, inspectable system that connects every AI tool I use, so my work has memory, my decisions get written down, and nothing starts cold. The quiet thing every other project here runs on.

Obsidian + Dataview · Markdown source of truth · Cross-machine · ~12 months running
What is this?

Most people use AI tools as separate islands: ChatGPT forgets the project you mentioned yesterday, and Cursor doesn't know what Claude said last week. I built one shared notebook (in plain markdown) that every tool reads from at the start of a session and writes back to at the end. Continuity by design, not by luck.

Who it's for

This one is built for me, but the pattern travels. Anyone who uses multiple AI tools and feels the friction of context loss can apply it: pick a canonical place to store decisions, point every tool at it, automate the boring sync. The result is fewer cold starts and a paper trail you can actually trust.

Role
Architect, builder, daily user
Stage
In production, ~12 months running
Hosts
MacBook Pro · Mac Mini
Source of truth
Obsidian Vault on Google Drive
~12mo · Running
2 · Machines synced
3 · Scheduled jobs
7+ · Active projects
100% · Markdown · inspectable
Walk through the build

How the system grew up


Folders everywhere · v0

context/ · drift
memory/ · drift
projects/ · drift
Tools agreeing on truth · no
Three competing sources
Iteration 1 · Folders everywhere

Three competing sources of truth

Started with separate context/, memory/, and projects/ folders. Each tool bootstrapped from a different one. Decisions landed in one place but were referenced from another. Predictably, drift.

Drift problem
🗂 Drop an Obsidian vault screenshot here. The 00 System / 01 Projects / Daily Notes folder tree would explain instantly.
Iteration 2 · One canonical home

The Obsidian vault becomes the only truth

Consolidated everything into a single Obsidian vault on Google Drive. Both machines read from the same paths. The root CLAUDE.md and the Codex bootstrap were rewritten to point at vault paths in 00 System/.

Obsidian + Dataview · Markdown

Scheduled writes

Nightly memory sync · 10:00 PM ET
Cowork daily summary · 2:01 AM ET
Session-close (ad-hoc) · on demand
Both write to vault · yes
All green tonight
Iteration 3 · Automate the writes

Nightly sync + session-close tasks

Two scheduled jobs. A nightly sync at 10pm runs memory maintenance, pruning stale entries and promoting decisions from chatter to durable records. A session-close task writes the daily note. Both write directly to the vault.

LaunchAgents · Cowork
🧭 Drop a Mission Control screenshot here. The DataviewJS dashboard inside Obsidian, showing live project status from frontmatter.
Iteration 4 · Mission Control

The dashboard reads the same source

The original dashboard was a separate HTML file. Replaced it with DataviewJS inside Obsidian, which queries the YAML frontmatter on every project note live. Adding a project just means dropping a new note; the dashboard picks it up instantly.

Dataview · YAML frontmatter

Local LLM layer

LibreChat · Mac Mini:3080
Ollama runtime · local
OpenRouter routing · cloud
Workhorse · Sonnet 4.6
Local fallback · llama3.2
Iteration 5 · Local LLM layer

One chat surface, multiple model paths

LibreChat in Docker as the everyday chat UI. Ollama for local inference. OpenRouter for cloud routing. The system prefers cloud for tool use; local models are the fallback for everyday, cost-sensitive work.

Claude Sonnet · Ollama · Docker

The goal

The way most people use AI tools is fragmented: ChatGPT for brainstorming, Claude for writing, Cursor for code, with no memory between them. Every conversation starts cold. Every decision evaporates the moment the tab closes.

I wanted the opposite. One inspectable source of truth that every tool reads from at session start and writes back to at session close, so context, decisions, and project state are durable across machines, models, and weeks. Not a productivity app. A personal operating system.

Mission Control · a glimpse
Active Projects · 7
FIRE Calculator · deploy-blocked
AwardReserve · pre-beta
LinkedIn Auto · e2e test
Home NAS · live
AI OS itself · running
Last Nightly Sync
10:00 PM ET · OK
vault → memory · context · daily note
Today's Daily Note
Auto-written by Cowork · session summaries from MBP + Mac Mini merged
---
name: AwardReserve
status: Active
priority: Medium
next_step: "Week 1 of revenue sprint..."
last_touched: 2026-04-27
tags: [project, side-venture, ai]
---

The architecture

Every tool in the system bootstraps the same way: read a fixed list of markdown files in 00 System/ at session start, work, then write back at session close.

Profile.md → System.md → Memory.md → Context.md → Playbook.md
the bootstrap read order, identical for Claude Code / Cowork / Codex / OpenClaw
01 Projects/<name>.md
one note per project with YAML frontmatter, Dataview queries everything
02 Daily Notes/YYYY-MM-DD.md
session summaries written automatically at session close
03 Decisions/ + 04 Briefings/ + 05 Reviews/
durable artifacts, separating decisions from chatter
Mission Control (DataviewJS)
live dashboard inside Obsidian, no separate HTML to keep in sync
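The bootstrap above is just an ordered read. A minimal sketch of what every tool does at session start, assuming only the fixed file list from the read order (the helper function is mine, not code from the system):

```python
from pathlib import Path

# Fixed bootstrap read order, identical for every tool (names from 00 System/).
BOOTSTRAP_ORDER = ["Profile.md", "System.md", "Memory.md", "Context.md", "Playbook.md"]

def build_session_context(system_dir: Path) -> str:
    """Concatenate the bootstrap files in their fixed order.

    Missing files are skipped rather than failing the session start.
    """
    parts = []
    for name in BOOTSTRAP_ORDER:
        path = system_dir / name
        if path.exists():
            parts.append(f"# --- {name} ---\n{path.read_text()}")
    return "\n\n".join(parts)
```

Because every tool runs the same read order against the same paths, "what does this agent know?" always has the same answer: exactly what is in the vault.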

The process

Iteration 1 · Folders everywhere

Three competing sources of truth

Started with separate context/, memory/, and projects/ folders at the workspace root. Each tool had its own bootstrap path. Predictably, things drifted: a decision would land in memory/ but never make it to the project file, or the Codex bootstrap would be 3 weeks behind Claude's.

Iteration 2 · Pick one canonical home

Obsidian Vault becomes the only source of truth

Consolidated everything into a single Obsidian Vault on Google Drive. Both machines read from the same paths. The root CLAUDE.md and Codex bootstrap were rewritten to point at vault paths in 00 System/: one read order, every tool.

Iteration 3 · Automate the writes

Nightly sync + ad-hoc session-close tasks

Built two scheduled jobs. A nightly sync at 10pm runs memory maintenance, pruning stale entries and promoting decisions from chatter to durable records. A session-close task (manual trigger) writes the daily note and updates the relevant project file. Both write directly to the vault.
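A sketch of what one maintenance pass could look like. The dated-entry format, the `#decision` tag, and the 30-day staleness threshold are all my assumptions for illustration, not the actual job:

```python
import re
from datetime import date, timedelta

STALE_AFTER = timedelta(days=30)  # assumed threshold, not the system's actual value

def maintain_memory(lines: list[str], today: date) -> tuple[list[str], list[str]]:
    """One pass over Memory.md lines: drop stale dated entries and
    collect decision lines for promotion out of the chatter.

    Assumes entries look like '- 2026-04-01 some note' and decisions
    are tagged inline with '#decision' (both conventions are mine).
    """
    kept, promoted = [], []
    for line in lines:
        m = re.match(r"- (\d{4}-\d{2}-\d{2}) ", line)
        if m and today - date.fromisoformat(m.group(1)) > STALE_AFTER:
            continue  # prune: stale entry falls out of working memory
        if "#decision" in line:
            promoted.append(line)  # promote: durable, goes to 03 Decisions/
        kept.append(line)
    return kept, promoted
```

The point of the split return is the chatter-vs-durable distinction: pruning only ever touches the working set, never the promoted decisions.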

Iteration 4 · Mission Control dashboard

Replace the old HTML view with DataviewJS

The original dashboard was a separate HTML file that had to be regenerated. Replaced it with DataviewJS inside Obsidian, which queries the YAML frontmatter on every project note live. Adding a project just means dropping a new note; the dashboard picks it up immediately.
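Dataview does this natively inside Obsidian; as a plain-Python analogue of what the dashboard query reads (flat `key: value` frontmatter with fields like `name` and `status`; both helper functions are mine):

```python
import re

def parse_frontmatter(note: str) -> dict:
    """Extract the YAML frontmatter block from a markdown note.

    Minimal key: value parsing, enough for a flat frontmatter block;
    no nested YAML, lists are left as raw strings.
    """
    m = re.match(r"---\n(.*?)\n---", note, re.DOTALL)
    if not m:
        return {}
    fields = {}
    for line in m.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip().strip('"')
    return fields

def dashboard_rows(notes: list[str]) -> list[tuple[str, str]]:
    """What Mission Control renders: one (name, status) row per project note."""
    rows = []
    for note in notes:
        fm = parse_frontmatter(note)
        if fm.get("status"):
            rows.append((fm.get("name", "?"), fm["status"]))
    return rows
```

This is why "adding a project" is just dropping a note: the dashboard has no project list of its own, only a query over whatever frontmatter exists.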

Iteration 5 · Local LLM layer

Ollama + LibreChat for everyday chat

LibreChat on the Mac Mini in Docker as the main local chat UI (port 3080). Ollama for inference. OpenRouter for cloud routing. Default workhorse: Claude Sonnet 4.6 in paid contexts; Gemini Flash Lite as the budget cloud model; llama3.2:latest as the safest local fallback.
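The routing preference can be sketched as a tiny policy function. The model identifiers are assumptions (OpenRouter and Ollama naming conventions), and the policy is my reading of "cloud for tool use, local as the everyday fallback":

```python
# Model names assumed from the stack described above.
WORKHORSE = "anthropic/claude-sonnet-4.6"    # via OpenRouter (identifier assumed)
BUDGET_CLOUD = "google/gemini-flash-lite"    # identifier assumed
LOCAL_FALLBACK = "llama3.2:latest"           # via Ollama

def pick_model(needs_tools: bool, cloud_available: bool, budget_mode: bool = False) -> str:
    """Route a request to a model path per the stated preference."""
    if not cloud_available:
        return LOCAL_FALLBACK      # offline or rate-limited: stay local
    if needs_tools:
        return WORKHORSE           # tool use goes to the strongest cloud model
    return BUDGET_CLOUD if budget_mode else LOCAL_FALLBACK
```

The useful property is that the preference lives in one place, so swapping the workhorse or the fallback is a one-line change rather than a per-tool reconfiguration.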

Hiccups (and what I learned)

Hiccup #1 · Three folders, three drifts

The pre-vault era had separate context/, memory/, projects/ trees. Each tool bootstrapped from a different one. Decisions landed in one place but were referenced from another.

Collapsed everything to the Obsidian Vault. Source of truth hierarchy is now explicit: latest user instruction → System.md → Memory.md → project note. No ambiguity.

Hiccup #2 · Cowork and Claude Code can't see each other's transcripts

Nightly Cowork sync at 2:01 AM ET writes to the daily note, but it has no visibility into Claude Code sessions running on my Mac. Whole sessions of work were missing from the daily log.

Added a Claude Code session-close playbook (CC-Session-Close.md): Claude Code writes its own section to the daily note before exiting, triggered by the user saying "session close" / "wrap up this session". The vault then has both perspectives.
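A sketch of the session-close write, assuming the `02 Daily Notes/YYYY-MM-DD.md` convention from the architecture section; the section-header format and function name are my own:

```python
from datetime import date
from pathlib import Path

def close_session(vault: Path, machine: str, summary: str) -> Path:
    """Append this machine's session section to today's daily note.

    Appending (not overwriting) is what lets the Cowork nightly job and
    Claude Code sessions on different machines coexist in one note.
    """
    note = vault / "02 Daily Notes" / f"{date.today():%Y-%m-%d}.md"
    note.parent.mkdir(parents=True, exist_ok=True)
    with note.open("a") as f:
        f.write(f"\n## Session close · {machine}\n\n{summary}\n")
    return note
```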

Hiccup #3 · Scheduled tasks need pre-approved tool permissions

The first scheduled run of each automation hit a permission prompt that no human was there to answer, so the agent stalled.

Run each scheduled task manually once before letting it auto-fire, so tool approvals are pre-cached. Boring fix, but it's the lesson: anything that runs unattended must be exercised attended first.

Hiccup #4 · Claude Desktop projects don't sync between machines

Each machine maintains its own project list inside Claude Desktop. There's no cross-machine sync. I assumed otherwise and lost a project's worth of state.

Wrote a sync script (sync_openclaw_workspace.py) that mirrors the real source, the vault, into both machines' Claude Projects folders. The vault wins; the local Claude Desktop view follows.
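The script's core rule, the vault wins, can be sketched as a one-way mirror. This is illustrative only, not the actual sync_openclaw_workspace.py:

```python
import shutil
from pathlib import Path

def mirror_vault(vault: Path, targets: list[Path]) -> None:
    """One-way mirror: the vault is canonical, each target follows.

    Copies every markdown file from the vault into each target,
    overwriting local edits; nothing ever flows back toward the vault.
    """
    for target in targets:
        for src in vault.rglob("*.md"):
            dest = target / src.relative_to(vault)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)  # vault wins, unconditionally
```

Keeping the sync strictly one-way is the design choice that prevents a repeat of the lost-state incident: a stale Claude Desktop view can never overwrite the canonical copy.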

Hiccup #5 · Context window blowouts

Long Claude Code sessions used to blow past the context window; I'd lose the thread and have to start over.

Hard rule baked into the operating contract: when conversation context exceeds ~150K tokens, consolidate state into the vault BEFORE continuing. Compaction loses context; the vault preserves it. Memory-writes are now treated as save-points.
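The save-point rule as a check. The ~4 characters-per-token estimate is a rough rule of thumb of mine, not how any of these tools actually count:

```python
TOKEN_BUDGET = 150_000  # the operating contract's consolidation threshold

def estimate_tokens(text: str) -> int:
    """Rough estimate: ~4 characters per token (rule-of-thumb assumption)."""
    return len(text) // 4

def should_save_point(context_tokens: int) -> bool:
    """True once the budget is reached: consolidate state into the vault
    BEFORE continuing, so compaction never eats unpersisted context."""
    return context_tokens >= TOKEN_BUDGET
```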

Tools used

Obsidian + Dataview
canonical store + query
Claude Code
primary build agent
Cowork
remote / nightly agent
Codex CLI
terminal-side coding
OpenClaw
automation runtime
LibreChat
local chat UI (Docker)
Ollama
local inference
OpenRouter
cloud model routing
Google Drive
cross-machine sync
macOS LaunchAgents
scheduled tasks
MCP Servers
Google Workspace integration
Markdown + YAML
durable state format

Roadmap

  • NEXT
    Update LibreChat system prompt to vault paths.

    It still points at the old folder structure, the last lingering reference to the pre-vault world.

  • NEXT
    Run each scheduled task manually once.

    Pre-approve tool permissions so unattended cron runs don't stall on prompts.

  • v2
    Mac Mini becomes primary automation host.

    Move OpenClaw, scheduled tasks, and the chat layer entirely onto the always-on Mini. MacBook becomes the developer surface, not the ops surface.

  • v2
    Specialized agents on top of the executive loop.

    One strong executive loop is the foundation. Layer specialized agents (research, writing, finance) only after the loop is solid, not before.

  • LATER
    Move secrets out of synced .env files.

    Convenient but not ideal long-term. 1Password CLI or macOS keychain.

  • LATER
    Claude + Databricks MCP integration.

    Researching: which Databricks workflows are worth exposing through MCP, and what the ROI vs. just running queries directly looks like.

