Active · final test pending

LinkedIn Job Pipeline

A quiet helper that scores LinkedIn jobs against my goals, writes a tailored resume and cover letter for each one, and drops everything into a Drive folder for me to review. Free to run, no API keys, no cloud bill.

Local AI · Ollama · $0 to run · Python · Google Drive
What is this?

Job hunting is repetitive in a way that automation handles well: same resume tweaked for each role, same cover letter restructured, same forms filled out. This pipeline pulls in LinkedIn jobs, scores how good a fit they are, and uses a local AI model to write tailored materials, so I open a polished draft instead of a blank page.

Who it's for

Active job seekers who apply at volume and want to spend their energy on the parts that matter (interviews, networking, picking the right roles to chase) instead of on the 15th cover letter of the week.

Role: Solo build
Stage: Materials pipeline working; auto-apply next
Cost to run: $0 (local LLM, free APIs)
Throughput: ~50 jobs scored / day
Local AI: 100%
Files per application: 2
Walk through the build

From scoring to materials, end to end


Manual rubric · spreadsheet

Role match (0–25): 22
Seniority (0–20): 17
Comp signal (0–20): 15
AI / product weight (0–20): 19
Remote posture (0–15): 14
Total: 87
Step 1 · Calibrate by hand

Spreadsheet rubric first

Scored ~30 LinkedIn jobs in a spreadsheet against a rubric: role match, seniority, comp, remote posture, AI/product weight. Got the rubric to where my gut and the score agreed roughly 85% of the time.

Manual rubric

Job ingestion

job_id: 4403624468
company: Anthropic
role: AI Operations Manager
posted: 2h ago
status: queued
Structured record
Step 2 · Pull the listing

Structured record per job

A fetcher takes a LinkedIn job ID, parses the posting into fields, and stores it. Once jobs are structured, scoring is just a function, and tailoring materials is just templating with substitutions.

Python · SQLite
📊 Drop a dashboard screenshot here: the localhost:8771 approval queue with score badges and Approve / Skip buttons.
Step 3 · Approval dashboard

One surface for the queue

Lightweight UI on localhost:8771 with score badges, links to generated materials, Approve / Skip / Unskip buttons, and auto-refresh. Designed to be the only surface I touch; everything else is automation.

FastAPI · Auto-refresh

Materials generation · qwen3:8b

Prompt 1: Tailor resume bullet points to this JD, keeping measured language and concrete metrics.

Prompt 2: Cover letter, first paragraph hooks on company mission, second on relevant experience, close on next-step.

Local · free · no API keys
Step 4 · Generate materials

Local LLM, two prompts, structured output

generate_materials.py takes an approved job and runs two prompts against qwen3:8b via Ollama: one for resume tailoring, one for cover letter. Free, local, no rate limits, no API keys to rotate.

Ollama · qwen3:8b
📁 Drop a Drive folder screenshot here: the cover-letters folder filling up with timestamped PDFs.
Step 5 · Upload to Drive

Tailored docs in dedicated folders

The cover letter is written into a copied Google Doc via gog docs write, exported as PDF, and uploaded to a dedicated cover-letters folder. Resume copies go to their own folder. Each job gets a discoverable paper trail.

Google Drive API · Docs API

The goal

Job hunting is a high-volume, high-rejection process where most of the work is repetitive: the same resume tweaked slightly, the same cover letter restructured for a new company, the same Greenhouse form filled out again. That's exactly the shape automation handles well.

The goal: a pipeline I trust enough to run unattended on ~50 LinkedIn jobs a day. It scores fit, generates tailored materials, drops everything into Drive, and surfaces the approve/skip decision through a dashboard. The hard part isn't the automation; it's making sure I never accidentally apply to something I'd be embarrassed about.

Approval dashboard · a glimpse
Score · Company · Role · Posted · Actions
87 · Anthropic · AI Operations Manager · 2h ago · Approve / Skip
82 · Stripe · Product Manager, Payments · 5h ago · Approve / Skip
71 · Notion · Tech PM, AI Workflows · 1d ago · Approve / Skip
58 · Big Co. · Generic Manager Role · 2d ago · auto-skipped

↻ Auto-refresh · score threshold = 70 · materials generation runs in background after approve

The process

Step 1 · Manual scoring

Spreadsheet first, automation later

Same rule again: don't automate something I haven't done manually. Scored ~30 LinkedIn jobs in a spreadsheet against a rubric: role match, seniority, comp, remote posture, AI/product weight. Got the rubric to where my gut and the score agreed ~85% of the time.
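
As a rough sketch, the rubric boils down to a weighted sum. The category keys below are illustrative, not the spreadsheet's actual column names, but the maximums match the rubric above.

    # Rubric as a function: role match 25, seniority 20, comp 20,
    # AI/product 20, remote 15. Keys are illustrative, not the real schema.
    RUBRIC_MAX = {
        "role_match": 25,
        "seniority": 20,
        "comp_signal": 20,
        "ai_product_weight": 20,
        "remote_posture": 15,
    }

    def score_job(subscores: dict) -> int:
        # Sum per-category points, clamping each to its rubric maximum.
        return sum(min(subscores.get(k, 0), cap) for k, cap in RUBRIC_MAX.items())

    # The walkthrough's Anthropic listing: 22 + 17 + 15 + 19 + 14 = 87.
    score_job({"role_match": 22, "seniority": 17, "comp_signal": 15,
               "ai_product_weight": 19, "remote_posture": 14})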

Step 2 · Job ingestion

Pull the listing into a structured record

Built a fetcher that takes a LinkedIn job ID, parses the posting into fields, and stores it. Once jobs are structured, scoring is just a function, and tailoring materials is just templating with substitutions.
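
A minimal sketch of that record, assuming a SQLite table and field names of my own choosing (the real schema may differ):

    import sqlite3
    from dataclasses import dataclass, astuple

    @dataclass
    class JobRecord:
        job_id: str
        company: str
        role: str
        posted: str
        description: str
        status: str = "queued"

    def save_job(conn: sqlite3.Connection, job: JobRecord) -> None:
        # One row per posting; scoring and templating read from these fields.
        conn.execute(
            """CREATE TABLE IF NOT EXISTS jobs
               (job_id TEXT PRIMARY KEY, company TEXT, role TEXT,
                posted TEXT, description TEXT, status TEXT)"""
        )
        conn.execute("INSERT OR REPLACE INTO jobs VALUES (?, ?, ?, ?, ?, ?)",
                     astuple(job))
        conn.commit()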

Step 3 · Score routing

Two queues: approved and skip

Scoring runs first. Above threshold → approved queue. Below → skip queue. Both are visible in the dashboard with Unskip buttons, because the rubric gets things wrong sometimes and I want the override to be one click.
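
The routing itself is tiny. A sketch using the threshold from the dashboard footer; the queue names are illustrative:

    SCORE_THRESHOLD = 70

    def route(job_id: str, score: int, queues: dict) -> str:
        # Above threshold -> approved queue; below -> skip queue.
        # Both stay visible in the dashboard so Unskip is one click.
        queue = "approved" if score >= SCORE_THRESHOLD else "skip"
        queues[queue].append(job_id)
        return queue

    queues = {"approved": [], "skip": []}
    route("4403624468", 87, queues)    # -> "approved"
    route("generic-role", 58, queues)  # -> "skip" (auto-skipped in the UI)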

Step 4 · Local LLM materials generation

Ollama + qwen3:8b, no API keys

generate_materials.py takes an approved job and runs two prompts against qwen3:8b via Ollama: one for the resume tailoring, one for the cover letter. Output is markdown. Free, local, no rate limits, no API keys to rotate.
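
A sketch of that call against Ollama's standard local /api/generate endpoint; the prompt text is paraphrased from the prompts above, and generate_materials.py's internals may differ:

    import requests

    OLLAMA_URL = "http://localhost:11434/api/generate"

    def run_prompt(prompt: str, model: str = "qwen3:8b") -> str:
        # Ollama's local REST API; stream=False returns a single JSON blob.
        resp = requests.post(OLLAMA_URL,
                             json={"model": model, "prompt": prompt, "stream": False},
                             timeout=300)
        resp.raise_for_status()
        return resp.json()["response"]  # markdown out

    def generate_materials(jd: str, resume: str) -> tuple:
        bullets = run_prompt(
            "Tailor these resume bullet points to the job description, keeping "
            "measured language and concrete metrics.\n\nJD:\n" + jd +
            "\n\nResume:\n" + resume)
        letter = run_prompt(
            "Write a cover letter: first paragraph hooks on the company mission, "
            "second on relevant experience, close on a next step.\n\nJD:\n" + jd)
        return bullets, letter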

Step 5 · Google Drive uploads

Tailored docs land in dedicated folders

The cover letter is written into a copied Google Doc via gog docs write, then exported as PDF and uploaded to a dedicated cover-letters folder. Resume copies go to their own folder. Each job gets a discoverable paper trail.
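
The Drive step shells out to gog. A sketch using only the subcommand and flag named in this write-up; the argument order is an assumption, so treat it as pseudocode for the real script:

    import subprocess

    def upload_pdf(pdf_path: str, folder_id: str) -> None:
        # Per Hiccup #1 below: the flag is --parent, not --parent-folder.
        subprocess.run(["gog", "drive", "upload", pdf_path, "--parent", folder_id],
                       check=True)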

Step 6 · Approval dashboard

Localhost UI with auto-refresh

Lightweight web UI on localhost:8771 showing the queue, scores, generated materials links, and Approve/Skip/Unskip buttons. Designed to be the only surface I touch; everything upstream and downstream is automation.
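
A minimal sketch of the backend shape, assuming an in-memory store and route names of my own; the real app renders HTML with score badges and auto-refresh:

    from fastapi import FastAPI

    app = FastAPI()
    JOBS = {"4403624468": {"company": "Anthropic", "score": 87, "status": "queued"}}

    @app.get("/queue")
    def queue():
        return JOBS

    @app.post("/approve/{job_id}")
    def approve(job_id: str):
        # Materials generation kicks off in the background after approve.
        JOBS[job_id]["status"] = "approved"
        return JOBS[job_id]

    # run with: uvicorn dashboard:app --port 8771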

Hiccups (and what I learned)

Hiccup #1 · gog CLI flag inconsistency

The gog drive copy and gog drive upload commands both use --parent, but I'd been using --parent-folder in my script. Failed silently in some cases, loudly in others.

Fixed the flag, then added a startup smoke test that exercises a copy + upload before the real pipeline runs. Cheap insurance against future CLI-flag drift.
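
The smoke test is small. A sketch, again assuming the --parent flag from this write-up (the real check also exercises gog drive copy):

    import subprocess, sys, tempfile

    def drive_smoke_test(folder_id: str) -> None:
        # Upload a throwaway file before the pipeline runs so flag drift fails loudly.
        with tempfile.NamedTemporaryFile(suffix=".txt", delete=False) as f:
            f.write(b"smoke test")
        result = subprocess.run(
            ["gog", "drive", "upload", f.name, "--parent", folder_id],
            capture_output=True)
        if result.returncode != 0:
            sys.exit("Drive smoke test failed: " + result.stderr.decode())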

Hiccup #2 · Cover letter writing to the wrong place

The cover letter content was being generated correctly but landing in the wrong section of the copied Google Doc; the template had a heading the script wasn't accounting for.

Switched to writing into an explicit anchor placeholder {{COVER_LETTER}} in the template, replacing it with the generated content. Templates with explicit anchors fail visibly when something's off; "write to position N" fails invisibly.
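
The equivalent Docs API call is a single replaceAllText request. The pipeline actually goes through gog docs write, so this is the underlying idea rather than the script's literal code; docs_service is a built googleapiclient Docs v1 client:

    def write_cover_letter(docs_service, doc_id: str, letter: str) -> None:
        # Replace the explicit anchor in the copied template with the generated text.
        docs_service.documents().batchUpdate(
            documentId=doc_id,
            body={"requests": [{
                "replaceAllText": {
                    "containsText": {"text": "{{COVER_LETTER}}", "matchCase": True},
                    "replaceText": letter,
                }
            }]},
        ).execute()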

Hiccup #3 · qwen3:8b vs cloud LLM quality gap

Local models are weaker than Claude/Gemini for live tool use and nuanced writing. Cover letters from qwen3:8b alone were good enough for a first draft but read as templated.

Two-pass approach: qwen3:8b generates the structured first draft (free, fast, local). Final polish pass is optional and uses a cloud model only when I'm shortlisting. Most of the volume runs on the local pass; only the high-score jobs get the cloud spend.
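
The split is just a branch. A sketch with the model calls passed in as callables; the cloud polish hook is hypothetical, not a function in the pipeline:

    def make_cover_letter(prompt: str, shortlisted: bool, local_llm, cloud_llm) -> str:
        # Local pass on every job: qwen3:8b via Ollama, free and fast.
        draft = local_llm(prompt)
        if shortlisted:
            # Cloud polish only for the handful of high-score jobs per day.
            return cloud_llm(draft)
        return draft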

Hiccup #4 · LinkedIn doesn't want to be automated

Anything that looks like browser automation against linkedin.com gets challenged or rate-limited. Direct submission was the dream; it wasn't going to happen safely.

Pipeline stops at "materials ready in Drive + dashboard approval" today. The auto-apply step (apply_job.py) targets the downstream ATS systems (Greenhouse, Ashby, Lever, Easy Apply), which are more automation-friendly. LinkedIn stays manual on the click; the ATS form fills automate.
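
When apply_job.py lands, ATS detection can key off the application URL's host. These hostname patterns are common for each system but are my assumption, not pulled from the script:

    from urllib.parse import urlparse

    ATS_HOSTS = {
        "boards.greenhouse.io": "greenhouse",
        "jobs.ashbyhq.com": "ashby",
        "jobs.lever.co": "lever",
        "www.linkedin.com": "easy_apply",  # Easy Apply lives on linkedin.com itself
    }

    def detect_ats(apply_url: str) -> str:
        return ATS_HOSTS.get(urlparse(apply_url).netloc.lower(), "unknown")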

How I iterated

v0 · Spreadsheet rubric

Manual scoring against criteria

Calibrated the rubric by scoring real jobs and comparing to gut.

v0.5 · Scored ingestion + Drive uploads

Pipeline ends at "files in Drive"

Resume + cover letter generated and uploaded; manual application from there.

v1 · Approval dashboard

Single surface for the queue

Localhost UI, auto-refresh, Approve/Skip/Unskip.

v1.x · Final e2e test

End-to-end run with a real job ID

Run generate_materials.py --job-id 4403624468 --skip-browser, confirm the cover letter writes to the right Doc, and check that the PDF uploads cleanly. Last gate before v2.

v2 · Auto-apply

Downstream ATS submission

The next big build: apply_job.py that submits to Greenhouse / Ashby / Lever / Easy Apply directly. LinkedIn stays manual on the click.

Tools used

Ollama · local LLM runtime
qwen3:8b · tailoring + cover letter
Python · pipeline glue
FastAPI · dashboard backend
Google Drive API · uploads + folder routing
Google Docs API · cover letter writes
gog CLI · Google ops shorthand
OpenClaw · automation runtime
SQLite · job queue persistence
Claude Code · build agent

Roadmap

  • NEXT
    Final end-to-end test.

    Run generate_materials.py against a real job ID. Confirm cover letter content writes correctly to the Google Doc and the PDF lands in the cover-letters folder.

  • v2
    Build apply_job.py.

    Detect ATS type (Greenhouse / Ashby / Lever / Easy Apply) and submit directly. Resume PDF + cover letter PDF + structured field fill.

  • v2
    Per-company memory.

    Track which companies I've applied to, when, with what materials, and the outcome. Avoid re-applying to the same role; learn which roles convert.

  • LATER
    Two-pass writing: local draft + cloud polish.

    Local model on every job (free volume); cloud model on the top 5/day for final polish (cheap, justified).

  • LATER
    Outcome tracking + rubric tuning.

    Score → application → response data. Use it to retrain the scoring rubric so the queue gets sharper over time.

