How a growth engineer used Claude Code, browser automation, and a Tinder-style swipe tool to turn job searching into a data problem — and what 162 European startups revealed about the hiring landscape.

The Recruiter’s Inbox Problem

I’ve shipped three products to exit, led teams from zero to a million users, and been featured in an MIT course taken by 50,000 students. I write TypeScript and C++, build scraping pipelines for fun, and have eight years of React under my belt. And I still can’t figure out where to apply for a job. This isn’t a competence problem. It’s a systems problem.

And it’s not just my problem — 72% of job seekers report negative mental health impacts from long hiring processes, and 66% say they feel burned out from job searching altogether. The system is failing everyone, but it fails senior engineers with specific criteria in a particular way: by burying them in noise.

Gergely Orosz spoke with 30+ hiring managers for his 2025 market analysis and found that only 10% of inbound applications are qualified. Not “strong candidates” — qualified. Meeting the bare minimum requirements listed in the job ad. One recruiter told him that out of 150+ inbound messages, exactly one person was genuinely excellent technically. A hiring manager at a publicly traded fintech who hired 30+ engineers in 2025 reported that inbound candidates made up only 10% of actual hires — the rest came through sourcing and referrals. Meanwhile, LinkedIn is processing 11,000 applications per minute, up 45% year-over-year. One recruiter for a senior frontend position received 600 applications in two days, 150 in the first few hours alone.

Daniel Chait, CEO of Greenhouse, calls this the “AI doom loop.” Both sides weaponized AI at the same time. Candidates use GPT to spray-and-pray optimized resumes — 75% of U.S. job seekers now use AI to polish applications. Employers respond with more aggressive automated screening. Candidates adapt with prompt injections (41% admit to doing this). Quality signals degrade on both sides. Trust collapses. Nearly 50% of job seekers report decreased trust in hiring, and only 8% believe AI screening makes the process fairer.

The mechanics are vicious. Candidates mass-apply with AI-optimized resumes. Employers get flooded with generic applications. They deploy more aggressive AI screening to cope. Candidates adapt with more sophisticated prompt engineering. The cycle repeats, and each iteration erodes trust further.

Ninety percent of employers now report more spam applications, and some recruiters say volume went from thousands per month to thousands per day. New auto-apply services let candidates pay a few dollars to spray hundreds of applications overnight. Jamie Kohn, Senior Director of Research at Gartner, put it bluntly: the technology designed to make the process better is making it worse. The result?
| Metric | Stat |
| --- | --- |
| Job applications with no response | 75% |
| Decrease in response rate vs 2021 | 3x less likely |
| Average time to fill senior role | 42-50 days |
| Hiring managers who admit to ghosting | 80% |
| Interviews per hire (2021 → 2025) | 14 → 20 (+42%) |
75% of job applications vanish with no response. Applicants are three times less likely to hear back than they were in 2021. The average senior engineering role takes 42-50 days to fill. And the people running the process know it’s broken — 80% of hiring managers admit to ghosting candidates. Hiring teams now conduct 42% more interviews per hire than they did in 2021 — 20 per hire instead of 14 — because nobody trusts the signal anymore.

Here’s the thing about being a senior engineer with specific criteria: you don’t want “a job.” You want ownership of a revenue-adjacent problem at a funded European startup in an interesting vertical that isn’t fintech. Every criterion halves the pool, so five requirements leave roughly 3% of the original space (0.5^5 ≈ 3.1%). The tools designed for volume — LinkedIn, Indeed, Wellfound — show you everything except what matters. They’ll surface ten thousand jobs and none of them are right.

So I did what any engineer would do. I built something.

What I Was Actually Looking For

Prague, February 2026. I’m a full-stack engineer with 15 years of experience, three exits, and a very specific problem: I know exactly what I want, and nothing on the market is designed to help me find it.

The criteria started simple and got narrow fast. I wanted a role with real ownership — not ticket execution, but a problem I could adopt as my own. Something revenue-adjacent where I could see the impact of what I built. A funded startup or scale-up that valued engineering-led growth. European-based, because I’ve lived in Prague for eight years and the timezone matters. And the vertical had to be interesting: music technology, games, events, travel, creative tools — something I’d actually care about on a Saturday morning. Hard no on fintech and generic B2B SaaS.

This is what happens when you accumulate enough experience to know what “good” looks like. After three exits, you know the difference between a leadership team that trusts engineers and one that treats them as ticket machines. After eight years of React and fifteen years of shipping products, salary and title stop being the primary filters. What matters is autonomy, technical relevance, the quality of the people above you, and whether the problem space is genuinely interesting. SignalFire’s engineering talent report confirms this pattern — culture, clarity, challenge, and trust in leadership are the main drivers of retention for senior engineers, not compensation. The more experienced you are, the narrower your definition of “right” becomes, and the less useful any tool designed for volume will ever be.

That last criterion — interesting verticals only — is the one that makes traditional job searching useless. LinkedIn’s algorithm doesn’t know the difference between “music tech startup with a creative engineering culture” and “enterprise middleware company with a Spotify integration.” The existing tools optimize for volume, not specificity.
57% of EU firms report they can’t find qualified tech staff, and yet the qualified staff can’t find them either. The market is experiencing what LeadDev calls “experience compression” — entry-level roles demand senior portfolios, mid-level roles want lead experience, and actual leadership positions expect innovation track records. Everyone is squeezed into an increasingly narrow band.

The European landscape makes this harder. Atomico’s State of European Tech 2025 reports $58 billion in venture funding across the continent, with more than 27,000 new founders — the highest number on record. Public and private European tech companies are now worth nearly $4 trillion — a fourfold increase over a decade. But the Ravio 2026 Compensation Trends Report shows European tech hiring sitting at 29%, stable but selective. Entry-level hiring collapsed 73%. The market wants specialists, and AI/ML hiring grew 88% while generalist full-stack demand held steady. Germany is the only major European market showing positive hiring growth. The UK is down 21%, France down 28%. Meanwhile, 36% of European VC dollars went into deeptech companies in 2025, up from 19% in 2021 — the money is flowing toward hardware, biotech, and AI research, not traditional web products. For a growth engineer who uses AI tools but isn’t an ML researcher, the positioning challenge is real: the market’s hottest category isn’t quite your category.

Prague specifically tells an interesting story. There are 470+ startups, 60,000 IT specialists, and 5,700 new ICT graduates every year in the city, and companies like Mews, Apify, and Kiwi.com have put Czech tech on the global map. But Czech Founders reports that venture capital investment sits at 0.07% of GDP — less than half the European average of 0.17%, and a fraction of the UK’s 0.35%. Pension funds essentially don’t invest in startups, and 25% of Czech founders consider relocating abroad due to capital constraints. The city produces great engineers but struggles to fund the ambitious startups those engineers want to join. The best opportunities might not be local.

So the problem was clear: I needed to find 160+ companies across Europe that matched a very specific and personal set of criteria, evaluate them against dimensions that no job board tracks — culture, reputation, growth trajectory, tech stack fit — and do it without spending three months manually browsing individual careers pages.

Swiping Blind on 162 Companies

The first research sprint produced 101 companies in ten minutes. Three parallel AI agents searched Wellfound, remote job boards, EU startup databases, and company career pages. They came back with names, locations, one-line descriptions, funding stages, and rough fit scores. Music tech startups in Berlin. Gaming studios in Prague. Event platforms in Barcelona. Creative tools companies across Scandinavia.

And the data was almost useless. Not because it was wrong — the companies were real, the locations accurate, the funding data correct. The problem was that a one-line description like “AI music discovery” or “entertainment sound platform” gives you nothing to make a decision on. Is the product real? Does anyone use it? Are they hiring engineers or data scientists? Is the engineering culture good or toxic? Are they funded enough to survive the next 18 months?

Research from Qlik shows that 81% of companies struggle with AI data quality. A Gartner study found that winning AI programs earmark 50-70% of their timeline and budget for data readiness — extraction, normalization, governance. The pattern transfers directly: when you automate data collection without automating comprehension, you shift the bottleneck from “finding information” to “understanding information.” I had 101 companies and zero ability to say yes or no to any of them.

The standard approach to company evaluation doesn’t scale either. The Interview Guys’ 2025 report found that 58.9% of job seekers explore a company’s website, 34.7% check Glassdoor, and only 22.3% reach out to current or former employees. That’s fine for evaluating five companies. When you have 101, each of those steps becomes a full-time job. And the platforms themselves are fragmented — LinkedIn for professional networks, Glassdoor for reviews, Crunchbase for funding data (starting at $49/month for API access), GitHub for engineering culture. Each tool offers partial signal. No single tool gives you the full picture, and combining them manually doesn’t scale. Experts estimate that up to 70% of job opportunities come from the hidden job market — positions filled through referrals, networking, or internal hiring — which means the companies most worth working at may never appear on any job board at all.

This is where the swipe tool came in. I built a Tinder-style review interface inside the existing Next.js monorepo — one card at a time, arrow keys for yes/no/maybe, company logos pulled from Google’s favicon API, decisions persisted to a JSON file on disk. The psychology is sound: Hick’s Law tells us that decision time increases logarithmically with choices. Reducing 101 simultaneous options to a sequence of binary decisions eliminates the paralysis of staring at a spreadsheet.

The decision science backs this up. The paradox of choice tells us that the more options we have, the less satisfied we feel with our decision — the cognitive effort of evaluating 101 unfamiliar companies simultaneously leads to avoidance, not action. A study of job-searching college students found that maximizers who examined every option selected jobs with 20% higher salaries but felt less satisfied, more stressed, and more regretful than satisficers who used threshold-based decisions. The swipe interface enforces satisficing: you see one company, you make one call, you move on.

But the swipe tool did something I didn’t expect: it became a forcing function for data quality. When a card showed up with just “Music rights management” and a fit score of 3, I couldn’t swipe with confidence. I didn’t know enough. Was this a three-person startup running on angel money, or a profitable company with 200 employees? Was the founder a music industry veteran or a fintech pivot? The tool didn’t solve the research problem — it exposed exactly where the research was thin. Every hesitation was a data gap. Every “maybe” was a missing signal. So I started adding layers.
First, live website screenshots — a company’s homepage tells you immediately whether you’re looking at a real product or a landing page that hasn’t been updated since their seed round. Then Hacker News mentions via the Algolia API, showing whether the tech community had ever discussed these companies — and more importantly, what they said. A company with zero HN mentions is invisible; a company with twenty mentions and flame wars in the comments tells a different story. Then one-click research links to Google News, LinkedIn, Glassdoor, and GitHub. Each layer added signal. Each layer revealed what was still missing.

The biggest gap was reputation. A company can score 5/5 on fit and still be a nightmare if they’re doing rolling layoffs or have a toxic CEO. Fit tells you “do I want to work there?” Reputation tells you “would it be safe to work there?” I needed both dimensions.
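Mechanically, the swipe loop is small enough to sketch: one card, three verdicts, undo with history. A minimal TypeScript version (types and field names are illustrative, not the tool's actual code):

```typescript
// Minimal sketch of the swipe loop: one card at a time, yes/no/maybe
// verdicts, and backspace-style undo with full history.
type Verdict = "yes" | "no" | "maybe";

interface SwipeState {
  index: number;                       // which card is showing
  decisions: Record<string, Verdict>;  // companyId -> verdict
  history: string[];                   // decided companyIds in order, for undo
}

export function decide(state: SwipeState, companyId: string, verdict: Verdict): SwipeState {
  return {
    index: state.index + 1,
    decisions: { ...state.decisions, [companyId]: verdict },
    history: [...state.history, companyId],
  };
}

export function undo(state: SwipeState): SwipeState {
  const last = state.history[state.history.length - 1];
  if (last === undefined) return state; // nothing to undo
  const { [last]: _, ...rest } = state.decisions;
  return {
    index: state.index - 1,
    decisions: rest,
    history: state.history.slice(0, -1),
  };
}
```

In a tool like the one described, the `decisions` object is what would be persisted to a file such as decisions.json via an API route, rather than held in localStorage.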

Building the Machine

The system has five layers, and each one exists because the previous one broke.

Layer one: the research agents. Claude Code was the orchestration layer — not just an autocomplete tool, but an agent that could take a research brief like “find Series A+ European startups in music tech,” execute multi-step workflows across web searches and career pages, and produce structured output. The AI2 Incubator’s 2025 assessment of AI agents notes that they’re still “very much prototypes” for unsupervised use — but they excel at semi-supervised research workflows where a human reviews outputs. That’s exactly how I used them. I built custom “skills” — reusable research workflows that could be invoked like functions. The first skill handled company discovery: search, evaluate, score fit, check reputation, write to three synchronized data stores (TypeScript for the swipe tool, markdown for detailed dossiers, CSV for structured analysis). Three parallel agents could research 20-28 companies each in a single pass. Over six research rounds, covering everything from mainstream categories like music tech and gaming to deliberately niche searches — anti-cheat companies, autonomous drone startups, biohacking platforms — the agents collectively evaluated hundreds of candidates and surfaced 162 that scored 3/5 or higher on fit.

The profile scraping sprint was a precursor to all of this. Before researching companies, I needed to know what I was working with. Three concurrent browser agents scraped my Contra portfolio, withSeismic consultancy site, and LinkedIn simultaneously. GitHub data came via the API instead of browser scraping — structured data beats parsing rendered HTML. The result: a complete asset library with every project, testimonial, tech stack, and metric instantly accessible to the agents that would later craft outreach.

Layer two: the swipe tool.
Built in the existing Next.js monorepo with zero new dependencies — just React components, the existing Tailwind setup, and a JSON file read from the API route. Each card shows a logo (pulled from Google’s favicon API, which is free and keyless), company name, role, location, work model, salary, fit score badge, and notes. Arrow keys for rapid decisions, backspace to undo with full history. A progress bar shows reviewed count and running yes/maybe tallies. Decisions write to a decisions.json file on disk — not localStorage, because a file is more useful downstream. Other agents can consume it, it can be committed to the repo, you can diff it.

Layer three: the reputation scoring system. This is where the research got serious. I ran three parallel agents, each handling a batch of companies, searching Glassdoor reviews, layoff trackers, news articles, and employee forums. The output: an A-F reputation rating for every company with enough public data. An “A” meant strong Glassdoor scores, no recent layoffs, positive press, and visible employee advocacy. An “F” meant major scandals, mass layoffs, or financial distress. Most companies landed somewhere in the middle — the B-C range where you need to read the details before making a call.

Why not just use Glassdoor? Because Glassdoor scores are structurally compromised. An Originality.AI study found that AI-generated reviews on the platform surged 376.3% between 2022 and 2024 — nearly 1 in 3 reviews on S&P 500 company pages is now likely AI-written. A Wall Street Journal investigation identified over 400 companies with suspicious rating spikes, including SpaceX offering branded mugs for positive reviews and CEOs explicitly organizing review campaigns. A Journal of Management study found that 68% of HR professionals admitted to actively managing their company’s online reputation on review platforms — flooding with positive reviews timed around recruiting campaigns, using third-party reputation services.
When the platform that’s supposed to hold companies accountable is funded by those same companies, the incentive structure is broken. The reputation system cross-references multiple signals: Glassdoor trend (not just the number), layoff history, CEO approval trajectory, news coverage, financial stability. The combination creates a 2D map — fit score on one axis, reputation on the other. High-fit, high-reputation companies are priority targets. High-fit, low-reputation companies are warnings. The map changed my shortlist dramatically — several companies I would have applied to blind ended up with D or F ratings once the full picture emerged.

Layer four: the screenshot service. I wanted every company card to show a live screenshot of their website. Started with thum.io — “image not authorized.” Switched to WordPress mShots — never loaded. Two SaaS services, both broken. So I complained to Claude — “these screenshot services are both broken, can we just build our own?” — and had a working Puppeteer screenshot API in under two minutes. Not two hours. Not two sprints. Under two minutes, one-shot, and it just worked. Fifty lines of code. Headless Chrome launches, captures a 1280x800 screenshot, saves it to disk, serves it as a static file on subsequent requests. First capture takes a few seconds; every repeat view is instant.
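The shape of that service is easy to sketch. In this version the Puppeteer capture is injected as a function so the disk-cache behavior stands alone; names and paths are illustrative, not the actual fifty lines:

```typescript
import { existsSync } from "node:fs";
import { writeFile } from "node:fs/promises";
import path from "node:path";

// The capture function is injected: in a real service it would be Puppeteer
// launching headless Chrome and screenshotting `url` at 1280x800.
type Capture = (url: string) => Promise<Buffer>;

// Derive a stable filename from the URL so repeat requests hit the cache.
export function cachePath(dir: string, url: string): string {
  const slug = url.replace(/^https?:\/\//, "").replace(/[^a-z0-9.-]/gi, "_");
  return path.join(dir, `${slug}.png`);
}

// First request captures and writes to disk; every later request is a cache hit
// that can be served as a static file.
export async function screenshot(
  dir: string,
  url: string,
  capture: Capture
): Promise<{ file: string; cached: boolean }> {
  const file = cachePath(dir, url);
  if (existsSync(file)) return { file, cached: true };
  const png = await capture(url);
  await writeFile(file, png);
  return { file, cached: false };
}
```

With Puppeteer, `capture` would roughly be: launch the browser, `page.setViewport({ width: 1280, height: 800 })`, navigate, then `page.screenshot()` to get the buffer.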
This was a micro case study in something that feels like the beginning of a genuine paradigm shift in software development. Two years ago, you’d evaluate screenshot services, compare pricing tiers, maybe write an API wrapper, file a support ticket when the free tier broke. Today the conversation was literally: “These don’t work, build it.” And Claude did — correctly, on the first try, with a better result than either paid service would have provided.

Controlled studies show AI coding assistants cut development time by more than 50%, but that understates what’s actually happening. It’s not that the same work takes less time. It’s that the entire build-vs-buy equation has inverted. A lot of SaaS tooling exists because building the thing yourself used to be too much hassle. When the build cost collapses to two minutes of conversation, the calculus flips completely. We have the benefit of local-first development — everything runs on our machines, no deployment pipeline, no infrastructure to manage — but this pattern extends far beyond screenshots. Every time a SaaS tool broke or fell short during this project, the answer was “just build the replacement,” and the replacement was usually better. Ironically, one blog post argues you shouldn’t build your own Puppeteer screenshot API because of the complexity — but their argument assumes you’re building a production SaaS. For a personal research tool processing 162 companies, the complexity is trivial and the alternative is paying for something that doesn’t work.

Layer five: the live research panel. Every company card now has a panel beneath it showing a website screenshot, the top five Hacker News mentions (via the Algolia API — free, 10k requests/hour), and one-click links to Google News, LinkedIn, Glassdoor, and GitHub. Results are cached to a JSON file on disk, same philosophy as everything else in this system: files over ephemeral state, human-readable and diffable.
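The HN lookup itself is a single GET against Algolia's public Hacker News search endpoint. A sketch of the query construction and response handling, with the fetch injected so the parsing can run offline (the `fetchFn` injection is illustrative; `title`, `points`, and `objectID` are real fields in Algolia's response):

```typescript
// Algolia's public Hacker News search endpoint (free, no API key needed).
const HN_SEARCH = "https://hn.algolia.com/api/v1/search";

export function hnQueryUrl(company: string, limit = 5): string {
  const params = new URLSearchParams({
    query: company,
    tags: "story",
    hitsPerPage: String(limit),
  });
  return `${HN_SEARCH}?${params}`;
}

interface HnMention {
  title: string;
  points: number;
  url: string;
}

// fetchFn is injected so the parsing can be exercised without the network;
// in practice it would be the global fetch, with results cached to JSON on disk.
export async function topMentions(
  company: string,
  fetchFn: (url: string) => Promise<{ hits: { title: string; points: number; objectID: string }[] }>
): Promise<HnMention[]> {
  const { hits } = await fetchFn(hnQueryUrl(company));
  return hits
    .sort((a, b) => b.points - a.points)
    .map((h) => ({
      title: h.title,
      points: h.points,
      url: `https://news.ycombinator.com/item?id=${h.objectID}`,
    }));
}
```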
The entire data architecture followed one principle: everything on disk, nothing in browser state. Decisions go to decisions.json. Company data lives in three synced files. Screenshot images sit in a local directory. HN mentions cache to JSON. The result is a system where every piece of state is versionable, diffable, and consumable by other agents — which matters when you have ten Claude threads running simultaneously and any of them might need to read another’s output.
The whole stack — five layers, six research rounds, 162 companies scored and rated — was built and operated in about 45 minutes and two cups of tea. My girlfriend Inna and I, both software engineers, worked from two MacBooks connected to a shared code-server. At peak, ten concurrent Claude threads were running: some doing research, some building features, some debugging. It’s pair programming, but instead of two people on one problem, it’s two people directing ten parallel AI streams across different problems.

Forty-Five Minutes and an Outreach Machine

The research pipeline was running. 162 companies scored, rated, and loaded into the swipe tool. But a database of companies is a phonebook — useful only if you know who to call and what to say. The next phase took forty-five minutes and two cups of tea.

I’d swiped through 48 companies manually — arrow keys, gut reactions, one card at a time. The decisions followed patterns I hadn’t consciously articulated. Music tech and gaming companies got consistent yeses. Anything with a D or F reputation rating got an instant no. Companies whose tech stack didn’t overlap with mine got rejected regardless of how interesting the domain was. B2B SaaS, fintech, crypto — no, no, no. Agencies with interesting clients — yes. The swipe tool wasn’t just capturing decisions. It was generating training data.

So we applied those patterns algorithmically to the 96 companies I hadn’t reviewed yet. Music tech with a B or better reputation? Yes. Hardware-focused with no web stack? No. Generic enterprise tools? No. Prague-based with strong culture ratings? Yes. In minutes, the system classified all 96 remaining companies using my own decision logic. Final tally: 144 companies reviewed — 50 yes, 14 maybe, 80 no.

Fifty companies that survived needed prioritization. Not every “yes” deserves the same investment. Apify — a Prague-based scraping and automation company where my literal specialty is their product category, with a 4.8 Glassdoor and zero layoffs — deserves a fundamentally different approach than a Tier 3 gaming studio where I’d be one of two hundred applicants. So we built a tier list.
Full investment: deep-dive intelligence dossier, proof-of-work project, personalized outreach to specific people by name.
| Company | Location | Why |
| --- | --- | --- |
| Apify | Prague | Scraping and automation is my literal specialty |
| Photoroom | Paris | Glassdoor 4.9, culture rated 5.0 |
| Framer | Amsterdam | Former client, warm intro possible |
| Mews | Prague | $1.2B unicorn, hospitality tech |
| Overwolf | Gaming | Maps directly to the Source 2 project I’m building on weekends |
| Bakken & Baeck | Oslo / Amsterdam / London / Barcelona / Bonn | 4.7 Glassdoor, 100% recommend, 5.0 work-life balance |
| Tractive | Austria | Four-day work week, actively hiring Senior Full-Stack, 3hrs from Prague |
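The rules that classified the 96 unreviewed companies amount to a small ordered rule list. A sketch of that approach, with fields and thresholds invented for illustration rather than taken from the actual decision logic:

```typescript
// Encode the observed swipe patterns as explicit rules, applied in order.
type Verdict = "yes" | "no" | "maybe";

interface Company {
  vertical: string;                        // e.g. "music-tech", "gaming", "fintech"
  reputation: "A" | "B" | "C" | "D" | "F"; // letter grades compare lexicographically
  webStack: boolean;                       // does the role overlap a TypeScript/React stack?
  location?: string;
}

export function classify(c: Company): Verdict {
  if (c.reputation === "D" || c.reputation === "F") return "no"; // instant no
  if (!c.webStack) return "no";                                  // stack mismatch
  if (["fintech", "crypto", "b2b-saas"].includes(c.vertical)) return "no";
  // "A" <= "B" and "B" <= "B" hold under string comparison, so this
  // reads as "reputation B or better".
  if (["music-tech", "gaming"].includes(c.vertical) && c.reputation <= "B") return "yes";
  if (c.location === "Prague" && c.reputation === "A") return "yes";
  return "maybe";
}
```

Each rule is a frozen version of a gut reaction the swiping surfaced, which is what makes the manual pass worth doing first: you cannot write the rules until you have watched yourself decide.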
The tiering also revealed geographic and vertical clusters that make the outreach efficient. Fourteen music tech companies share enough domain overlap that one well-crafted music demo covers a huge surface area. Seven gaming companies are all addressed by the Source 2 modding platform I’m already building. Six creative agencies respond to the same portfolio narrative. Proof-of-work scales by cluster, not by company — which means seven proof-of-work projects can reach fifty companies.

Then we built the pipeline. Three new Claude Code skills, each feeding the next like stages in a compiler. The first — a company deep-dive skill — takes a company name and produces a full intelligence dossier: current job openings, recent news, tech stack analysis, two to five key contacts with LinkedIn profiles, pain points mapped to my specific experience, outreach angles, and a first-ninety-days sketch. The second — an outreach strategy skill — reads the dossier and produces a personalized approach: which channels to use, who to contact first, what proof-of-work to create, and a sequenced outreach plan. The third — a craft-outreach skill — reads the strategy and produces ready-to-send messages in my actual voice, with CV tailoring notes, portfolio selection, and follow-up drafts. Each skill checks its prerequisites: strategy requires a dossier, outreach requires a strategy. The pipeline goes from “I’ve never heard of this company” to “here’s a personalized message from someone who clearly understands your product” in about fifteen minutes of agent execution.

We ran the deep-dive skill across all fifty companies at once. Ten parallel agents, each handling a batch of three to seven companies. Tier 1 got two dedicated agents for the deepest research. Tier 2 and Tier 3 were distributed across the remaining eight. Forty-four out of fifty dossiers completed before the agents hit the API usage ceiling. Two follow-up agents caught the remaining six.

Result: 51 intelligence dossiers — each one the equivalent of a few hours of manual LinkedIn stalking, careers-page reading, and news searching, compressed into minutes by agents that don’t get bored or forget to check the engineering blog. The dossiers surfaced things that no amount of database scanning or fit scoring would have revealed:

Raycast

Tier 3 → Tier 1 candidate. “Design Engineer” role is actually senior web engineering — EUR 100-135k, fully remote from Prague, Next.js/TypeScript with Radix Primitives. Open-source repo ray-so has 2,200 stars and 7 open issues — a ready-made contribution path.

Musixmatch

Two hidden roles invisible from surface data: Frontend React Developer and Senior Backend JavaScript Engineer. Profitable company doing nearly $68M in revenue.

Moises AI

No web role posted, but $40M Series A just closed. Open-source project openDAW — a TypeScript web-based DAW updated 2 days before discovery — the single highest-value OSS contribution opportunity across all 50 companies.

Spitfire Audio

Tier 3 → Tier 2. Dossier revealed their WebView architecture bridges directly to web engineering expertise — a connection invisible from the outside.
These discoveries fed the outreach strategy — a playbook of six approaches, each calibrated to a different company type and a different kind of proof:

Growth Audits

30-minute teardowns of a target company’s public product, delivered as a one-page PDF alongside the application.

Open-Source Contributions

Submit a PR to their codebase, then apply as someone who already shipped code in their repository.

Micro-Demos

2-4 hour builds: Three.js experiments for creative agencies, API-powered toys for voice AI companies, Source 2 modding platform for gaming studios.

What I'd Build Documents

One-pagers outlining the first 90 days at a specific company, referencing their actual product and actual gaps.

Forward Deployed Engineer Pitch

Reframing a decade of consultancy across Contra, Sky, Groupon, and MIT as the parachute-in-and-ship pattern agencies need.

This Article

Reaches all 162 companies at once while demonstrating the exact growth engineering thinking the role requires.
The last piece was making the research browsable. We added dynamic company detail pages to the web app — every one of the 162 companies now has its own URL, showing the full profile, the yes/no/maybe decision, and the rendered dossier in one place. Click a company name in the swipe tool, read the complete intelligence file, click back. The dossiers went from raw markdown files in a directory to a browsable research interface that makes the outreach phase feel like working from a CRM, not a filesystem.

Forty-five minutes earlier, we’d had a database. Now we had a machine — tiered priorities, intelligence dossiers, a three-stage skill pipeline, six outreach approaches, hot opportunities surfaced from the data, and a browsable UI pulling it all together. The research found the companies. The machine was how we’d approach them.
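The prerequisite checks in the three-skill pipeline (dossier before strategy, strategy before outreach) reduce to a tiny dependency gate. A sketch under assumed stage names:

```typescript
// Each outreach stage depends on the artifact produced by the one before it:
// deep-dive produces a dossier, strategy needs a dossier, outreach needs a strategy.
type Stage = "dossier" | "strategy" | "outreach";

const PREREQ: Record<Stage, Stage | null> = {
  dossier: null,
  strategy: "dossier",
  outreach: "strategy",
};

export function canRun(stage: Stage, artifacts: Set<Stage>): boolean {
  const needed = PREREQ[stage];
  return needed === null || artifacts.has(needed);
}

// Walk a company through the pipeline, skipping stages whose inputs are missing.
export function runPipeline(stages: Stage[], artifacts: Set<Stage> = new Set()): Stage[] {
  const ran: Stage[] = [];
  for (const s of stages) {
    if (!canRun(s, artifacts)) continue; // prerequisite not met, refuse to run
    artifacts.add(s);
    ran.push(s);
  }
  return ran;
}
```

The gate is what keeps ten parallel agents honest: an agent asked to craft outreach for a company with no strategy artifact simply refuses instead of hallucinating around the gap.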

What 162 Companies Actually Taught Me

The data told three stories I wasn’t expecting.

The first: Glassdoor scores lie, systematically. Not occasionally, not at the margins — at the core of how the platform works. The most dramatic example is Bending Spoons, an Italian software company with a 4.7/5 Glassdoor rating and 97% of employees recommending it. Sounds like a dream employer. Here’s their acquisition history: Evernote — 129 employees fired. Filmic — entire workforce terminated. Mosaic Group — all 330 staff cut immediately. WeTransfer — 75% of staff fired. Vimeo — majority of workforce laid off, including the entire video team. The pattern is ruthlessly consistent: acquire a well-known brand, fire nearly everyone, operate it with a small team from Milan.

The overall Glassdoor score is “real” in the narrowest sense — The Pragmatic Engineer profiled them and confirmed the internal team rates the experience positively. But the reviews come from the survivors, not the hundreds of people fired from acquired companies. It’s survivorship bias in star-rating form. This pattern repeated across the database:
| Company | Glassdoor | Reality |
| --- | --- | --- |
| Bending Spoons | 4.7 (97% recommend) | Evernote: 129 fired. Filmic: all fired. Mosaic: 330 fired. WeTransfer: 75% fired. Vimeo: majority fired. |
| Spitfire Audio | 4.4 | ~25% staff cut in 2023, co-founder departed |
| Ableton | 3.2 | 20% workforce reduction |
| Epidemic Sound | 2.8 | $182M revenue, rolling layoffs every 6-8 weeks |
| Productboard | — | Wave layoffs so severe employees describe culture as “trauma bonding” |
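This is the cross-referencing idea from the reputation layer in code form: start from the Glassdoor number, then let the harder-to-fake signals drag it down. Weights and thresholds are invented for illustration:

```typescript
// Cross-reference multiple public signals instead of trusting one number.
interface Signals {
  glassdoor: number;      // raw score, 1-5
  reviewTrend: number;    // recent-vs-older review delta, negative = worsening
  recentLayoffs: boolean; // any layoffs in the last ~18 months
  negativePress: boolean; // scandals or financial-distress coverage
}

export function reputationGrade(s: Signals): "A" | "B" | "C" | "D" | "F" {
  // Start from the Glassdoor score, then penalize the signals it can't fake.
  let score = s.glassdoor;
  if (s.reviewTrend < 0) score -= 0.5; // high score but worsening: suspicious
  if (s.recentLayoffs) score -= 1.0;   // a 4.7 with mass layoffs is a warning, not an A
  if (s.negativePress) score -= 1.0;
  if (score >= 4.2) return "A";
  if (score >= 3.5) return "B";
  if (score >= 2.8) return "C";
  if (score >= 2.0) return "D";
  return "F";
}
```

Under this toy weighting, a Bending-Spoons-shaped profile (4.7 stars, layoffs, negative press) lands at D rather than A, which is the whole point of combining signals.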
The structural conflict runs deeper than bad actors. Glassdoor’s business model relies on revenue from the very companies being reviewed — employer branding products, job listings, premium company profiles. The platform that’s supposed to hold employers accountable is funded by those employers. The WSJ found that during review manipulation surges, five-star ratings made up 45% of reviews, compared to just 25% in the preceding six months. Glassdoor claims to reject 5-10% of reviews for violating guidelines, but the actual manipulation rate appears much higher.

The lesson: never trust a single number. Cross-reference Glassdoor with layoff history, news coverage, financial trajectory, and the trend of reviews over time. A high score with recent mass layoffs is more concerning than a moderate score with stable employment.

The second finding: the companies I found most exciting weren’t the ones that needed my skills. Robot bartenders, autonomous drones, brain-computer interfaces, satellite imaging — genuinely thrilling technology. I spent an entire research round deliberately hunting for the weird and outlandish: game modding platforms, generative audio startups, biohacking companies, space tech. The stranger the company, the more excited I got reading about them. But they need embedded systems engineers, hardware specialists, and ML researchers. Not TypeScript/React developers. The companies that actually need someone with my stack — BI tools, developer platforms, e-commerce infrastructure — sound less exciting on paper.

This is the skills-excitement gap, and it’s a trap that engineers fall into when browsing job boards. You optimize for what sounds interesting rather than where your skills create the most leverage. The data from the 162 companies made this painfully clear: the more novel the hardware or science, the less likely they need a web platform engineer. The robot bartender company needs someone who can write firmware for servo motors.
The satellite imaging startup needs signal processing specialists. The brain-computer interface company needs people who understand neural data pipelines. TypeScript doesn’t appear in their stack.

The sweet spot turned out to be narrow: interesting domains where the web platform IS the product. Realm.fun runs game servers, but their engineering challenge is a TypeScript dashboard. MapTiler does mapping, but their product is a TypeScript SDK. Tractive tracks pets with GPS, but what they’re hiring for is a React app. The domain is exciting; the engineering work matches your stack. That intersection is where you want to be — and it’s far smaller than you’d expect when you start looking.

The third finding was a strategic breakthrough: agencies. My girlfriend Inna suggested I might be open to agencies, and it unlocked an entirely new category I hadn’t considered. My career already follows an agency pattern — my consultancy withSeismic has delivered for Contra, Sky, Groupon, MIT, and Framer. I’ve spent a decade hopping between industries, adapting to new domains, shipping products for clients across media, edtech, travel, and consumer tech. I’ve been a “forward deployed engineer” my whole career without using the label. The agency insight reframed the entire search: instead of picking one vertical and committing, I could pick a structure that lets me work across all of them.

Agencies solve the “can’t pick one vertical” problem structurally. At a product studio like ustwo — the majority employee-owned studio that built Monument Valley as an internal venture alongside client work — engineers work across music tech, fintech, AI, and healthcare in a single year. They describe their team as “a diverse group of innovative, curious, product-minded techies” where cross-discipline teams combine design, engineering, strategy, and product.
At Bakken & Baeck, a 60-person studio across Oslo, Amsterdam, London, Barcelona, and Bonn, the team includes several PhDs in machine learning and AI. Engineers do everything from ML prototyping to blockchain alongside traditional web development. Their Glassdoor shows a 4.7 rating, with 100% of employees recommending the company and a 5.0 on work-life balance.

What differentiates these studios from traditional agencies is that engineers are partners, not resources. The employee ownership model at ustwo creates a fundamentally different incentive alignment. These studios don’t just execute briefs — they co-create products with clients and build their own ventures alongside.

The trade-off is real: agencies typically lack equity upside, and you ship a product, then move on. But for someone who genuinely cares about music tech AND gaming AND travel AND creative tools, an agency isn’t a compromise — it’s the only structure that lets you do all of them without changing jobs.
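Before moving on, one aside on the “never trust a single number” lesson: the cross-referencing can be reduced to a small composite score. Here is a minimal TypeScript sketch; the field names, weights, and thresholds are my own illustrative assumptions, not the actual scoring system the project used.

```typescript
// Hypothetical composite reputation score. Field names, weights, and
// thresholds are illustrative assumptions, not the real system.
interface CompanySignals {
  glassdoorRating: number; // headline average, 1 to 5
  ratingTrend: number; // slope of recent review scores (negative = declining)
  recentLayoffRounds: number; // layoff events in the last ~18 months
  fiveStarShare: number; // fraction of recent reviews that are 5-star
}

function reputationScore(s: CompanySignals): number {
  let score = s.glassdoorRating * 20; // map the 1-5 rating onto a 0-100 scale
  score += s.ratingTrend * 20; // reward improvement, punish decline
  score -= s.recentLayoffRounds * 15; // layoffs outweigh a shiny average
  // A spike of 5-star reviews (cf. the 45% vs 25% WSJ finding) reads as
  // possible manipulation, not quality.
  if (s.fiveStarShare > 0.4) score -= 10;
  return Math.round(Math.max(0, Math.min(100, score)));
}
```

With weights like these, a 4.5-rated company with two recent layoff rounds and a suspicious five-star spike scores below a 3.75-rated company with stable employment and an ordinary review mix, which is exactly the ordering the lesson calls for.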

Growth Engineering Your Own Career

The irony isn’t lost on me: I used growth engineering to find a growth engineering job. The skills are the same. Finding signal in noise. Building systems that scale data collection. Measuring what matters instead of what’s easy. Optimizing for conversion. A/B testing approaches. The discipline that makes me effective at growing a product is the same discipline I applied to growing my career options from zero to 162 qualified targets in 45 minutes.

The data on traditional job searching is grim. Cold online applications have roughly a 2% success rate. One senior engineer documented his entire 2025 search: 150 applications, 53% with no response whatsoever, 88 hours invested in studying and interviewing. Five offers from 30 interviews from 150 applications — and only one inbound recruiter contact during the entire process. The referral path is 15x more effective, with a 30% hiring rate compared to 7% for all other methods combined. But it requires the one thing most engineers don’t invest in: making their work visible before they need a job.

The pattern is consistent across the industry’s most famous hires. Dan Abramov built Redux as a conference demo — he met React team member Jing Chen at React Europe, and she facilitated his hiring onto Facebook’s React Core team. Kenneth Reitz wrote the Python Requests library in two hours and got hired as Heroku’s Python Architect. Kim Swift’s student project Narbacular Drop got a 15-minute demo in front of Gabe Newell, who offered jobs to the entire seven-person team on the spot — and it became Portal. Greg Kroah-Hartman of the Linux Foundation put it plainly: five commits in the kernel, and you’ll get offered a job. None of these people got hired by submitting a CV into an ATS. They got hired because their work was already visible.

The thesis is simple: demonstrate, don’t apply. Show up with the receipt, not the request. A hiring manager reading “I’d improve your onboarding” on a CV doesn’t believe it.
A hiring manager seeing a specific, insightful audit of their actual product does. Nearly 70% of junior tech applicants without a portfolio fail to pass first-round screening, while applicants with showcased projects are twice as likely to get interviews. Portfolios bypass the ATS entirely because they demonstrate actual capability — something keyword matching can never evaluate. As the author of Producing Open Source Software notes, most of an open-source developer’s resume is already public — recruiters can check code style, architectural choices, and project decisions before they even reach out.

This entire project is the demonstration. It shows systems thinking (a multi-agent research pipeline feeding a three-stage outreach machine). Data literacy (extracting patterns from 162 company profiles, then using 48 manual decisions to auto-classify 96 more). Build-vs-buy judgment (Puppeteer over broken SaaS, custom skills over manual research). Communication (you’re reading the proof right now — and this article is itself one of the seven outreach approaches, reaching all 162 companies at once while demonstrating the thinking). Every skill a growth engineering role requires is embedded in the thing I built to find that role. The job search process became the portfolio piece.

And it wasn’t a solo effort. Two engineers, one shared code-server, ten parallel Claude threads — that’s how you research 162 companies, build a review tool, create a reputation scoring system, replace two SaaS products, and write the strategy playbook in 45 minutes. The bottleneck was never the AI. It was human attention — directing the agents, reviewing their output, catching errors, deciding what to research next. Having two sets of eyes meant twice the capacity to review, redirect, and catch the things agents miss.
The agents are fast and thorough, but they don’t know which companies make your gut feel uneasy or which job description sounds like it was written by someone who understands engineering. That judgment layer is still human.

If you’re a senior engineer staring down your next job search, here’s what I’d suggest: spend 80% of your time on demonstration and networking, 20% on applications. Build something that solves your actual problem. Make it public. Write about it honestly — including what didn’t work. The best outreach isn’t a cover letter — it’s work that speaks for itself. One referral is worth approximately 40 cold applications. Every hour spent building in public compounds; every hour spent refreshing LinkedIn doesn’t.

The system found 162 companies, produced 51 intelligence dossiers, built a tiered outreach strategy, and generated the playbook for approaching every one of them — in 45 minutes. But the real product was never the database or the dossiers or the outreach machine. It was the proof that I know how to find signal in noise, build systems that compound, and ship under pressure — which is exactly what growth engineering is.