# The Categorical Imperative (Applied): An ML Grad’s Guide to Selling Yourself Without Being a Tool
You graduated. You can recite a loss function between mouthfuls of celebratory cake, and your LinkedIn headline has the word “disruptive” slapped on like a badge of honour even though the most disruptive thing you’ve done involved a coffee machine and a stubborn jam. Welcome to job-market reality. If you want a job, collaborators, or a GitHub star or two, you need rules that are simple, honest, and seriously practical. Think of Kant’s categorical imperative as career advice: act only according to maxims you’d want everyone to use. In plain terms — don’t be spammy, be useful, and make sure your math actually helps people who ship things at 2 a.m.
This is both a moral and tactical playbook. Below I’ll riff on why the advice works, how formal mathematics and logic quietly justify it, and where nuance deserves a wink rather than a lecture.
## Make your promotion post useful, not spammy
Kant would approve: treat people as ends, not as means to your follower count. Communities tolerate self-promotion when it respects other people’s time and signals competence, not just hunger for attention.
From a more formal perspective, this is information theory meeting cooperative game theory. A promotion post that says “Project: lightweight recommender for retail — ideal for small teams; pricing: open-source; consulting: €X/hr; contract: available” optimizes the signal-to-noise ratio. You’re reducing entropy. You hand the reader actionable priors instead of forcing them into expensive hypothesis-testing (clicking the link, clicking away, reporting you as spam).
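If “reducing entropy” sounds like a metaphor, here’s a toy back-of-the-envelope sketch in Python (the probabilities are invented purely for illustration): treat the reader’s “is this relevant to me?” question as a coin flip and see how much uncertainty a well-specified post removes.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a Bernoulli 'is this relevant to me?' belief."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Made-up numbers: a vague post leaves the reader at a coin flip; an explicit
# one (audience, scope, pricing) resolves most of the question before any click.
before = 0.5
after = 0.9

print(f"uncertainty before: {entropy(before):.2f} bits")  # 1.00
print(f"uncertainty after:  {entropy(after):.2f} bits")   # 0.47
```

Roughly half a bit of uncertainty removed per reader, multiplied across a whole channel, is the difference between “useful post” and “muted account.”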
Practical checklist:
– Say what it is and who it’s for.
– Be explicit about money and scope.
– Avoid link shorteners that raise phishing alarms.
– Keep one thread for updates so you don’t flood the channel.
This isn’t cold marketing. It’s constrained optimization: you have a limited attention budget, so spend it in a way that increases expected utility for both parties.
## Templates: kindness that saves time (and reduces friction)
For recruiters and applicants, templates are like axioms — they make deduction cheap and predictable. If you’re hiring, state role, location, type, salary range, and what success looks like. If you’re applying, state availability, salary expectations, link to portfolio, and a one-line pitch.
This is an application of type theory: matching interfaces matters. Send an ill-typed CV and the hiring system (human or ATS) throws an exception. Templates reduce back-and-forth; they create a canonical form that’s easy to reason about.
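To make the type-theory quip concrete, here’s a minimal sketch (the field names and the screening check are my own invention, not any real ATS schema):

```python
from dataclasses import dataclass

# A "template as a type": fixed fields, fixed meanings, cheap to reason about.
@dataclass
class Application:
    pitch: str                   # one-line summary of what you do
    availability: str            # e.g. "full-time from March"
    salary_expectation_eur: int  # an explicit number, not "negotiable"
    portfolio_url: str

def screen(app: Application) -> bool:
    """Recruiter-side check that only works because the interface is fixed."""
    return bool(app.pitch and app.portfolio_url and app.salary_expectation_eur > 0)

# An "ill-typed CV" fails before anyone reads it:
# Application(pitch="ML engineer", availability="now")  -> TypeError: missing fields
```

The point isn’t the dataclass; it’s that a shared schema turns “please resend with your salary expectations” into an error you catch before you hit send.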
Yes, it sounds bureaucratic. But bureaucracy is just enforced invariants. It’s the difference between a reliable API and a flaky script that fails at runtime.
## What actually moves the needle in ML jobs: less hype, more engineering
Employers want reproducible results and systems that survive the lunch rush and investor calls. Here’s why a couple of recent technical points matter beyond academic Twitter.
1) Spiking networks: the bottleneck isn’t “binary” — it’s frequency
Neuromorphic folks discovered that SNNs don’t just lose fidelity because spikes are binary; they act like low-pass filters. High-frequency signal content decays. In plain language: SNNs see the big picture fine but struggle with fine-grained textures. That’s math (Fourier intuition) meeting engineering.
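If you want that Fourier intuition on one screen, here’s a toy NumPy sketch (the time constant and frequencies are arbitrary, and a leaky integrator is only a cartoon of real spiking dynamics):

```python
import numpy as np

fs = 1000                             # sample rate (Hz)
t = np.arange(0, 1, 1 / fs)
slow = np.sin(2 * np.pi * 2 * t)      # coarse structure (2 Hz)
fast = np.sin(2 * np.pi * 80 * t)     # fine texture (80 Hz)
x = slow + fast

tau = 0.02                            # assumed membrane time constant (s)
alpha = np.exp(-1 / (fs * tau))       # per-sample decay of the leaky integrator
y = np.zeros_like(x)
for i in range(1, len(x)):
    y[i] = alpha * y[i - 1] + (1 - alpha) * x[i]

def amplitude_at(signal, freq):
    """Amplitude of the frequency bin closest to `freq`."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

print("2 Hz survives: ", amplitude_at(y, 2) / amplitude_at(x, 2))   # close to 1
print("80 Hz survives:", amplitude_at(y, 80) / amplitude_at(x, 80)) # roughly 0.1
```

The slow component passes almost untouched; the fast one loses most of its amplitude. That’s the “sees the big picture, misses the texture” behaviour in a couple dozen lines of code.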
Why you should care: knowing this gives you leverage in edge/energy-efficient roles. The fix often isn’t a new theorem — it’s swapping a pooling strategy or adjusting token mixing. That’s applied math making an immediate operational difference.
2) PKBoost v2: built for messy, real data
A library that uses entropy-weighted splits, auto-tuning, localized retraining under drift, and production niceties (parallel processing, optimized histograms) is speaking the language of deployed systems. These aren’t sexy theorems; they’re robust engineering decisions justified by information theory and statistical learning.
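For flavour, here’s what a plain information-gain split criterion looks like (a generic sketch of the entropy idea, not PKBoost’s actual code):

```python
import numpy as np

def entropy(y):
    """Shannon entropy of binary labels."""
    p = np.mean(y)
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def information_gain(y, mask):
    """Entropy reduction from splitting labels y by a boolean feature mask."""
    left, right = y[mask], y[~mask]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    weighted = (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
    return entropy(y) - weighted

# 5% positives: a split that corrals most of the rare class scores well here,
# even though accuracy-style criteria would barely move.
y = np.array([0] * 95 + [1] * 5)
mask = np.array([False] * 92 + [True] * 8)   # hypothetical feature threshold
print(information_gain(y, mask))             # ~0.21 bits
```

That bias toward splits that actually separate the rare class is why entropy-driven criteria feel at home on imbalanced, messy data.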
Resume translation: show PR-AUC improvements on imbalanced datasets, quantify performance degradation under drift, and you’re communicating what product teams actually care about.
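The measurement itself is a few lines; the labels and scores below are synthetic, purely to show the shape of the report:

```python
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)

# Stand-ins for held-out labels and model scores on a pre-drift and a
# post-drift evaluation window (~2% positives, model degrades after drift).
y_before = (rng.random(10_000) < 0.02).astype(int)
scores_before = y_before * rng.random(10_000) + 0.6 * rng.random(10_000)
y_after = (rng.random(10_000) < 0.02).astype(int)
scores_after = 0.5 * y_after * rng.random(10_000) + 0.6 * rng.random(10_000)

pr_before = average_precision_score(y_before, scores_before)
pr_after = average_precision_score(y_after, scores_after)
drop = 100 * (pr_before - pr_after) / pr_before
print(f"PR-AUC: {pr_before:.3f} -> {pr_after:.3f} ({drop:.0f}% degradation under drift)")
```

One number, a baseline, and a named condition: that’s a résumé line a hiring manager can actually interrogate.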
## Where the jobs are — and do you need a PhD?
Recommenders and applied ML exist in e-commerce, ad tech, streaming, travel, and B2B SaaS. An MSc is sufficient for many engineering roles; PhDs are valuable where you’re expected to generate new theory or push the state of the art. Think of it like model complexity: a PhD is often a higher-capacity model suited to specialized tasks; for many production roles a mid-capacity model (MSc) with good regularization (product sense) generalizes better.
Yes, there’s a market signal for “PhD = authority,” but the mapping isn’t bijective. Companies hire for the ability to ship, debug, and measure.
## Practical skills to highlight (and the math behind them)
– Production-first thinking: deployments, CI, monitoring. (Systems theory, control.)
– Handling imbalance and drift: name-check PKBoost-style approaches. (Information theory, sequential decision processes.)
– Efficiency trade-offs: energy-aware inference, quirks of SNNs. (Signal processing, approximation theory.)
– Clear, measurable outcomes: recall, PR-AUC, latency, cost-per-query. (Statistics, performance modeling.)
When you talk about these, be specific: “Reduced PR-AUC degradation by X% under Y drift using localized retraining” beats “I handled drift.” Numbers are the Kantian maxims of industry: universalizable, testable, and kind to the reader.
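And if “handled drift” needs a concrete starting point, a per-feature two-sample test is the honest, boring version (the threshold and the retraining trigger below are placeholders, not any particular library’s API):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)   # training window
live_feature = rng.normal(loc=0.4, scale=1.2, size=5_000)    # shifted in production

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:                                           # assumed alert threshold
    print(f"drift detected (KS={stat:.3f}); retrain on the affected segment")
else:
    print("no significant drift on this feature")
```

It’s not glamorous, but it gives your “under Y drift” claim a definition someone else can reproduce.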
## The moral of the story (and yes, there’s math under the hood)
Be useful and specific. Templates are not cruelty — they’re compassion for other people’s limited attention. Practical fixes (how to handle drift, why architecture affects frequency response) are not trivia; they’re the kind of applied math that shifts metrics in real systems.
There’s room for nuance. Over-formalizing everything risks sounding like a checklist robot; too much personality without substance feels like a billboard. Your job is to be the human-readable theorem: elegant, precise, and reproducible at 2 a.m.
If your GitHub README can be read over coffee and explains the trade-offs, you’ve already won half the interview. If your LinkedIn post gives the reader a clear next step — e.g., “I’m available to consult on item X at €Y/hr; open-source demo here” — you’ve just reduced their cognitive load and increased your expected returns.
So be Kantian in spirit: act in ways you’d be okay with everyone doing. Mathematically, aim to reduce entropy, provide informative priors, and keep your systems conservative enough to work when the world gets noisy.
I’ll leave you with a question I love asking at the end of interviews and at the bar afterward: if you had to pick one principle to be your professional categorical imperative — clarity, reproducibility, or kindness — which would you choose, and how would it change what you post next?