Pryzmas, Gotos, and the Illusion of Perfect Memory: A Sardonic Guide to Hobby Language Design


You scrolled into the monthly “What are you working on?” thread and found, as always, a small civilization of spare-time language designers. Someone has a bright repo called Pryzma, another swears they can outlaw garbage collection with a single static pass, and yet another promises to make gotos sexy again. If you live for little victories on bare metal or the peculiar thrill of CI passing after you changed three lines, this is for you.

Let me be frank: people do not make languages because the world absolutely needs another scripting toy. They make them because language design is a miraculous teaching device — a place where category theory, type systems, and compiler hacks come together and bite your assumptions in interesting ways. Also, the community roasting is deliciously addictive.

I want to riff on the observation that many hobby projects chase one fantasy: the illusion of perfect memory. Compile-time analysis that inserts free calls where appropriate, effect systems that reveal side effects in types, algebraic effects that let you compose control like LEGO — all of these are attempts to put programming problems into neat mathematical boxes. The math helps, but it is not a magic wand. Here is why, with a few friendly detours through various logical and mathematical neighborhoods.

## Categories, monads, and the temptation to be tidy

If you hang around functional programmers long enough, someone will say “monad” in a tone that implies it solves everything. Category theory gives beautiful abstractions: monads capture sequencing with effects, arrows and applicatives describe other flavors of composition, and algebraic theories let you reason about operations and handlers. Algebraic effects, in particular, feel like a restoration of agency: write the need, defer the how to a handler.
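The sequencing idea is easy to sketch. Here is a toy `Maybe` type in Python (an illustration, not any particular library's API): `bind` chains computations and short-circuits on the first absent value, which is exactly the "sequencing with effects" that monads package up.

```python
from dataclasses import dataclass
from typing import Callable, Generic, Optional, TypeVar

T = TypeVar("T")
U = TypeVar("U")

@dataclass
class Maybe(Generic[T]):
    """A toy Maybe monad: a value that may be absent."""
    value: Optional[T]

    def bind(self, f: "Callable[[T], Maybe[U]]") -> "Maybe[U]":
        # Sequencing with an effect: stop at the first absent value.
        if self.value is None:
            return Maybe(None)
        return f(self.value)

def safe_div(a: float, b: float) -> "Maybe[float]":
    return Maybe(None) if b == 0 else Maybe(a / b)

# (10 / 2) succeeds, then (result / 0) short-circuits the chain.
result = Maybe(10.0).bind(lambda x: safe_div(x, 2)).bind(lambda x: safe_div(x, 0))
# result.value is None
```

The point is not the ten lines of code; it is that the composition law lives in one place (`bind`) instead of being scattered through every caller.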

But category theory also teaches humility. Universal properties give you canonical solutions when you can set up the right diagram. Outside those tidy diagrams, you wrestle with coherence conditions, choices that matter, and the fact that not every practical constraint corresponds to a clean categorical notion. Your algebraic effect is lovely until you need to interoperate with low-level control, continuations, or FFI. Then the neat diagrams require glue that is rarely pleasant.

## Linear logic, separation logic, and the memory dream

The siren song of compile-time free calls often runs like this: use a substructural logic, banish implicit aliasing, and infer the last use statically. Linear and affine type systems are glorious here. They force you to be explicit about ownership and consumption. Separation logic gives compositional proofs about mutable structure. In theory, these let you avoid GC while keeping safety.

In practice, real programs have cycles, shared caches, callbacks from native code, and the kind of aliasing patterns humans produce when they are tired. Static analyses that are path-sensitive enough to find the last access tend toward undecidability or require annotations that annoy users. So implementors either restrict the language, add runtime checks, or accept a conservative approach that often punts to runtime garbage collection or reference counting. The devil, as always, lives in alias hell.
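To see how little it takes for the tidy version to work, here is a deliberately naive affine-use checker over a flat list of operations (a sketch I am inventing for illustration; real checkers operate over full ASTs and control flow, which is where the pain starts):

```python
# A toy affine-use checker: each binding may be consumed at most once.
# Programs are flat lists of (op, name) pairs; no branches, no loops,
# no aliasing -- which is precisely why this stays easy.

def check_affine(program):
    consumed = set()
    errors = []
    for op, name in program:
        if op == "let":
            consumed.discard(name)      # fresh binding shadows the old one
        elif op == "consume":
            if name in consumed:
                errors.append(f"{name} used after move")
            consumed.add(name)
    return errors

prog = [("let", "buf"), ("consume", "buf"), ("consume", "buf")]
errors = check_affine(prog)   # → ["buf used after move"]
```

Add two branches that each conditionally consume `buf` and the checker already needs path-sensitivity; add a callback from native code and it needs an escape analysis. That is the slope the paragraph above is describing.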

## Graph theory, fixed points, and cycles

Memory management is graph management. Objects are nodes, pointers are edges, and cycles are the easiest way to wreck a compile-time free strategy. Reference counting cannot reclaim cyclic garbage. Graph algorithms can detect strongly connected components, but doing that statically across arbitrary control flow is expensive and brittle.
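The failure mode fits in a few lines. In this toy heap (my illustration, with made-up node names), naive reference counting keeps a two-node cycle alive forever, while a simple reachability pass from the roots finds the garbage immediately:

```python
# Toy heap: objects are nodes, fields are edges.
heap = {"a": ["b"], "b": ["a"], "c": []}   # a <-> b is an unreachable cycle
roots = ["c"]

# Naive reference counting: a and b keep each other alive.
refcounts = {n: 0 for n in heap}
for edges in heap.values():
    for dst in edges:
        refcounts[dst] += 1
for r in roots:
    refcounts[r] += 1
leaked_by_rc = [n for n, c in refcounts.items() if c > 0 and n not in roots]

# Tracing from the roots has no such blind spot.
def reachable(roots, heap):
    seen, stack = set(), list(roots)
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(heap[n])
    return seen

garbage = set(heap) - reachable(roots, heap)   # {"a", "b"}
```

Doing the tracing pass at runtime is a garbage collector; doing it at compile time over every possible heap shape is the expensive, brittle part.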

Fixed-point theory reminds us that many analyses are seeking a least or greatest fixed point of a monotone operator. Abstract interpretation packages this into a theory-and-practice pipeline, but the precision hinges on the chosen abstract domain. Choose precision, and the analysis explodes in time; choose speed, and the analysis yields imprecise results that refuse to free anything interesting.
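The fixed-point machinery itself is pleasantly small. Here is Kleene iteration on a toy example I picked for concreteness, computing reachability over a four-node graph by applying a monotone step until nothing changes, starting from the bottom element:

```python
# Kleene iteration to a least fixed point: for each node, the set of
# nodes reachable from it, computed by iterating a monotone operator
# over finite sets until it stabilizes.

edges = {"a": {"b"}, "b": {"c"}, "c": {"a"}, "d": set()}

reach = {n: set() for n in edges}          # bottom element of the lattice
changed = True
while changed:                             # iterate the monotone step
    changed = False
    for n in edges:
        new = set(edges[n])
        for m in edges[n]:
            new |= reach[m]                # one more hop of reachability
        if new != reach[n]:
            reach[n] = new
            changed = True
# reach["a"] == {"a", "b", "c"}; "d" reaches nothing
```

Termination here is free because the domain is finite. The abstract-domain tradeoff in the paragraph above is exactly the choice of what `reach` ranges over: swap the finite sets for something expressive enough to describe real heaps and the lattice gets tall, the steps get slow, and widening starts throwing information away.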

## Logic and undecidability: a friendly warning

Once you try to determine properties of arbitrary programs beyond trivial syntactic checks, you bump into Rice and friends: if your toy is Turing-complete, you cannot decide halting, and you cannot, in general, decide any other non-trivial semantic property either. The good news is: many useful fragments are decidable and practical. The bad news: the boundaries of those fragments are often small and socially awkward. If your language is permissive, your static tricks get conservative; if you restrict enough to make analysis powerful, you might starve users of convenience.

## Effects, handlers, and the ergonomics vs. theory tradeoff

Effect systems try to make the presence of side effects visible and compositional. That’s not just a nicety — it enables optimizations, safer APIs, and documentation that types actually enforce. But it also introduces annotation overhead and cognitive load. If inference is strong, users don’t notice. If not, the experience is like wrestling with a very polite yet demanding specification.

Algebraic effects then promise to be slick: declare what you need, compose handlers, and mock away. They are elegant categorical toys and very useful in structured systems. But they add complexity when you need to reason about resumptions, effect polymorphism, and interactions with control operators. Again, category theory suggests patterns, but engineering yields the tradeoffs.
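The "declare the need, defer the how" shape can be faked with Python generators, which is a useful way to build intuition (with the loud caveat that generators give you one-shot resumption only; real algebraic effects support richer continuations). Everything here — the effect tags, the handler — is a sketch I made up, not a real library:

```python
# A computation yields effect requests; a handler decides how to
# satisfy each one and resumes the computation with the answer.

def greet():
    name = yield ("ask", "name")        # request an effect, await an answer
    yield ("log", f"hello, {name}")     # request an effect, no answer needed
    return name.upper()

def run(computation, env):
    """A handler: answers 'ask' from env, collects 'log' output."""
    logs = []
    gen = computation()
    try:
        request = next(gen)
        while True:
            tag, payload = request
            if tag == "ask":
                request = gen.send(env[payload])   # resume with the answer
            elif tag == "log":
                logs.append(payload)
                request = gen.send(None)
    except StopIteration as done:
        return done.value, logs

result, logs = run(greet, {"name": "pryzma"})
# result == "PRYZMA", logs == ["hello, pryzma"]
```

Swapping in a different `run` — one that reads from a config file, or records every request for testing — without touching `greet` is the whole pitch. The complexity the paragraph above warns about shows up the moment a handler wants to resume the same point twice.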

## Practical advice from the trenches

– Start by owning one killer feature. Want algebraic effects? Make them ergonomic and well-documented before you chase perfect memory.
– Document the mental model. Humans are impatient — explain how ownership, effects, and control behave in a couple of short paragraphs and a few examples.
– Test with real users. Publish early. Expect praise, forking, and the occasional savage comment. All are useful.
– Pick realistic analysis goals. If compile-time free is a headline feature, be explicit about the subset of programs it handles, or provide annotations for edge cases.
– Keep the compiler simple first. Correctness and clear semantics beat clever, brittle analyses every time.

## The honest payoff

The cross-pollination of math and systems thinking is what makes hobby language design educational and fun. Category theory gives elegant metaphors; linear logic gives handles on ownership; graph theory and abstract interpretation tell you what is computationally plausible. But the right answer is almost always a mixed strategy: small, provable cores; optional static checks; runtime fallback mechanisms; and a clear mental model for users.

If you are building a language because you want to learn, because you have an itch only you can scratch, or because you genuinely think your idea will improve other people’s lives, then ship. You will learn more from shipping a flawed compiler than from perfect math on paper. If your dream is to replace GC overnight with a single static pass, bring a backup plan and maybe a stiff drink — the math is beautiful, but the world is messier.

So here I leave you with a question to argue about over coffee, beer, or a lamenting commit message: given the tradeoffs between expressiveness, analyzability, and ergonomics, which restrictions are you willing to live with in order to have a tractable and useful memory model — and which concessions would you never accept, even if they made implementation trivial?
