
The five esolangs discussed in this piece -- Coem, Love Languages, Prāsa, Kip, and Captive -- draw on aspects of natural language usually avoided in code: nuance and ambiguity, complex grammars and morphologies. They stand apart from the better-known, jokier esolangs like LOLCODE or Rockstar, which borrow from natlangs only for their vocabulary, building lexicons of memespeak or power-ballad lyrics, respectively. Because those languages keep the grammar of typical imperative languages, their natural-language-inspired phrases combine stiffly and still feel code-like. These five langs, which we might call esonatlangs, do not; they foreground linguistic expressiveness over algorithmic execution.
"Esonatlang" is a play on the sub-genres of Oulipo ("the workshop of potential literature," which featured constraint-based writing), e.g. Outrapo, the "workshop of potential tragicomedy." These langs multicode with prose: their programs have multiple readings that constrain each other aesthetically, much as Piet does with images or Velato with music. The constraint sets of some are abstract enough to border on the steganographic: an English sentence gives little hint on first reading that it is also a Love Languages program. Other esonatlangs leave more visible stylistic traces, like the musical cadence of Prāsa. But all prioritize code as a text for human reading: their computational aspect -- how they perform as code -- shapes how they are written, rather than serving as a primary goal. As usual in esolang practice, esonatlangs are made by practitioners from many backgrounds: academics and students, artists, and working programmers building them as side projects. The final language, Captive, is one of mine. It should be noted that none of these were made in reaction to each other, and the term esonatlang is my own; this is an emerging trend, not a label these esolangers have necessarily adopted themselves.
If there is a common precursor to this style, it may be the first English-prose-like esolang, Shakespeare. This twenty-year-old language was explicitly a joke -- one that has long worn out. Yet it took the first step toward linguistic complexity in code and continues to inspire languages that pursue the possibilities it hinted at but never fully explored (previously covered: in:verse, Cree#, Ashpaper). While nearly all Shakespeare programs sound the same, it's more sophisticated than the esoteric-in-lexicon-only joke langs like LOLCODE. In Shakespeare, values are expressed in lines of dialogue full of nouns and adjectives classified as positive, neutral, or negative, creating an enormous lexicon: there are 25 negative nouns alone, including “pig,” “bastard,” and “Microsoft.” The esonatlangs pick up where this leaves off, moving the lexicon from a static list of words, however large, into abstractions like sentence structure, the pattern of individual letters within larger texts, or the rhythm and flow of phrases.
Esonatlangs are of particular interest in this moment of AI-generated code. Agentic coding uses natural language in order to negate it. Prompts, too ambiguous and too untrustworthy to keep, are discarded; what remains is the fixed, unambiguous code they resolve to. Not only is the texture of the original prompt lost, but also the personal style of the hand-typed code that would have been written in its stead: the small choices that let you recognize one coworker's code from another's.
The esonatlangs invert this logic. They embrace ambiguity and personal expression as the point. They center the text of code over algorithmic efficiency. Below are short introductions to each language.

Katherine Yang built Coem to explore "the experience of writing and the tactile feeling of words on the page or screen." Its REPL-ish interface evaluates expressions as they are entered, but there is no separation between code and the interpreter's responses: they appear directly to the right of the line of text they belong to, separated by a dagger. The dagger is usually a sign for a footnote; in Coem it marks both output and comments.
Coem does not treat non-Coem text as an error; it simply ignores it. This means that, in effect, the programmer can include lines of text from outside its grammar; they will simply trigger no output. Whether a program is "correct" in Coem is a secondary concern for its programmer-poets. As Yang describes:
[A] Coem text should look like a piece of poetry writing that can be appreciated on its own — then, when placed into the editor and "run", there might be a secondary delight of experiencing writing that can "run" like code... there's no sense of the "right" set of words (see e. e. cummings or Lewis Carroll); the words you write will do something different to readers, whether that's eliciting an emotion or producing no effect at all. In Coem, I keep syntax highlighting from the code world... A Coem text will still have variable and ungovernable emotional effects on readers that all writing does, but it also has a code-like property of doing something predictable, even if all it's doing is echoing words back to you.
Yang began the project without a particular end result in mind; it might have gone a different way and never become a language at all. It is designed for code-curious poets and creative coders. It de-emphasizes mathematical operations and is built around string generation and textual output, though it does support looping and branching.
One unusual feature described in its grammar guide is its regex-based identifiers:
let mis(t|sed) be "thick"
This declares mis, mist, and missed as different names for the same variable, initialized with the string "thick". Other metacharacters, including | and ?, can also be used. This allows for wordplay that's often difficult in code poetry where variable names are locked down to a single string that can't be pluralized, verbed, or altered for other grammatical use.
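To make the mechanism concrete, here is a minimal sketch (in Python, not Coem itself) of how regex-named variables might resolve. Note that this is my own toy model, not Yang's implementation; also, under standard regex semantics the bare stem mis only matches if the group is made optional, so the sketch uses mis(t|sed)? -- the ? is my addition.

```python
import re

# Toy variable table keyed by regex pattern, assuming Coem-like semantics.
variables = {}

def let(pattern, value):
    """Declare a variable whose name is a regex pattern."""
    variables[pattern] = value

def lookup(name):
    """Return the value whose declaring pattern fully matches this name."""
    for pattern, value in variables.items():
        if re.fullmatch(pattern, name):
            return value
    raise NameError(name)

# The ? makes the group optional so the bare stem "mis" also matches.
let(r"mis(t|sed)?", "thick")
print(lookup("mist"), lookup("missed"), lookup("mis"))  # thick thick thick
```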

Love Languages' lexicon consists not of words, but of the parse trees of English phrases. Cassidy Diamond introduced the language in a paper that won the "fuckery" award at Carnegie Mellon. Love Languages is a genuinely pattern-based language: it encodes commands in the structure of English phrases. For example, a noun phrase that resolves to an adjective plus a noun corresponds to one command, while a verb phrase that resolves to a single verb corresponds to another. A Love Languages program is a series of English sentences whose actual words are irrelevant apart from their grammatical role.
The language is named for Love Letter Generator, a foundational generative poetry work from 1953 that the anthology Output calls "perhaps the first example of digital literary art." As Diamond explains:
Strachey's love letter algorithm is often interpreted as a critique of phatic (and looking at Strachey's identity as a gay man, usually heterosexual) expressions of love. His program creates convincing love letters, but if you look under the hood to see the Mad Libs kind of algorithm behind this generation, it quickly becomes obvious how shallow these letters and the actual source material that inspires them are. If Strachey considered the process of writing these love letters to be algorithmic, then my Love Languages project literalizes this by creating a programming language where love letters can be executed as algorithms themselves.
Love Languages uses X-bar theory, an influential model of syntactic analysis introduced by Noam Chomsky in 1970 and later expanded by Ray Jackendoff. While it's not in wide use today, it's a precursor to current methods, so its way of parsing sentences looks familiar. It's named for the "bar" notation marking an "intermediate projection" between a lexical item (the word itself) and the "maximal projection," the full phrase. For a noun N, this intermediate level is the N′, and it is the site of recursion: an N′ might attach an adjective to a noun while still sitting below the maximal projection.
Love Languages draws on brainfuck, the language Chris Pressey once called the twelve-bar blues of esolangs. It maps phrase-structure productions to brainfuck commands; here is how noun phrases resolve:
| NP → N′ | > | (10) |
| NP → NP Conj NP | ] | (11) |
| NP → N′ | + | (12) |
| NP → N′ PP | (none) | (13) |
| NP → N | < | (14) |
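As a toy illustration of how such a mapping might work (my own simplification in Python, not Diamond's implementation), consider a walk over a nested parse tree that emits a brainfuck command for each production it recognizes, using a small subset of the table above:

```python
# A tiny subset of the production-to-command table above.
# Keys are (parent label, tuple of child labels); values are bf commands.
RULES = {
    ("NP", ("NP", "Conj", "NP")): "]",   # rule (11)
    ("NP", ("N'",)): ">",                # rule (10)
    ("NP", ("N",)): "<",                 # rule (14)
}

def emit(node):
    """Recursively collect brainfuck commands from a (label, children) tree."""
    label, children = node
    if not children:  # leaf: a word, contributes nothing by itself
        return ""
    child_labels = tuple(child[0] for child in children)
    command = RULES.get((label, child_labels), "")
    # Emit commands from deeper in the tree first, showing how recursion
    # places commands at many levels within the same sentence.
    return "".join(emit(child) for child in children) + command

# A phrase like "cats and dogs": NP -> NP Conj NP, each inner NP -> N.
tree = ("NP", [("NP", [("N", [])]), ("Conj", []), ("NP", [("N", [])])])
print(emit(tree))  # <<]
```

The tree encoding and traversal order are assumptions for illustration; the point is only that a single sentence yields commands at every level of its parse.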
Given the recursion of natural language, brainfuck commands might appear at many levels within the same tree.

The project has a website and a GitHub repo.

Kip uses the morphology of Turkish as its syntax. Written in Haskell, it is a functional language, but its surface form for defining and applying functions is entirely different. It borrows not only Turkish vocabulary but also its grammar. Turkish has a system of noun case endings, in which subject and object are indicated by different cases (the nominative vs. the accusative), making word order flexible. Other cases indicate, among other things, which parameter of a function an identifier is passed to: the dative (to/for), the locative (location), the ablative (motion away, as in from), the genitive (possession), the instrumental (with/by), and the possessive.
These cases are expressed through word endings, so a single word in Kip often lexes into two tokens. This is unusual for programming languages, where a token -- the smallest meaningful unit in code -- is typically a whole word. In Kip, the beginning of a word can be the identifier itself, while the ending marks how it will be used: for example, to call it as a function (via the imperative verb form), or to pass it as an argument to another function (when in noun form). Which parameter slot it fills is marked by the case of that noun. For example, the following code from the tutorial performs subtraction:
(bu tam-sayıyla) (şu tam-sayının) farkı
It breaks down in this way:
• bu tam-sayıyla → instrumental: "with this integer" (an argument)
• şu tam-sayının → genitive: "of that integer" (an argument)
• farkı → possessive: "the difference" (the function name)
Since the part of speech determines meaning rather than word position, this shortened form is an equally valid way of writing the same line:
bunla şunun farkı
Kip's grammar thus breaks from the word-order conventions of most programming languages (and of English itself): rather than relying on position, it uses Turkish-style case endings, allowing the same line of code to be written with its words in several different orders, though with some limitations (the function name farkı must appear at the end of the line, for instance).
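As a rough illustration of one-word-two-tokens lexing, here is a toy in Python -- not Kip's actual analyzer, which uses a full morphological analyzer and handles Turkish vowel harmony. This sketch just matches a few literal endings from the examples above:

```python
# Toy case-ending table drawn from the subtraction example above.
# Real Turkish suffixes vary with vowel harmony; these are literal strings.
CASE_ENDINGS = {
    "yla": "instrumental",   # with/by
    "nın": "genitive",       # possession
    "ı":   "possessive",
}

def lex(word):
    """Split one word into (identifier stem, case) -- two tokens from one word."""
    # Try longer endings first so "yla" isn't shadowed by "ı".
    for ending, case in sorted(CASE_ENDINGS.items(), key=lambda kv: -len(kv[0])):
        if word.endswith(ending):
            return word[: -len(ending)], case
    return word, "nominative"

print(lex("tam-sayıyla"))   # ('tam-sayı', 'instrumental')
print(lex("tam-sayının"))   # ('tam-sayı', 'genitive')
```

The case name then tells the compiler which parameter slot the identifier fills, regardless of where the word sits in the line.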
The conditional is, no surprise, marked by the conditional verb mood:
doğruysa ... yanlışsa ...
This resolves to:
match value {
    true => ...,
    false => ...,
}
Joomy Korkut, who created Kip, had the idea to use grammatical cases for function arguments ten years ago; five years later, he came up with the basic rules.
However, it took more time to get to his breakthrough. "I found the right design that uses an existing morphological analyzer, preserves morphological ambiguities in the abstract syntax tree, and leaves disambiguation to type checking, which can often determine what was meant."
The Kip website is geared toward developers, and the language has an active GitHub repo. There is also a paper on the language, co-written by Korkut. Korkut seems a bit overwhelmed by the response to his language. "I am surprised anyone wrote any programs in it at all. Within a day or two of the language reaching #1 on Hacker News, I started receiving messages about people writing Kip programs: a number-guessing game, an interactive Sudoku game, Project Euler solutions... A few people even started making Visual Studio Code plugins for Kip. I'm thankful for all kinds of interest, but especially bug reports."

Prāsa is a programming language whose code follows the syllabic stress rhythms of Telugu, a Dravidian language with a literary tradition thousands of years old. Prāsa is not exclusive to Telugu speakers; in fact, its code is not written in Telugu. Instead, it "approximates aspects of Telugu prosody through English phonetics," borrowing the stress patterns of Telugu poetic forms. The current implementation is English-only, but Prāsa is language-agnostic: prose in any language is valid so long as it follows the underlying metrical structure.
The Prāsa interpreter analyzes a text's syllables, marking stressed ones with U and unstressed ones with I. To be valid Prāsa, the text must follow one of these patterns:
| Utpalamāla | UII UIU III UII UII UIU IU | Length: 20 |
| Campakamāla | III IUI UII IUI IUI IUI UIU | Length: 21 |
| Mattēbham | IIU UII UIU III UUU IUU IU | Length: 20 |
| Śārdūlam | UUU IIU IUI IIU UUI UUI U | Length: 19 |
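A minimal sketch of the metre check, assuming only the four patterns above. The real interpreter also has to syllabify English text and assign stress, which is not attempted here:

```python
# The four metres from the table above, as U/I stress strings.
PATTERNS = {
    "Utpalamāla":  "UII UIU III UII UII UIU IU",
    "Campakamāla": "III IUI UII IUI IUI IUI UIU",
    "Mattēbham":   "IIU UII UIU III UUU IUU IU",
    "Śārdūlam":    "UUU IIU IUI IIU UUI UUI U",
}

def metre_of(stresses):
    """Return the name of the matching metre for a U/I string, or None."""
    flat = stresses.replace(" ", "")
    for name, pattern in PATTERNS.items():
        if flat == pattern.replace(" ", ""):
            return name
    return None

print(metre_of("UIIUIUIIIUIIUIIUIUIU"))  # Utpalamāla
```

Since the choice of metre carries no computational meaning, a validator like this only accepts or rejects; it never changes what the program does.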
These patterns do not affect what the code does. Any of the four can be used; the interpreter only enforces that one of them is followed. This emphasizes the text of code, rather than its execution, as the poetic work. Koundinya Dhulipalla, its designer, describes it:
Through the grammatical design of the syntax, the intention is that the computational work occurs during the composition itself. Anything the interpreter does has already been done by the programmer in order to write valid Prāsa code. In this sense, correctness is not deferred to execution but is embedded in the act of writing.
The Prāsa machine is a sequence of memory cells. You advance through them by indenting each line, so programs that use many cells have deeper indentation and a more stylized poetic form. The number of syllables on a line encodes a value, which by default is assigned to the current cell. Other commands rely on familiar syntactic cues from poetry, like parentheses (similar to Ashpaper). These are fairly simple to encode, giving more space for the programmer to lean into the rhythmic constraints at the center of the language.
Dhulipalla created Prāsa as part of a thesis project poetic.computer, considering computing as poetry and politics through a decolonial lens. The language was workshopped first as hand-written code on paper and compiled collectively without a machine. This allowed the style of the language to emerge before considering implementation too deeply (and possibly falling into more familiar approaches to language design). While it was developed before agentic coding was common, its emphasis on the writer feels like pushback against automated code generation.
A central argument of the research is that computation does not have to be confined to machines. People compute. Historically, computers referred to women, whose labour has since been erased through the redefinition of the word, along with a general inclination towards anthropomorphising the language around machines. I think this is particularly more visible within discourses around AI.
The text at the top of this section is an excerpt of a Fibonacci generator.

Code in Captive can be written in any natural language built on the Roman alphabet. Only the letters b, d, f, g, h, j, k, l, p, q, t, and y have programmatic meaning; all other text is treated as a comment and ignored.
Captive (a language I designed) was inspired by an Oulipian constraint called the Prisoner's Constraint. In that system, the writer avoids any letters that rise above the x-height or fall below the baseline, as if a prisoner conserving paper by using only the smallest of letters. The constraint assumes all lower-case text (Captive, for its part, ignores upper-case entirely). Captive flips the use of these letters: now they are the only ones with (programmatic) meaning.
Captive is designed for code written accidentally. It is primarily for text generation: one text generates another. Here is an excerpt (from my esolangs book Forty-Four Esolangs), showing the output of the first paragraph of Moby Dick run as a Captive program:
QQQ,Q ,ᦡ ᦡ ,ᦡㄨQQ
The ᦡ character is called high da and is in the New Tai Lue Unicode block. The question mark in a box is an unprintable control character.
While Captive is designed for the accidental program, this is not the only way to use it. Hewing closer to the Oulipian practice that inspired it, one can construct a bare Captive program of only "meaningful" characters: the tall letters that make up its lexicon. The creative challenge is then to fill in words around them, so that a program might sound natural while still performing its intended work. This, for example, is a phrase that prints "Hi" to the screen:
long praise styles a droopy, overt tattoo
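The first step of running any Captive program can be sketched in a few lines: filter the text down to the twelve meaningful letters and discard everything else. (What each letter then does is not modeled here; this is only the comment-stripping pass.)

```python
# The twelve letters with programmatic meaning in Captive; everything
# else -- including upper-case letters -- is comment and is dropped.
MEANINGFUL = set("bdfghjklpqty")

def captive_tokens(text):
    """Strip a text down to the characters Captive actually reads."""
    return "".join(c for c in text if c in MEANINGFUL)

print(captive_tokens("long praise styles a droopy, overt tattoo"))
# lgptyldpytttt
```

Run over the phrase above, this leaves only the tall-letter skeleton that carries the program; the surrounding words exist purely for the human reader.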
Like Prāsa, Captive was designed before LLMs were commonly used, yet it feels like it could have been designed as an anti-AI language, in this case because it uses hard textual constraints that contemporary models handle poorly. Older text generators fare better: simple Markov chains can produce valid Captive programs, though they tend to emerge in the familiar surreal-yet-bland register of spam (perhaps not so different from the hand-crafted example above). One can, of course, write Captive by hand as the Oulipians would, perhaps starting with the letters of a working program and filling in short letters around them. This is still a new language, and it is not yet clear where it might go -- which is one of the most exciting aspects of designing a language. The most interesting application, however, may be scouring large corpora for texts that trigger more compelling Captive programs. The infinite monkeys of the internet release mountains of text daily; some may have secret readings as Captive texts, however unintentional.