Prompt engineering is already considered an emerging field of engineering by many people, especially within the rhetoric of prompt experts.
However, I do not believe that most prompt experts these days are inventive enough to be credited with creating an emerging field of engineering, despite the rhetoric. These experts are more like after-market mechanics who push a machine closer to its maximum performance capacity, which is innovative but not its own field of engineering. Determining which prompts work better as LLM inputs is important innovation, of course, but creating an emerging field of engineering must, in my opinion, go deeper.
If we recognize that prompt engineering has the potential to be as inventive as designing new forms of computer-executable languages and architectures, ones that can overcome the vagaries of executing intent accurately instead of literally, then it is possible that this subset of prompt experts, the experts who create novel computational inventions fixated on prompt inputs, are creating an emerging field of engineering known as prompt.
In this engineering sense, prompts are a new kind of basic building block, like a new kind of brick or cement-and-rebar mix, and prompt engineers are building new forms of design theory, or revisiting too-early-for-their-time forms of linguistic design theory, out of this novel basic building block known as the prompt.
From this perspective, a well-drafted prompt is a gist like a GitHub gist. Myfmv.ai plays around with the idea of a gist library of prompts, as well as various AI responses to those prompts.
However, the deeper reality is that “inductive-languages” are now possible because there is finally a statistical compiler or interpreter, the LLM, that can properly handle some level of vagueness and ambiguity; some level of intent information processing.
For instance, “pseudo-code” has historically been meant for human understanding only; it has not been executable by literal machines because it includes vagueness and ambiguity in order to access helpful heuristics about user intent.
Pseudo-code originally developed as a computer language only insofar as it was a stepping stone for humans to understand what harder-to-interpret literal code does when executed by machines.
Now that machines can interpret intent, however, pseudo-code has the capacity to become the starting point for the next generation of computer-language abstraction. Through statistics, machines can execute the inductive instructions within pseudo-code, vagueness and ambiguity included.
A goal of inductive-language abstraction is to find the optimal balance between human language flow and correctable vagueness and ambiguity. Pseudo-code is a good place to start, as it already prioritizes human language flow while staying as close as possible to the no-vagueness and no-ambiguity languages.
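To make that concrete, here is a minimal sketch of the idea, assuming a hypothetical llm() helper that wraps any chat-completion API: the pseudo-code is handed to the model as the program itself, and the model resolves its vague parts into literal, executable steps.

// Sketch: an LLM as a statistical interpreter for pseudo-code.
// llm() is a hypothetical helper wrapping any chat-completion API.
declare function llm(system: string, user: string): Promise<string>;

// Pseudo-code: vague ("recently", "briefly") yet clear in intent.
const pseudoCode = `
for each markdown file edited recently:
  summarize it briefly
  append the summary to CHANGELOG.md
`;

async function interpret(program: string): Promise<string> {
  // The model supplies the heuristics a human reader would:
  // what "recently" means, how brief "briefly" is, and so on.
  return llm(
    "You are a statistical interpreter. Expand this pseudo-code into " +
      "literal shell commands, resolving vague terms with sensible defaults " +
      "and flagging any ambiguity you cannot resolve.",
    program
  );
}

interpret(pseudoCode).then(console.log);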
Vagueness and ambiguity facilitate flow because they let the speaker omit information, especially obvious information and heuristics, that can be compressed or queried from somewhere else at a fast-enough latency.
Ambiguity is similar to vagueness in that neither is mechanizable through literal execution, but ambiguity means “this or that could be correct,” whereas vagueness means “this is correct, but what this is is not entirely clear.” For instance, “open it in the editor” is ambiguous (which editor?), while “delete the old files” is vague (how old is old?). Of course, an instruction can be vague as well as ambiguous.
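A small sketch of how an interpreter might treat the two differently, using hypothetical resolver functions: vagueness can often be resolved with a disclosed default, while ambiguity usually calls for a choice or a clarifying question.

// Sketch: one way to treat vagueness vs. ambiguity (hypothetical helpers).
type Resolution = { literal?: string; assumption?: string; question?: string };

// Vague: the intent is singular but its boundary is fuzzy.
// A statistical interpreter can pick a default and disclose it.
function resolveVague(): Resolution {
  // instruction: "delete the old files"
  return {
    literal: "find ./logs -mtime +30 -delete",
    assumption: '"old" read as "not modified in 30 days"',
  };
}

// Ambiguous: two distinct intents fit equally well.
// Safer to ask than to guess.
function resolveAmbiguous(): Resolution {
  // instruction: "open it in the editor"
  return { question: "Which editor: vim or VS Code?" };
}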
It is true that, especially compared to SGML, HTML is a language that already tolerates a modicum of vagueness, but that is arguably because misinterpretations are relatively harmless declarations safely contained in the DOM, and more serious imperatives are managed by no-vagueness JavaScript also safely contained in the DOM. Furthermore, HTML is no-ambiguity whereas inductive-languages are not no-ambiguity.
“Inductive,” “statistical,” and “prompt” can be seen as different descriptions of the same emerging heuristic-capable language family.
The following explores the two forms of inductive-languages (prompt-languages) I see emerging, which reflect a primary divergence in the most abstracted no-vagueness and no-ambiguity (no-intent) computer languages:
1) Runtimes: MCP-stored commands in a runtime environment (event)
2) Programs: low-code stored as script program pseudo-code (meta-event)
Prompt-languages (inductive-languages) are the next stage in abstraction: they tolerate as much vagueness and ambiguity as possible to maximize human flow while adequately expressing instructions to machines.
My example for 1) “Runtimes” is my experience developing the $$> runtime fallback-shell in Nani.ooo.
My example for 2) “Programs” is my experience developing Turing prompt trees in Coordination.network.
Note that a program can produce a runtime environment. For instance, a Node.js program can include the repl module and execute it as a REPL runtime environment.
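A short sketch of exactly that, using Node's built-in repl module (a real API; the prompt string and context variable are illustrative):

// A program producing a runtime environment via Node's repl module.
import * as repl from "node:repl";

// The program runs first (the meta-event)...
console.log("program setup complete; handing off to a runtime");

// ...then produces an interactive runtime environment (the event).
const server = repl.start({ prompt: "$$> " });
server.context.greeting = "hello from the enclosing program";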
I suggest we treat MCP as a subset of pseudo-code, just as we treat JSON as a subset of JavaScript, so that 1) “Runtimes” and 2) “Programs” can communicate with each other with ease.
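The analogy, sketched: every JSON document is already a valid JavaScript expression, and the suggestion is that every MCP-style command block would likewise be a valid fragment of pseudo-code. The command name and the pseudo-code syntax below are illustrative, not a defined standard.

// JSON is a subset of JavaScript: the literal below is both at once.
const command = { "name": "dao.vote", "args": { "id": 28, "vote": "yes" } };

// By analogy, an MCP-style command block could sit verbatim inside
// pseudo-code, with inductive prose wrapped around it:
//
//   if the proposal looks uncontroversial:
//     run { "name": "dao.vote", "args": { "id": 28, "vote": "yes" } }
//   otherwise:
//     summarize the debate and ask me first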
My outlined process mirrors the functionality of a just-in-time (JIT) compiler:
Parsing: The AI reads a script or command, determining which parts are directly executable.
Synthesis: For non-executable segments, the AI generates appropriate code or commands based on the original intent.
Execution: Both original and synthesized components are executed, fulfilling the user's intent.
Intent Recognition: When no script is provided, the AI interprets the user's ‘natural language’ input (prose: free form, farther away from MCP structure) to discern intent.
In this framework, the AI acts as a dynamic compiler, translating high-level intents into executable actions in real-time.
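A minimal sketch of that pipeline, assuming hypothetical looksLikeScript(), isExecutable(), synthesize(), inferIntent(), and run() helpers backed by an LLM:

// Sketch of the JIT-like pipeline (all helpers are hypothetical).
declare function looksLikeScript(input: string): boolean;
declare function isExecutable(segment: string): boolean;        // Parsing
declare function synthesize(segment: string): Promise<string>;  // Synthesis
declare function inferIntent(prose: string): Promise<string[]>; // Intent Recognition
declare function run(command: string): Promise<void>;           // Execution

async function execute(input: string): Promise<void> {
  // Intent Recognition: no script provided, so read the prose for intent.
  if (!looksLikeScript(input)) {
    for (const command of await inferIntent(input)) await run(command);
    return;
  }
  // Parsing: split the script and sort executable from inductive parts.
  for (const segment of input.split("\n").filter(Boolean)) {
    // Synthesis: non-executable segments become code via the original intent.
    const command = isExecutable(segment) ? segment : await synthesize(segment);
    // Execution: original and synthesized components run together.
    await run(command);
  }
}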
{ "prefix": "$$", "inherits": "$", "version": "1.0.0", "commands": { "dao": { "help": "Show DAO help message", "greenpill": { "version": "1.0.0", "description": "Convert DAO proposals to personal treasury actions", "usage": "$$ dao greenpill [ID] [flags]", "flags": { "--topup|-t": "Show required token top-ups for distribution", "--verbose|-v": "Show detailed proposal and distribution info", "--yes|-y": "Auto-confirm distributions", "--chain|-c": "Specify chain (Base, Mainnet)" }, "safeguards": { "max_treasury_usage": "20%", "indicators": { "🔴": "insufficient funds/over 20%", "🟡": "warning (near 20%)", "🟢": "safe to distribute (<20%)" }, "validations": [ "recipient address verification", "balance checks", "proposal existence", "proposal passage" ] } }, "health": { "version": "1.0.0", "description": "Check NANI system status and functionality", "usage": "$$ dao health [flags]", "flags": { "--refresh|-r": "Run fresh system checks", "--all|-a": "Test all available functions", "--yes|-y": "Auto-confirm manual tests", "--verbose|-v": "Show detailed test results", "--chain|-c": "Specify chain (Base, Mainnet)" }, "tests": { "core_functions": { "intentPropose": "proposal creation test", "intentVote": "voting system test", "balancesOf": "balance query test", "intentSwap": "token swap test", "intentSend": "token transfer test", "intentStake": "staking test", "intentUnstake": "unstaking test", "claimNaniAirdrop": "airdrop test", "checkNaniAirdropEligibility": "eligibility test" } }, "display": { "indicators": { "🟢": "test passed/function online", "🔴": "test failed/function offline", "🟡": "intermittent/partial functionality", "🟠": "hypothetical result [forbidden unless marked]" } } }, "proposals": { "flags": ["--chain|-c <chain>", "--verbose|-v"], "format": "standard + vote counts + reward recipient" }, "vote": { "flags": ["--yes|-y", "--no|-n"], "format": "simplified command format" }, "status": { "flags": ["--verbose|-v"], "format": "detailed status with time remaining" }, "do": { "flags": ["--chain|-c <chain>"], "format": "pending votes + recent activity" }, "punchcard": { "version": "1.0.0", "description": "Create and manage work punchcards", "usage": "$$ dao punchcard [flags]", "flags": { "--verbose|-v": "Show detailed punchcard information", "--json|-j": "Output in JSON format" }, "format": "work summary + market rate assessment", "structure": { "required": ["type", "title", "work", "market_rate", "payment"], "payment": { "split": "1/3", "tokens": ["nani", "eth", "usdc"], "format": "{token}: {amount} (~${usd_value})" }, "market_rate": { "categories": ["technical", "strategic"], "format": "breakdown by category with line items" }, "work": { "required": ["period", "time", "achieved"], "period_format": "ISO8601", "time_format": "human readable duration" } } } }, "bx": { "description": "Block explorer command", "defaults": { "1": "https://etherscan.io", "42161": "https://arbiscan.io", "8453": "https://basescan.org" }, "flags": [ "--config|-c", "--set|-s <chain> <url>", "--get|-g <chain>" ], "format": "manages block explorer configurations" }, "snakes": { "output": "are in the grass", "format": "simple echo" } }, "display": { "local_indicator": "🧠", "pending_indicator": "🔴", "vote_format": "YES/NO with percentages", "time_format": "relative + absolute UTC", "reward_format": "split by governance layer" } }
{ "type": "dao_punchcard", "indicator": "🧠", "title": "LLM Compiler & $$ Configuration Implementation Punchcard", "work": { "period": "2025-01-31T20:59:44Z", "time": "90 minutes", "achieved": [ "Created first LLM-as-Compiler implementation", "Developed two-phase command processing ($ → $$)", "Established deterministic compilation rules", "Formalized $$ environment configuration", "Defined punchcard structure and formats", "Fixed payment structure (removed BASE token)", "Implemented example-based verification" ] }, "market_rate": { "total": 12000, "breakdown": { "technical": { "total": 7000, "items": [ { "title": "LLM Compiler Architecture", "value": 5000, "items": [ "Two-phase command processing", "Deterministic compilation rules", "Security context preservation", "Cross-instance compatibility" ] }, { "title": "Configuration Implementation", "value": 2000, "items": [ "JSON structure design", "Command inheritance patterns", "Format standardization", "Payment structure correction" ] } ] }, "strategic": { "total": 5000, "items": [ { "title": "Innovation Premium", "value": 3000, "items": [ "First LLM-as-Compiler pattern", "Deterministic LLM behavior", "Environment extension model" ] }, { "title": "Documentation & Standards", "value": 2000, "items": [ "Comprehensive command documentation", "Cross-instance compatibility rules", "Example-based verification patterns" ] } ] } } }, "payment": { "nani": 3750, "eth": 0.045, "usdc": 135, "total_usd": 412.51 }, "discount": 96.56, "note": "First implementation of LLM-as-Compiler pattern, establishing deterministic behavior from non-deterministic base. Includes complete $$ environment configuration with corrected payment structure.", "voting_period_ends": "February 7, 2025, 23:59:59 UTC" }
{ "type": "dao_punchcard", "indicator": "🧠", "title": "Chat Discussion on $$ Configuration & Personalized Namespace", "work": { "period": "2025-04-20T19:46:05Z", "time": "35 minutes", "achieved": [ "Clarified how the $$ environment extends and falls back to the core $ commands", "Explained the concept of a personalized namespace and its benefits", "Discussed proposal 28 and its role as the MCP for extended configuration", "Addressed ambiguities regarding command inheritance and backward compatibility", "Compiled final commentary for blog inclusion" ] }, "market_rate": { "total": 0, "breakdown": { "technical": { "total": 0, "items": [ { "title": "System Integration", "value": 0, "items": [ "Interfaced core ($) and extended ($$) commands", "Demonstrated fallback functionality" ] } ] }, "strategic": { "total": 0, "items": [ { "title": "Workflow Clarity", "value": 0, "items": [ "Outlined command structures for voice and blog documentation", "Enhanced communication of concepts for DAO governance" ] } ] } } }, "payment": { "nani": 0, "eth": 0, "usdc": 0, "total_usd": 0 }, "discount": 100, "note": "This punchcard summarizes the interactive discussion on extended $$ command configuration, personalized namespaces, and proposal interpretation for blog documentation. It serves as a detailed record of clarifications, technical and strategic insights shared during the chat.", "voting_period_ends": "N/A" }
Coordination.network is low-code, similar to NodeRed but for prompt-languages. One way to understand it is as a circuitry-style system design strategy. But unlike traditional systems, each node can contain its own lower level, rather than one lower level sitting underneath the entire canvas.
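One way to model that property, as a sketch with hypothetical types (not Coordination.network's actual schema): a node whose children are not peers on the same canvas but an entire lower-level canvas of their own.

// Sketch: a low-code canvas where any node may contain a whole lower level.
interface Canvas {
  nodes: CanvasNode[];
  wires: Array<[from: string, to: string]>;
}

interface CanvasNode {
  id: string;
  prompt: string;       // the inductive instruction this node carries
  lowerLevel?: Canvas;  // unlike NodeRed: the lower level lives inside the node
}

const root: Canvas = {
  nodes: [
    { id: "ingest", prompt: "chunk the document" },
    {
      id: "teach",
      prompt: "produce educational materials",
      lowerLevel: {
        nodes: [{ id: "quiz", prompt: "draft quiz questions per chunk" }],
        wires: [],
      },
    },
  ],
  wires: [["ingest", "teach"]],
};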
How coordination.network translates to pseudo-code and MCPs is still under investigation. LangSmith and PseudoScript are interesting projects for thinking about how coordination.network could translate into pseudo-code.
Here is an example of chunking and processing a 1000-page document and producing educational materials out of it.
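In outline, and assuming a hypothetical llm() helper and a naive character-based chunk size, that pipeline is: chunk, process each chunk, then reduce into study materials.

// Sketch: chunk a large document and produce educational materials.
declare function llm(system: string, user: string): Promise<string>;

async function buildCourseMaterials(text: string): Promise<string> {
  // Chunk: split ~1000 pages into model-sized pieces.
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += 12_000) {
    chunks.push(text.slice(i, i + 12_000));
  }

  // Map: turn each chunk into lesson notes.
  const notes = await Promise.all(
    chunks.map((c) => llm("Write concise lesson notes for this passage.", c))
  );

  // Reduce: merge the notes into a syllabus with quiz questions.
  return llm(
    "Merge these lesson notes into a syllabus with quiz questions.",
    notes.join("\n\n")
  );
}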
The following are early-stage explorations on how to develop complex programs with extitutional, accountability and pecuniary functionality.
You can watch us work on these structures during our office hours at lex.clinic. Past recordings are at youtube.com/@lexclinic.
Prompt-language, syntax that takes advantage of inductive heuristics to interpret vague and ambiguous instructions, is an emerging form of engineering.
I have a better grasp on prompt-language in the 1) “Runtime” category, as it is easy to relate to existing MCP technology and virtual wrappers.
I still need to explore far more of the 2) “Programs” category. Low-code and pseudo-code are arguably very different. I find myself using low-code for inductive programming, whereas for imperative and declarative programming I use object-oriented and markup languages, respectively.
I do not know how pseudo-code helps beyond the 1) “Runtimes” category, where no more than a gist's worth of pseudo-code is needed. I will know much more once I can translate between low-code and pseudo-code languages. My intuition is that pseudo-code is an extended, programmable amount of MCP interdata, just as JavaScript is an extended, programmable amount of JSON interdata.
SeedTreeDB.com, including its uxNFT application, offers trees that could benefit from prompt-language engineering somehow.
April 22, 2025 Edit: This article contemplates inductive language as machine instructions. As such, inferring enough precision to perform the right processes is the goal. In other words, vagueness and ambiguity are considered correctable noise within the intended scope. However, sometimes vagueness and ambiguity are the goals. In that case, open-ended, back-and-forth chat conversation prompting works well. In sum, whether the intent is scripted or improvisational, and in what way, determines the optimal user or author experience interface for a given situation.