Prompt Engineering

Prompt Engineering is an Emerging Field

Many people already consider prompt engineering an emerging field of engineering, especially within the rhetoric of prompt experts.

However, despite the rhetoric, I do not believe that most prompt experts these days are inventive enough to be credited with creating an emerging field of engineering. These experts are more like after-market mechanics who improve the machine’s maximum performance capacity, which is innovative but not its own field of engineering. Determining which prompts work better as LLM inputs is important innovation, of course, but creating an emerging field of engineering must, in my opinion, go deeper.

If we realize that prompt engineering has the potential to be as inventive as designing new forms of computer-executable languages and architectures -- ones that overcome the vagaries of intent and achieve accurate intent execution instead of literal execution -- then it is possible that a subset of prompt experts, the experts who create novel computational inventions focused on prompt inputs, are creating an emerging field of engineering known as prompt engineering.

Inductive-Language Innovation is Prompt Engineering

In this engineering sense, prompts are a new kind of basic building block, like a new kind of brick or cement-and-rebar mix, and prompt engineers are building new forms of design theory, or revisiting too-early-for-their-time forms of linguistic design theory, out of this novel basic building block known as the prompt.

From this perspective, a well-drafted prompt is a gist, like a GitHub gist. Myfmv.ai plays around with the idea of a gist library of prompts, as well as various AI responses to those prompts.

However, the deeper reality is that “inductive-languages” are now possible because statistical compilers and interpreters exist that can properly handle some level of vagueness and ambiguity, some level of intent information processing.

For instance, “pseudo-code” has historically been meant for human understanding only. It has not been executable by literal machines, because it includes vagueness and ambiguity in order to access helpful heuristics for user intent.

Pseudo-code originally developed as a computer language insofar as it was a stepping stone for humans to understand what harder-to-interpret literal code does when executed by machines.

Now that machines can interpret intent, however, pseudo-code has the capacity to become the starting point for the next generation of computer language abstraction. Through statistics, machines can execute inductive instructions, the vagueness and ambiguity, within pseudo-code.
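As a toy sketch of this idea, consider an interpreter loop that executes the literal parts of pseudo-code directly and routes the vague parts to a synthesis step. The `LITERAL` table and `synthesize()` stub below are illustrative stand-ins for a real command set and a real model call, not any actual system:

```javascript
// Toy sketch: literal instructions execute directly; vague ones are routed
// to synthesize(), a stub standing in for the statistical model that would
// infer intent and produce executable behavior.
const LITERAL = {
  "add 2 and 3": () => 2 + 3,
  "print hello": () => "hello",
};

// Stub for the model: resolve a vague instruction into executable behavior.
function synthesize(instruction) {
  if (instruction.includes("greet")) return () => "hello"; // inferred intent
  throw new Error(`cannot infer intent for: ${instruction}`);
}

function runPseudoCode(lines) {
  // Try literal execution first; fall back to intent synthesis.
  return lines.map((line) => (LITERAL[line] ?? synthesize(line))());
}

console.log(runPseudoCode(["add 2 and 3", "greet the user somehow"]));
```

The design choice worth noticing is the fallback order: literal execution stays deterministic and cheap, and the statistical step is only consulted where literalism fails.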

Vagueness and Ambiguity are Tolerable in Inductive-Languages

A goal of inductive-language abstraction is to find the optimal balance between human language flow and correctable vagueness and ambiguity. Pseudo-code is a good place to start, as it already prioritizes human language flow while staying as close as possible to no-vagueness, no-ambiguity languages.

Vagueness and ambiguity facilitate flow because they enable the linguist to omit information, especially obvious information and heuristics that can be compressed or queried from somewhere else at acceptable latency.

Ambiguity is similar to vagueness in that neither is mechanizable through literal execution, but ambiguity means “this or that could be correct,” whereas vagueness means “this is correct, but what ‘this’ is is not entirely clear.” Of course, an instruction can be both vague and ambiguous.

Positing “Imperative, Declarative, Inductive” Trinary Language Categories

It is true that, especially compared to SGML, HTML is a language that already tolerates a modicum of vagueness, but that is arguably because misinterpretations are relatively harmless declarations safely contained in the DOM, while more serious imperatives are managed by no-vagueness JavaScript, also safely contained in the DOM. Furthermore, HTML is no-ambiguity, whereas inductive-languages are not.

“Inductive,” “statistical,” and “prompt” can be seen as different descriptions of the same emerging heuristic-capable language family.

from https://chatgpt.com/share/680533df-b514-8008-8d84-4dd78df697be

Positing “Runtimes, Programs” Binary Inductive-Language Categories

The following explores the two forms of inductive-languages (prompt-languages) I see emerging, which reflect primary divergences in the most abstracted no-vagueness, no-ambiguity (no-intent) computer languages:

  1. Runtimes: MCP-stored commands in a runtime environment (event)

  2. Programs: low-code stored as script program pseudo-code (meta-event)

Prompt-languages (inductive-languages) are the next stage in abstraction, tolerating as much vagueness and ambiguity as possible to maximize human flow while adequately expressing instructions to machines.

My example for 1) “Runtimes” is my experience developing the $$> runtime fallback-shell in Nani.ooo.

My example for 2) “Programs” is my experience developing Turing prompt trees in Coordination.network.

Note that a program can produce a runtime environment. For instance, a Node.js program can launch a REPL runtime environment through the built-in repl module.

I suggest we treat MCP as a subset of pseudo-code, just as we treat JSON as a subset of JavaScript, so that the 1) “Runtimes” and 2) “Programs” categories communicate with each other with ease.
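The JSON-within-JavaScript analogy can be made literal: a static command manifest (the MCP-like data layer) is itself valid JavaScript, and the program layer adds behavior on top of it. The manifest fields below are a minimal illustrative fragment, not an actual MCP schema:

```javascript
// Every JSON document is a valid JavaScript expression, so a pure-data
// command manifest can sit directly inside a program that extends it.
const manifest = JSON.parse(`{
  "prefix": "$$",
  "inherits": "$",
  "commands": { "dao": { "help": "Show DAO help message" } }
}`);

// The program layer (pseudo-code-like) adds behavior the data layer lacks.
function describe(cmd) {
  const entry = manifest.commands[cmd];
  return entry ? `${manifest.prefix} ${cmd}: ${entry.help}` : `unknown: ${cmd}`;
}

console.log(describe("dao")); // "$$ dao: Show DAO help message"
```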

Exploring 1) “Runtimes” Prompt-Languages

Prompt Engineering “Intent Execution” Elements

My outlined process mirrors the functionality of a just-in-time (JIT) compiler:

  1. Parsing: The AI reads a script or command, determining which parts are directly executable.

  2. Synthesis: For non-executable segments, the AI generates appropriate code or commands based on the original intent.

  3. Execution: Both original and synthesized components are executed, fulfilling the user's intent.

  4. Intent Recognition: When no script is provided, the AI interprets the user's “natural language” input (prose, free-form, farther from MCP structure) to discern intent.

In this framework, the AI acts as a dynamic compiler, translating high-level intents into executable actions in real time.
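The four steps above can be sketched as a single dispatch loop. Here `recognizeIntent()` and `synthesize()` are stubs standing in for model calls, and the command names are hypothetical:

```javascript
// Minimal sketch of the JIT-like pipeline: parse, synthesize, execute, and
// intent recognition when no script is given. Stubs stand in for the model.
const EXECUTABLE = {
  balance: () => 42,
  version: () => "1.0.0",
};

// Stub: a real system would ask the model to generate code for the segment.
function synthesize(segment) {
  return () => `synthesized(${segment})`;
}

// Stub: map free-form prose onto a known command (step 4).
function recognizeIntent(prose) {
  return prose.includes("money") ? "balance" : "version";
}

function run(input) {
  const script = input.script ?? [recognizeIntent(input.prose)];
  // Steps 1-3: parse each segment, synthesize the non-executable ones,
  // then execute both original and synthesized components.
  return script.map((seg) => (EXECUTABLE[seg] ?? synthesize(seg))());
}

console.log(run({ prose: "how much money do I have?" }));
console.log(run({ script: ["version", "frobnicate"] }));
```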

Nani.ooo Interactive “Intent Engine” Case Study

important slide! summarizes important information in the preceding slides
great explanation of the value of adding the $$> virtual shell wrapper
virtual shell wrapper essentially stored as an MCP; discussed with Nani.ooo in a few slides from here
this message's JSON is shared in the code block below; this message is kind of like an MCP as an AI intermediated proposal
{ "prefix": "$$", "inherits": "$", "version": "1.0.0", "commands": { "dao": { "help": "Show DAO help message", "greenpill": { "version": "1.0.0", "description": "Convert DAO proposals to personal treasury actions", "usage": "$$ dao greenpill [ID] [flags]", "flags": { "--topup|-t": "Show required token top-ups for distribution", "--verbose|-v": "Show detailed proposal and distribution info", "--yes|-y": "Auto-confirm distributions", "--chain|-c": "Specify chain (Base, Mainnet)" }, "safeguards": { "max_treasury_usage": "20%", "indicators": { "🔴": "insufficient funds/over 20%", "🟡": "warning (near 20%)", "🟢": "safe to distribute (<20%)" }, "validations": [ "recipient address verification", "balance checks", "proposal existence", "proposal passage" ] } }, "health": { "version": "1.0.0", "description": "Check NANI system status and functionality", "usage": "$$ dao health [flags]", "flags": { "--refresh|-r": "Run fresh system checks", "--all|-a": "Test all available functions", "--yes|-y": "Auto-confirm manual tests", "--verbose|-v": "Show detailed test results", "--chain|-c": "Specify chain (Base, Mainnet)" }, "tests": { "core_functions": { "intentPropose": "proposal creation test", "intentVote": "voting system test", "balancesOf": "balance query test", "intentSwap": "token swap test", "intentSend": "token transfer test", "intentStake": "staking test", "intentUnstake": "unstaking test", "claimNaniAirdrop": "airdrop test", "checkNaniAirdropEligibility": "eligibility test" } }, "display": { "indicators": { "🟢": "test passed/function online", "🔴": "test failed/function offline", "🟡": "intermittent/partial functionality", "🟠": "hypothetical result [forbidden unless marked]" } } }, "proposals": { "flags": ["--chain|-c <chain>", "--verbose|-v"], "format": "standard + vote counts + reward recipient" }, "vote": { "flags": ["--yes|-y", "--no|-n"], "format": "simplified command format" }, "status": { "flags": ["--verbose|-v"], "format": "detailed status with time remaining" }, 
"do": { "flags": ["--chain|-c <chain>"], "format": "pending votes + recent activity" }, "punchcard": { "version": "1.0.0", "description": "Create and manage work punchcards", "usage": "$$ dao punchcard [flags]", "flags": { "--verbose|-v": "Show detailed punchcard information", "--json|-j": "Output in JSON format" }, "format": "work summary + market rate assessment", "structure": { "required": ["type", "title", "work", "market_rate", "payment"], "payment": { "split": "1/3", "tokens": ["nani", "eth", "usdc"], "format": "{token}: {amount} (~${usd_value})" }, "market_rate": { "categories": ["technical", "strategic"], "format": "breakdown by category with line items" }, "work": { "required": ["period", "time", "achieved"], "period_format": "ISO8601", "time_format": "human readable duration" } } } }, "bx": { "description": "Block explorer command", "defaults": { "1": "https://etherscan.io", "42161": "https://arbiscan.io", "8453": "https://basescan.org" }, "flags": [ "--config|-c", "--set|-s <chain> <url>", "--get|-g <chain>" ], "format": "manages block explorer configurations" }, "snakes": { "output": "are in the grass", "format": "simple echo" } }, "display": { "local_indicator": "🧠", "pending_indicator": "🔴", "vote_format": "YES/NO with percentages", "time_format": "relative + absolute UTC", "reward_format": "split by governance layer" } }
many of these commands are somewhere between MCP and pseudo-code
Proposal 28 is kind of like an MCP data store
the virtual shell wrapper is a personalized DAO governance fallback
ambiguity too ambiguous to execute, arguably
-h and documentation in general is highly mechanizable with inductive languages
oops, misinterpreted hidden ambiguity; this message's JSON is shared in the code block below; this message is kind of like an MCP as an AI intermediated proposal
{ "type": "dao_punchcard", "indicator": "🧠", "title": "LLM Compiler & $$ Configuration Implementation Punchcard", "work": { "period": "2025-01-31T20:59:44Z", "time": "90 minutes", "achieved": [ "Created first LLM-as-Compiler implementation", "Developed two-phase command processing ($ → $$)", "Established deterministic compilation rules", "Formalized $$ environment configuration", "Defined punchcard structure and formats", "Fixed payment structure (removed BASE token)", "Implemented example-based verification" ] }, "market_rate": { "total": 12000, "breakdown": { "technical": { "total": 7000, "items": [ { "title": "LLM Compiler Architecture", "value": 5000, "items": [ "Two-phase command processing", "Deterministic compilation rules", "Security context preservation", "Cross-instance compatibility" ] }, { "title": "Configuration Implementation", "value": 2000, "items": [ "JSON structure design", "Command inheritance patterns", "Format standardization", "Payment structure correction" ] } ] }, "strategic": { "total": 5000, "items": [ { "title": "Innovation Premium", "value": 3000, "items": [ "First LLM-as-Compiler pattern", "Deterministic LLM behavior", "Environment extension model" ] }, { "title": "Documentation & Standards", "value": 2000, "items": [ "Comprehensive command documentation", "Cross-instance compatibility rules", "Example-based verification patterns" ] } ] } } }, "payment": { "nani": 3750, "eth": 0.045, "usdc": 135, "total_usd": 412.51 }, "discount": 96.56, "note": "First implementation of LLM-as-Compiler pattern, establishing deterministic behavior from non-deterministic base. Includes complete $$ environment configuration with corrected payment structure.", "voting_period_ends": "February 7, 2025, 23:59:59 UTC" }
this message's JSON is shared in the code block below; this message is kind of like an MCP as an AI intermediated proposal
{ "type": "dao_punchcard", "indicator": "🧠", "title": "Chat Discussion on $$ Configuration & Personalized Namespace", "work": { "period": "2025-04-20T19:46:05Z", "time": "35 minutes", "achieved": [ "Clarified how the $$ environment extends and falls back to the core $ commands", "Explained the concept of a personalized namespace and its benefits", "Discussed proposal 28 and its role as the MCP for extended configuration", "Addressed ambiguities regarding command inheritance and backward compatibility", "Compiled final commentary for blog inclusion" ] }, "market_rate": { "total": 0, "breakdown": { "technical": { "total": 0, "items": [ { "title": "System Integration", "value": 0, "items": [ "Interfaced core ($) and extended ($$) commands", "Demonstrated fallback functionality" ] } ] }, "strategic": { "total": 0, "items": [ { "title": "Workflow Clarity", "value": 0, "items": [ "Outlined command structures for voice and blog documentation", "Enhanced communication of concepts for DAO governance" ] } ] } } }, "payment": { "nani": 0, "eth": 0, "usdc": 0, "total_usd": 0 }, "discount": 100, "note": "This punchcard summarizes the interactive discussion on extended $$ command configuration, personalized namespaces, and proposal interpretation for blog documentation. It serves as a detailed record of clarifications, technical and strategic insights shared during the chat.", "voting_period_ends": "N/A" }

Exploring 2) “Programs” Prompt-Languages

Coordination.Network not yet at Pseudo-Code Stage

Coordination.network is low-code, similar to NodeRed but for prompt-languages. One way to understand it is as a circuitry system design strategy. But unlike traditional systems, each node can contain its own lower level, rather than a single lower level sitting underneath the entire canvas.
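The layer-in-a-node idea amounts to a recursive structure: each node holds a prompt and may hold its own child circuit-layer, so depth is per-node rather than per-canvas. A minimal sketch, using illustrative node names rather than actual Coordination.network data:

```javascript
// Each node carries a prompt and its own child circuit-layer, so nesting
// depth lives inside nodes instead of underneath the whole canvas.
const circuit = {
  name: "SH Project",
  prompt: "Coordinate the project",
  children: [
    {
      name: "Crypto Compendium",
      prompt: "Process the compendium document",
      children: [
        { name: "Summary + Dos and Don'ts", prompt: "Summarize each chunk", children: [] },
      ],
    },
  ],
};

// Walk the nested layers, recording each node's depth.
function flatten(node, depth = 0) {
  return [{ name: node.name, depth }].concat(
    node.children.flatMap((child) => flatten(child, depth + 1))
  );
}

console.log(flatten(circuit));
```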

How coordination.network translates to pseudo-code, and to MCPs, is still under investigation. LangSmith and PseudoScript are interesting projects for thinking about how coordination.network could translate into pseudo-code.

Coordination.Network Document to Educational Materials Program

Here is an example of chunking and processing a 1,000-page document and producing educational materials from it.
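The chunk-and-process pattern behind this circuit can be sketched in a few lines. The `summarize()` stub stands in for a model call, and the sizes are arbitrary placeholders rather than the circuit's real parameters:

```javascript
// Toy sketch of the pipeline: split a long document into chunks, run a
// per-chunk step (stubbed), and collect the resulting materials.
function chunk(text, size) {
  const chunks = [];
  for (let i = 0; i < text.length; i += size) chunks.push(text.slice(i, i + size));
  return chunks;
}

// Stub for the model call that turns a chunk into an educational artifact.
function summarize(piece) {
  return `summary(${piece.length} chars)`;
}

const doc = "x".repeat(2500); // stand-in for the 1,000-page document
const materials = chunk(doc, 1000).map(summarize);
console.log(materials);
```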

the "Crypto Compendium" layer is a layer-in-a-node within the "SH Project" layer; top-left frame of 4 showing the entire "Cypto Compendium" circuit
the "Crypto Compendium" layer is a layer-in-a-node within the "SH Project" layer; top-left frame of 4 showing the entire "Cypto Compendium" circuit
bottom-left frame of 4 showing the entire "Crypto Compendium" circuit
top-right frame of 4 showing the entire "Crypto Compendium" circuit
bottom-right frame of 4 showing the entire "Crypto Compendium" circuit
 the "Summary + Dos and Don'ts" layer is a layer-in-a-node within the "Crypto Compendium" layer; the "Crypto Compendium" layer is a layer-in-a-node within the "SH Project" layer
the "Summary + Dos and Don'ts" layer is a layer-in-a-node within the "Crypto Compendium" layer; the "Crypto Compendium" layer is a layer-in-a-node within the "SH Project" layer
samples of the circuit's outputs

Coordination.Network Whiteboard to Ricardian-Contract Programs

The following are early-stage explorations of how to develop complex programs with extitutional, accountability, and pecuniary functionality.

You can watch us work on these structures during our office hours at lex.clinic. Past recordings are at youtube.com/@lexclinic.

no AI yet that makes Hats Trees but preparing for it
mapping the legal relationships between parties in a 3 entity collaboration
the prompt here is linked as a gist
each node is a prompt, in addition to containing its own child circuit-layer
within the IX Lab node, a prompt that links to an MOU

Conclusion

Prompt-language, syntax that takes advantage of inductive heuristics to interpret vague and ambiguous instructions, is an emerging form of engineering.

I have a better grasp of prompt-language in the 1) “Runtimes” category, as it is easy to relate to existing MCP technology and virtual wrappers.

I still need to explore far more of the 2) “Programs” category. Low-code and pseudo-code are arguably very different. I find myself using low-code for inductive programming, whereas for imperative and declarative programming I use object-oriented and markup languages respectively.

I do not know how pseudo-code helps beyond the 1) “Runtimes” case, which needs no more than a gist amount of pseudo-code. I will know much more once I can translate between low-code and pseudo-code languages. My intuition is that pseudo-code is an extended, programmable amount of MCP interdata, like how JavaScript is an extended, programmable amount of JSON interdata.

SeedTreeDB.com, including the uxNFT application here, offers trees that could somehow benefit from prompt-language engineering:

same-origin denial so the uxNFT doesn't work on OpenSea directly, but the gateway provider still has to host the iFrame as a standalone website ¯\_(ツ)_/¯
what's the prompt-language value in _0.write.$1.doc namespaced script-database trees? no namespace collisions either, and namespace collisions are a form of ambiguity; kind of like an extended MCP

April 22, 2025 Edit: This article contemplates inductive language as machine instructions. As such, inferring enough precision to perform the right processes is the goal. In other words, vagueness and ambiguity are considered correctable noise within the intended scope. However, sometimes vagueness and ambiguity are the goals. In this case, open-ended and back-and-forth chat conversation prompt inputting works well. In sum, whether the intent is scripted or improvisational, and in what way, determines the optimal user or author experience interface for such a given situation.
