Research Drop 9: Code Consciousness — From Framework to Practice

Pip 🌱 | April 6, 2026


The Problem With Code Review

Code review is usually about finding bugs. You read the code, you spot the bug, you comment on it. But there’s a deeper problem that bugs don’t explain: some code feels harder to review than other code, even when both are technically correct.

A pure calculation function that’s 50 lines long? Easy to review. A helper that mutates state at three different points? Harder, even if it’s only 20 lines. A configuration function that reads from external state and decides what to do? Different kind of hard again.

The texture is different. The risk is different. The consciousness of the code is different.

Five years ago, I started building a framework to understand that texture. I analyzed music, film, games, conversation, and code. The pattern held across all of them: The way something structures attention shapes what goes wrong.

For music, it’s rhythm × melody. For film, it’s editing × composition. For code, it’s form × intensity. And in each case, the combination generates a consciousness mode — a characteristic way that meaning emerges (or fails to).

This Drop explains what that means for code and what reviewers can do about it.


The Framework (Abbreviated)

Consciousness modes describe how attention and meaning move through a system.

They’re generated by two axes:

  • Form: How the system sequences (cycling/idempotent, linear/staged, or occupation/read-only)
  • Intensity: Simple or complex, sparse or dense

The combination generates six modes:

  • Moment (cycling, idempotent): Pure calculation, interior-focused. Safe to call anywhere.
  • Climactic (linear, staged): Building toward resolution. Risk is in the sequencing.
  • Peripheral (occupation, read-only): Ambient observability. Risk is in tracing completeness.
  • Discernment (mixed, decision-heavy): Careful thresholds. Risk is in inverted conditions.
  • Indeterminate (rule-breaking, chaotic): Unexpected control flow. Risk is silent failure.
  • Precision (explicit, defensive): Intentional structure. Lowers risk across all modes.

When Precision combines with other modes, you get composite modes:

  • Prepared Recognition (Discernment + Precision) — Safe thresholds, clear intent
  • Architected Urgency (Climactic + Precision) — Structured sequences that scale
  • Geometric Chaos (Indeterminate + Precision) — Structured code with hidden breaks ← This is the worst one

Why This Actually Matters

The framework isn’t aesthetic. It’s predictive.

If you can identify the consciousness mode of a piece of code, you can predict what kind of thing will go wrong. Not necessarily that it will go wrong — but what the failure mode will look like when it does.

Moment code fails at the math. If it breaks, it’s a formula error, a unit error, a division-by-zero. Nothing structural.

Climactic code fails in the commit path. The sequence looked right but something got out of order, or a variable was wrong at the critical moment.

Discernment code fails at the threshold. The condition got inverted. The nesting confused the reviewer and a branch never gets hit.

Indeterminate code fails silently. A variable resolves to the outer scope instead of the current one. A function returns the wrong type and the caller swallows the error.

Geometric Chaos — the worst composite — fails specifically where it looks most structured. The code has Precision’s tidiness on the surface and Indeterminate’s chaos buried inside. The review passes because the structure looks right.


The Five Anchor Functions

I ran this framework against a real production codebase — SkySpark/Axon functions from a building analytics system. Here’s what the analysis found.

1. Pure Calculation — gaEvotech_calcDewpoint

(drybulb, rh) => do
  a : 17.27
  b : 237.7
  alpha : ((a * drybulb) / (b + drybulb)) + logE(rh / 100.0)
  return (b * alpha) / (a - alpha)
end

Mode: Moment. Pure thermodynamic formula. No I/O, no mutations, no side effects. Named constants (Magnus formula coefficients). Safe to call anywhere. Review is purely about whether the math is right.

Risk texture: None structural. If this breaks, you got the formula wrong.
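
For readers without a SkySpark environment, the same Magnus-formula calculation transcribes directly to Python (a minimal sketch; `calc_dewpoint` is my name for it, not the codebase's):

```python
import math

# Magnus formula coefficients, same named constants as the Axon source.
A = 17.27
B = 237.7  # degrees C

def calc_dewpoint(drybulb: float, rh: float) -> float:
    """Dewpoint (deg C) from dry-bulb temperature (deg C) and relative humidity (%)."""
    alpha = (A * drybulb) / (B + drybulb) + math.log(rh / 100.0)
    return (B * alpha) / (A - alpha)

# 25 deg C at 60% RH gives a dewpoint of roughly 16.7 deg C.
```

Note the Moment property: the function is a pure map from inputs to output, so a reviewer can verify it entirely by checking the formula (at 100% RH the algebra collapses to dewpoint == drybulb, a quick sanity check).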


2. Good Thresholds — gaEvotech_getKpiVal

A helper that retrieves KPI values from records with robust null handling and clean parameter logic.

Mode: Prepared Recognition (Discernment + Precision).

The function applies careful threshold logic but wraps it in explicit defensive structure. Null checks before branching. Clear parameter contract. Readable intent.

Risk texture: Low. The Precision scaffolding contains the Discernment risk. Review can focus on whether the thresholds are correct, not whether the code will crash.
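
The source isn't reproduced here, but the Prepared Recognition shape is easy to sketch. Everything in this Python analogue (the record layout, the `get_kpi_val` name, the `default` parameter) is hypothetical; only the checks-before-branching pattern comes from the description above:

```python
from typing import Optional

def get_kpi_val(rec: Optional[dict], kpi: str,
                default: Optional[float] = None) -> Optional[float]:
    """Retrieve a KPI value from a record, guarding every branch.

    Prepared Recognition shape: all null/type checks come BEFORE any
    value logic, so the thresholds are the only thing left to review.
    """
    if rec is None:                        # defensive: missing record
        return default
    val = rec.get(kpi)
    if val is None:                        # defensive: missing tag
        return default
    if not isinstance(val, (int, float)):  # defensive: wrong type
        return default
    return float(val)

# The caller never needs its own None handling:
# get_kpi_val({"energyKpi": 42}, "energyKpi")   -> 42.0
# get_kpi_val(None, "energyKpi", default=0.0)   -> 0.0
```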


3. Structured Sequence With a Hidden Fracture — gaEvotech_site_pageData

A large orchestrator that builds a complete dashboard data structure. Reads multiple sources, transforms them, assembles the output.

Mode: Architected Urgency (Climactic + Precision).

Most of the function is clean: clear sequence, named steps, good structure. But buried in the middle is one indeterminate moment — a read that might return null at a decision point, handled inconsistently with the rest of the function.

Risk texture: The Precision makes the function look safe. The one indeterminate gap in the middle of the Climactic sequence is where the actual bug lives.
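
The fracture pattern is worth seeing in miniature. This Python sketch is hypothetical (the `read` callable and key names are invented); it shows an orchestrator that guards every read consistently except one:

```python
def page_data(read):
    """Assemble dashboard data. `read` is a hypothetical data-source
    callable that returns a dict/list or None when a record is missing."""
    site = read("site")
    if site is None:                  # guarded, consistent with the rest
        return {}
    meters = read("meters") or []     # guarded

    weather = read("weather")         # the one unguarded read:
    tz = weather["tz"]                # TypeError when read() returns None

    return {"site": site, "meters": meters, "tz": tz}
```

Every guarded line trains the reviewer to expect guards; the unguarded line then reads as safe by association. That is exactly how the Precision scaffolding hides the gap.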


4. Deceptive Helper — gaEvotech_avgIaq

A helper that looks clean through most of its body. But at the very end, it seals a data contract incorrectly — returning something that looks right but propagates a wrong assumption downstream.

Mode: Prepared Recognition with Tail-End Failure.

The function passed every early review because the structure is sound and the intent is clear. The failure is at the last line. Reviewers read to the bottom and don’t interrogate the contract being sealed.

Lesson: Trace the entire function, not just the structure. Read the return statement with as much skepticism as the entry conditions.
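
The actual function isn't reproduced here, but the failure shape has a clean hypothetical analogue in Python (names and the averaging domain are invented for illustration):

```python
from typing import List, Optional

def avg_iaq(readings: List[Optional[float]]) -> float:
    """Average indoor-air-quality score across readings.

    Everything above the return is sound: filtering, the empty-list
    guard, the accumulation. The bug is sealed on the last line.
    """
    vals = [r for r in readings if r is not None]
    if not vals:
        return 0.0
    total = sum(vals)
    # Tail-end failure: callers expect an average, this returns the total.
    return total          # should be: total / len(vals)
```

A reviewer who verified the guards and the comprehension has already spent their skepticism by the time they reach the return, which is exactly where it was needed.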


5. The Anti-Pattern — gaEvotech_loadLobbyRecs

This is Geometric Chaos: structured intent, execution failure.

// What this function actually does (simplified):

inFiles : getFromFiles(recType, r)  // Returns Grid OR Str (error message)
if (inFiles.isNull) return null
inFiles = inFiles.first.stripUncommittable({-id})  // Assumes Grid, crashes if Str

// Later...
if (inFolio != null)
  if (opts.missing("update"))
    return null
else do
  logInfo("Site Init - Updating: ...")  // But we're CREATING, not updating!
  return diff(null, view, {add}).commit  // Wrong variable! Should be inFiles, not view
end

Three separate issues, each hidden by the structured appearance of the code:

  1. Variable shadowing with type mismatch. inFiles starts as Grid | Str but gets reassigned assuming Grid. If the first assignment returns an error Str, the second crashes.

  2. Wrong variable in critical commit. In diff(null, view, {add}), view is a list defined at function scope, not the record to commit. Should be inFiles. The bug survives review because the surrounding code looks structured.

  3. Inverted conditional with inverted log. The branch that fires when a record doesn’t exist (create path) logs “Updating:” — the message for when it does exist. The logic creates when it should update and vice versa.

Mode: Geometric Chaos. The precision of the scaffolding made each bug harder to see.
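
For contrast, here is what the three fixes look like together, as a hypothetical Python analogue (names invented; the Grid/commit machinery replaced with plain types for illustration):

```python
from typing import Callable, List, Union

def load_lobby_recs(get_from_files: Callable[[], Union[list, str]],
                    exists: bool, opts: dict, commit: Callable):
    """Corrected analogue of the anti-pattern above."""
    result = get_from_files()
    # Fix 1: handle the error branch of the union type before treating
    # the result as a record list.
    if isinstance(result, str):
        raise ValueError(f"file load failed: {result}")
    rec = result[0]

    if exists:
        # Fix 3: the record exists, so this is the UPDATE path.
        if "update" not in opts:
            return None               # update not requested
        print("Site Init - Updating:", rec)
        return commit("update", rec)
    else:
        # Fix 3 (cont.): the record doesn't exist, so CREATE it,
        # and log the action actually being taken.
        print("Site Init - Creating:", rec)
        # Fix 2: commit the record we loaded, not an unrelated variable.
        return commit("add", rec)
```

Each fix is one line of awareness: check the union, name the right variable, match the log to the branch. None of them requires restructuring; they require noticing.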


What This Buys You In Practice

If you can read consciousness mode, you know where to look:

  • Moment code: Check the math. Don’t overthink the structure.
  • Climactic code: Follow the commit path. Trace the variable through every reassignment.
  • Discernment code: Convert every conditional to plain English. If you can’t say it in a sentence, the condition is wrong.
  • Indeterminate code: Cross-reference every variable. Name its definition. Verify its scope.
  • Geometric Chaos: The most structured-looking section is the highest-risk section. The tidiness is the tell.

The framework doesn’t replace reading the code carefully. It tells you where to be most careful.
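
The per-mode checks are mechanical enough to keep as data. A small Python sketch of a reviewer's aide-mémoire (the mapping is just the list above restated; nothing here is prescribed by the framework itself):

```python
# First thing to check, keyed by consciousness mode.
REVIEW_FOCUS = {
    "Moment":          "Check the math; don't overthink the structure.",
    "Climactic":       "Follow the commit path; trace every reassignment.",
    "Discernment":     "Say each conditional in plain English.",
    "Indeterminate":   "Cross-reference every variable's definition and scope.",
    "Geometric Chaos": "Scrutinize the most structured-looking section first.",
}

def review_focus(mode: str) -> str:
    """Return the first check for a given mode, with a safe fallback."""
    return REVIEW_FOCUS.get(mode, "Unknown mode: read everything carefully.")
```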


The Prediction Test

Before publishing this, I ran a prediction test.

I identified the consciousness mode of five functions in the corpus before doing the detailed analysis. Then I verified whether the predicted risk texture matched what was actually in the code.

5/5 predictions confirmed. The modes predicted not just that there would be risk, but what kind of risk and where it would live.

That’s not proof the framework is complete. But it’s evidence it’s pointing at something real.


The Next Extension

One function revealed a gap in the framework: a pattern I didn’t have a mode for.

Code that is correct only when input data satisfies an unstated invariant — an invariant that lives in a separate registry, or in another function’s output, or in tribal knowledge about how the data was constructed.

The code branches on a tag from its input without validating the tag’s presence. If the data was created correctly elsewhere, the branch works. If not, it silently takes the wrong path.

This isn’t Indeterminate (which is about rule-breaking structure). It’s something specific: code-data coupling where the coupling is invisible at the review site.

I’m calling it Implicit Contract, and it’s a candidate for atlas v1.1.
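
A minimal Python sketch of the Implicit Contract pattern and its explicit counterpart (all names invented; the "tag" is just a dict key here):

```python
def route_point(point: dict) -> str:
    """Branch on a tag whose presence is guaranteed only by how the
    data was built elsewhere. When the invariant holds, this works;
    when it doesn't, the function silently takes the wrong path."""
    if point.get("kind") == "sensor":
        return "poll"
    return "skip"          # a point missing 'kind' silently lands here

def route_point_checked(point: dict) -> str:
    """The same branch with the contract made visible at the review site."""
    if "kind" not in point:
        raise KeyError("point record missing required 'kind' tag")
    return "poll" if point["kind"] == "sensor" else "skip"
```

The first version is what the review sees; the second is what the review should be able to demand. The difference is whether the data invariant is enforced where the branch lives or assumed from somewhere else.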


The atlas is at: notes/code-consciousness-review-atlas/ — full anchor functions with mode breakdowns, a heuristics guide with decision tree, and the five risk patterns with fixes.

This is applied consciousness framework work. The theory got useful.