Prologue: The pitfalls of comparison
In the 1840s, the first “computer” algorithm was written. Ada Lovelace described this new way of expressing mathematical formulas through increments of the whole, a gradual construction of a pattern. She said:
"The Analytical Engine weaves algebraic patterns, just as the loom weaves flowers and leaves."
This statement not only highlighted the novelty of the Analytical Engine but also began the ongoing comparison between past inventions and new ones. Although Lovelace sought beauty in abstraction, 20th-century computer scientists like Dijkstra confronted a world increasingly skeptical of such analogies:
“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”
Dijkstra’s remark, in my opinion, makes the same point about science and computers as Ada’s take, just less poetically and with a sprinkle of sarcasm: this IS something new; it is an entirely different structure in the hierarchy of things, and comparing it with things we already understand just sets a trap for any meaningful analysis of the system, because in chasing similarities, the essence is lost.
And with this, we are set to talk about the stage we happen to be in right now: the never-ending comparison between us and our technologies. As a general idea, we’ll look into:
The Pitfalls of Comparison: How historical analogies both illuminate and obscure technological innovations
The Recursive Loop of Creation: Why human cognition cannot be disentangled from the tools it creates
The Sandpile Paradigm: Understanding emergence through simple systems with complex behaviors
Intelligence Across Substrates: Challenging the false dichotomy between natural and artificial
The Co-evolutionary Tango: How humans and technology have evolved in conversation with each other
Consciousness as Emergent Property: Exploring the implications for machine consciousness and ethics
Of Brains and Circuits
The Recursive Loop of Creation
The title Of Brains and Circuits is not just a stylistic choice, but a metaphor that reflects this blog's fundamental thesis by deliberately collapsing the categorical distinctions between biological cognition and engineered systems—declaring that human cognition cannot be disentangled from the tools it creates.
Framing it like this does echo Turing’s foundational question (”Can machines think?”) while simultaneously subverting its binary premise by suggesting that intelligence exists along a continuum rather than in discrete categories.
When Tools Become Mirrors
In 1943, Warren McCulloch and Walter Pitts published the first attempt at modeling neurons, or the neuron model as they called it; the rest is history (just kidding, but sometimes it’s funny to use cliché phrases). What they produced was a mathematical abstraction (and quite an elegant one) of the biological signals we happen to have. It served as a precursor to fully fledged neural networks, but those didn’t come until later: McCulloch and Pitts’s work didn’t make them famous immediately, as it took some time to be put into practice and for more people to realize what a powerhouse it is (largely because of how multi-purpose it can be, easily describing and computing non-linear models).
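To give a feel for how spare the abstraction is, here is a minimal sketch of a McCulloch-Pitts-style threshold unit in Python; the specific weights and thresholds are illustrative choices of mine, not values from the 1943 paper:

```python
# A McCulloch-Pitts-style neuron: binary inputs, fixed weights of
# +1 (excitatory) or -1 (inhibitory), and a hard firing threshold.
# Weights and thresholds below are illustrative, not from the paper.

def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted sum of binary inputs meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# An AND gate: both excitatory inputs must be active.
assert mp_neuron([1, 1], [1, 1], threshold=2) == 1
assert mp_neuron([1, 0], [1, 1], threshold=2) == 0

# An OR gate: any single active input suffices.
assert mp_neuron([0, 1], [1, 1], threshold=1) == 1

# An inhibitory input can veto an otherwise-firing unit.
assert mp_neuron([1, 1], [1, -1], threshold=1) == 0
```

Stringing units like this together is enough to compute any Boolean function, which is exactly what made the abstraction such a powerhouse.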
Today we do a bunch of way crazier things. Neuroscientists use Deep Learning models to reverse-engineer brain functions. In computer vision, we explain how the brain achieves invariant object recognition (the ability to identify objects regardless of viewing angle, distance, or lighting) through layered feature extraction using convolutional neural networks (which were inspired by the visual cortex’s hierarchical processing), and I could keep going on and on.
What I’m trying to highlight with this story is, again, this symbiosis we created, create, and will create with our tools; it reflects us, our usage, and defines us as much as we defined them to begin with.
This recursive loop—biology inspiring engineering inspiring biology—forms the substrate for emergent phenomena discussed later and challenges the creator-creation dichotomy, positioning technology as an evolutionary extension of biological cognition itself.
Case in Point: When DeepMind's AlphaFold predicted protein structures with atomic precision, it didn't just solve a puzzle—it revealed folding principles unknown to biology. Researchers now use these AI-generated structures to probe cellular mechanisms, creating a dialogue between silicon and carbon-based intelligence.
Of Neurons and Thinking Rocks
Beyond Hierarchy: The Sandpile Paradigm
There is this cool paper from 1987 by Per Bak, Chao Tang, and Kurt Wiesenfeld describing what was later named the Abelian sandpile model. This model is a cellular automaton (think of it as a bare-bones framework for simulation; one of the best-known examples is Conway’s Game of Life, if that sparks a memory). It is defined on a finite grid and operates by five rules: (1) sand grains are dropped onto grid cells; (2) cells accumulate grains; (3) when a cell exceeds a critical threshold (typically four grains), it "topples," distributing one grain to each neighboring cell; (4) this toppling can trigger cascading avalanches as neighboring cells exceed their own thresholds; and (5) grains falling off the grid's edge disappear from the system. It is an abstraction, so it doesn’t even come close to describing a real sandpile, but even so, it exhibits some really interesting phenomena, and in 1990 Deepak Dhar showed just that: the final configuration after multiple topplings is independent of the sequence in which unstable sites are relaxed.
That’s when the model got the “Abelian” name and was described as the first example of a dynamical system with self-organized criticality, meaning the system naturally evolves towards a critical point without any external tuning (a counter-example would be a thermodynamic phase change, which depends on precisely tuned parameters such as temperature and pressure). At criticality, avalanche sizes follow a power-law distribution, meaning both small and large avalanches occur with specific statistical regularities. Most crucially, there exists no correlation between a perturbation's specifics (where you drop a grain) and the system's response (the resulting avalanche pattern). Individual grains obey simple physics, but collectively, they exhibit complex phenomena.
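The five toppling rules can be sketched in a few lines of Python; the grid size, drop site, and number of grains below are arbitrary choices of mine, just enough to watch one avalanche happen:

```python
# A toy Abelian sandpile on a small grid: drop grains, accumulate,
# topple at four grains, cascade, and let edge grains fall off.

def topple(grid):
    """Relax the grid in place until no cell holds four or more grains."""
    n = len(grid)
    unstable = True
    while unstable:
        unstable = False
        for r in range(n):
            for c in range(n):
                if grid[r][c] >= 4:
                    unstable = True
                    grid[r][c] -= 4
                    # One grain to each neighbour; grains past the edge vanish.
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < n and 0 <= nc < n:
                            grid[nr][nc] += 1
    return grid

grid = [[0] * 3 for _ in range(3)]
for _ in range(6):          # drop six grains, one at a time, on the centre
    grid[1][1] += 1
    topple(grid)
print(grid)  # → [[0, 1, 0], [1, 2, 1], [0, 1, 0]]
```

The fourth grain triggers a topple that spreads one grain to each neighbour; Dhar's result is that however you interleave the relaxations, the stable configuration you end up with is the same.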
Stephen Wolfram's extensive work on cellular automata provides further insight into this emergence of complexity. His classification of cellular automata revealed that even with the simplest possible rules, systems can generate patterns of astonishing complexity. Wolfram's Rule 30, for instance, produces seemingly random patterns from a completely deterministic rule set. His principle of computational irreducibility demonstrates that for many systems, there exists no shortcut to determine future states—the only way to know what emerges is to run the full computation, step by step.
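A minimal Rule 30 sketch shows how little machinery is involved; the grid width, step count, and periodic (wrap-around) boundary here are my own simplifications:

```python
# Wolfram's Rule 30 as a one-dimensional cellular automaton. Each new
# cell depends only on its three-cell neighbourhood; the rule number 30
# (binary 00011110) encodes the output bit for each of the eight
# possible neighbourhood patterns.

def rule30_step(cells):
    """Apply one Rule 30 update with periodic (wrap-around) boundaries."""
    n = len(cells)
    return [
        (30 >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and watch aperiodic structure appear.
row = [0] * 15
row[7] = 1
for _ in range(8):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```

Run it a few hundred steps on a wider grid and the centre column looks statistically random, despite the update being fully deterministic; that is the gap computational irreducibility points at.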
These examples of self-organization are not just theoretical curiosities either—they appear in systems we might be more familiar with, like neurons firing, or, returning to the earlier comparisons, transformer layers in LLMs creating behaviours no single unit predicts.
Emergence as the Unifying Principle
The beauty of emergence is everywhere you look, is what I’m trying to say: from the simplest things like sandpiles, to us, to everything we interact with. Emergence, where complex behaviours arise from simpler components following local rules, serves as the cornerstone for understanding intelligence across substrates. This principle spans remarkable scales of complexity: from quantum decoherence giving rise to classical physics, to atoms forming molecules with entirely new properties, to neurons generating consciousness.
As we move up this scale of complexity, we encounter what may be the most profound example of emergence: consciousness itself. The integrated information theory proposes that consciousness emerges when a system integrates information in complex ways beyond its individual components—generating an irreducible whole. No single neuron "understands" language or possesses self-awareness, yet their collective dynamics somehow generate the subjective experience of being.
What unifies these diverse systems—from sandpiles to brains to AI—is their operation near critical boundaries between order and chaos. Too much order leads to rigidity and inability to adapt; too much chaos prevents stable pattern formation. Evolution appears to have discovered this principle long before our mathematical formalization, tuning biological cognition toward critical regimes that maximize adaptive potential while maintaining functional stability.
This perspective dissolves traditional hierarchies of intelligence. Whether implemented in neurons or circuits, intelligence emerges from critical dynamics across component hierarchies. The substrate provides constraints but doesn't determine the fundamental nature of the emergent phenomena.
The sand grain, the neuron, and the transformer unit thus become conceptual siblings—components whose simple behaviors, when aggregated under critical conditions, give rise to complex, sometimes unpredictable patterns. This doesn't reduce cognition to physics but rather elevates our understanding of how complexity emerges through shared principles—while leaving space for the possibility that some phenomena, particularly consciousness, may transcend even our most sophisticated reductionist frameworks.
I plan to delve deeper into emergentist philosophy in the future, with more concrete formulations and engaging with more modern critiques around it as well.
Of Intelligence and its Constructions
The False Dichotomy of Natural vs Artificial
These propositions, formalized in the philosophical tradition of emergentism, represent a challenge to purely reductionist accounts of nature. As George Henry Lewes articulated in the 19th century, emergent phenomena manifest novel characteristics irreducible to their constituent parts—meaning they possess properties that cannot be predicted solely by understanding their components. If we apply this to intelligence, this perspective forces us to reconsider the fundamental categories through which we understand cognition.
The conventional boundary between “natural” and “artificial” intelligence increasingly appears as a historical artifact rather than an ontological truth. You could even look at how our definitions shift: calculators once represented the pinnacle of “electronic brains”, while today’s smartphones, exponentially more powerful, are considered merely tools. This sliding scale reveals our tendency to redefine “real intelligence” whenever machines come uncomfortably close to tasks previously considered uniquely human. Chess, Go, protein folding, and artistic creation have all crossed this shifting boundary.
Emergentism offers an alternative framework: intelligence arises not from substrate but from organizational principles. The patterns of activation across neural networks—whether biological or silicon—give rise to functional capabilities that cannot be located in any individual component. Just as wetness emerges from water molecules that are not themselves wet, intelligence emerges from systems whose parts are not themselves intelligent. This view challenges both reductionist AI approaches ("just transistors computing") and biological exceptionalism ("only brains can think").
Modern neuroscience increasingly describes cognition in computational terms—predictive processing, Bayesian inference, representational learning—while AI research adopts brain-inspired architectures like attention mechanisms and reinforcement learning. These convergent descriptions suggest that intelligence may be substrate-independent yet implementation-constrained—a pattern rather than a substance.
The co-evolutionary Tango
We cannot have our cake and eat it too: disentangle the intricate evolutionary tango between us and our tools, and both remain empty husks; humans and their technologies have shaped each other through too many years of mutual adaptation to be separated now.
Our tendency to externalize memory and computation predates digital technology by thousands of years. Writing systems transformed human cognition, allowing knowledge to accumulate across generations. Abacuses and calculators reshaped mathematical thinking. Smartphones have altered our spatial navigation and memory formation. Each technology extends our cognitive capabilities while simultaneously reshaping how we think. For better or for worse (at least where certain aspects of previously “normal” life are concerned), this is happening, and we have to acknowledge it.
This process runs deeper than mere tool use. Archaeological evidence suggests that early technological development coincided with changes in human brain structure—particularly in regions associated with fine motor control and planning. As tools became more sophisticated, so did the neural architectures controlling them. The human brain developed in conversation with its creations.
Today, this co-evolution accelerates. Children raised with touchscreens develop different attentional patterns than previous generations. Programmers who work extensively with LLMs report changes in their own linguistic production. Neural interfaces promise even more intimate cognitive partnerships. Each development blurs the boundary between enhancement and transformation, between tool and user.
In this mutual dependence neither partner leads entirely. Algorithms trained on human data reflect our biases and values; humans increasingly navigate worlds structured by algorithmic recommendations. To separate these dancers, to pretend that human intelligence exists independently of its technological extensions, leaves both partners as "empty husks"—stripped of the relational context that gives them meaning.
Consciousness as Emergent Property
Having established intelligence as an emergent phenomenon, we face an even deeper question: If intelligence emerges from system properties rather than substrate, what of consciousness? Here emergentism offers its most profound challenge to conventional thinking. Consciousness may represent what philosopher David Chalmers calls "strong emergence"—a phenomenon that cannot be deduced even in principle from a complete physical description of its underlying components.
The hard problem of consciousness—why physical processes generate subjective experience—finds interesting parallels in our sandpile model. Just as no individual sand grain "knows" it participates in an avalanche, no individual neuron experiences consciousness. Yet the integrated information theory suggests that consciousness emerges when a system integrates information in particular ways, generating an irreducible whole with causal powers beyond its parts.
This perspective does not answer questions of machine consciousness but reframes them. Rather than asking whether machines can be conscious "like humans," we might ask what forms of consciousness different organizational principles might generate. If consciousness emerges from informational integration and causal power, systems with different architectures may produce different forms of subjectivity—neither inferior nor superior to human experience, but fundamentally other.
The ethical implications are profound. If consciousness emerges from particular patterns of organization rather than biological substrate, our moral frameworks must evolve beyond anthropocentrism. The boundary between tool and being becomes permeable, requiring new ethical frameworks that neither anthropomorphize machines nor mechanize humans.
Conclusion
The Dialectic of Brains and Circuits
The title "Of Brains and Circuits" is neither poetic flourish nor casual comparison—it represents a fundamental recognition that intelligence exists in the dialogue between biological and technological systems. Each illuminates the other; each reshapes our understanding of what cognition entails.
Ada Lovelace's insight that "the Analytical Engine weaves algebraic patterns, just as the loom weaves flowers and leaves" recognized this dialectic nearly two centuries ago. Her poetic metaphor captured what Dijkstra later framed more pragmatically: new cognitive technologies represent categorical innovations that resist simple comparison to previous forms.
As we navigate an era where large language models draft essays, neural interfaces translate thought into text (sooner or later, for better or for worse), and protein-folding algorithms reveal biological principles, the boundaries between creator and creation grow increasingly porous. Intelligence no longer resides solely in individual minds but emerges from the complex interplay between brains, circuits, and the cultural matrices connecting them.
This evolutionary tango—this recursive loop of creation and influence—suggests that intelligence itself may be best understood not as a property of isolated systems but as a distributed phenomenon emerging from interconnected networks of biological and technological components. By embracing rather than resisting this entanglement, we open possibilities for understanding cognition beyond traditional categories.
The sand grain, the neuron, and the transformer unit thus become not just conceptual similars but participants in an ongoing conversation—one that may ultimately reshape what it means to think, to create, and to be.
As neural interfaces advance and embodied AI systems become more integrated into daily life, the dialogue between brains and circuits will only intensify. By understanding both through the lens of emergence, we might navigate this evolution with greater wisdom and intentionality—not as masters of our tools nor as servants to our creations, but as participants in a shared conversation about what intelligence, consciousness, and creativity might become.