Sharing our experimental call summaries: AI-generated digests for weekly Yak Collective live study groups. A step forward in our exploration of human-machine collaborative cognition and of Yak Oracle and Yak Memory systems for collective intelligence capabilities. This week: SeaVoice transcript to Gemini.
Call Overview
Date: Mon Jul 07 2025
Duration: Approximately 27 minutes
Reading: Active Inference: The Free Energy Principle in Mind, Brain, and Behavior, Thomas Parr, Giovanni Pezzulo, Karl J. Friston
https://direct.mit.edu/books/oa-monograph/5299/Active-InferenceThe-Free-Energy-Principle-in-Mind and https://doi.org/10.7551/mitpress/12441.001.0001
Participants: Ananth, Ben Mahala, Sachin Benny, Jenna Dixon, Venkatesh Rao
Main topics discussed:
Discussion of a book on "active inference" or the "free energy principle"
The concept of minimizing surprise as a core principle of intelligence and brain function
Reconciling the theory with other ideas and real-world phenomena (e.g., AI, physics, financial world)
The "low and high road" language in the context of the theory
The usefulness and potential overreach of such unifying theories
Practical implications for AI and robotics
Detailed Discussion Summary
Discussion on the Book and the "Free Energy Principle" / "Active Inference"
Ananth started by discussing his reading of the book, specifically the overview chapters, which he found interesting.
He highlighted the idea of intelligence as "action-perception cycles" and the concept of minimizing free energy as minimizing surprise in the environment, with all action-perception cycles grounded in that. He found this idea "simple and elegant".
Ananth noted that planning is also framed as a type of inference within this framework and can be formalized using the same method.
He observed that this book seems to be gaining traction in "AI circles" and found the emphasis on "inference" particularly useful in the current generative AI phase.
Ananth drew an analogy between this principle and physics ideas like taking the path of least resistance, even in electrical circuits. He mentioned that the book includes chapters at the end on the mathematics used to describe these concepts, which he hopes to spend more time on.
He also connected the idea that "all actions or all perceptions" serve "to reduce the expected future surprise" to financial concepts like "discounted cash flows and expected future returns in the financial world," suggesting an "interrelated idea there". The goal of minimizing surprise aligns with maximizing "future returns".
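For readers who want the formal version of "minimizing free energy minimizes surprise," here is the standard variational identity behind the book's framework (notation mine, not quoted from the call):

```latex
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  = \underbrace{D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big]}_{\ge\, 0}
    \;\underbrace{-\ln p(o)}_{\text{surprise}}
```

Because the KL divergence is non-negative, free energy $F$ is an upper bound on the surprise $-\ln p(o)$. Minimizing $F$ over beliefs $q(s)$ (perception) or over actions that change the observations $o$ (action) therefore minimizes a bound on surprise, which is the sense in which the action-perception cycles are "grounded on that".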
Ben Mahala (EST) acknowledged having heard of the concept of "minimizing surprise" but didn't realize it was called "active inference, or the free energy principle".
He referenced Slime Mold Time Mold's series of posts about "the mind in the wheel," which presents "a psychological theory of... mind, or at least of emotions that looks at things as though they're control systems, so they're like thermostats trying to push a variable to a known or static thing".
Ben stated that both the "minimizing surprise principle" and the control systems idea are "interesting" and "useful".
However, he expressed suspicion about the tendency of these theories to "overreach" and claim "everything is X". He believes that what we are is "a large mash of various things... some of which may be well modeled as X".
He concluded that "all these theories are false. Some of them are useful. I think all of them have their uses".
Ben raised the point that while the theory suggests minimizing surprise, people also "seek out surprise" and that there's "a cycle here of learning where you're surprised and then you become unsurprised over time, and that this is sort of how you grow".
Jenna Dixon described the theory as "the obvious, almost no-brainer" and linked it to the concept of "fox versus hedgehog" thinking, where she feels more foxy and views the math-heavy approach of this theory as hedgehog stuff.
She found it interesting that people are trying to "map with math the secrets of how the brain does what it does".
Jenna provided two specific examples supporting the "predictive" nature of the brain:
Saccades in reading: The eye makes predictive movements across a line of text, stopping only a few times per line. Speaking as a book designer, she explained that long line widths are hard because saccades don't work as well "once your line of text goes on and on and on," and that proofreading tricks (reading backwards, or two people proofreading together) interrupt this predictive pattern.
Tai Chi: She explained that Tai Chi's "108 Move form" is "a choreographed fight," where movements are based on predicted responses and then adapting to what comes next. The slow practice helps the nervous system by catching "oh, you do this weird thing every time... and maybe you can change that," which is a way of getting out of autopilot. She also invoked "there is no try" from Star Wars canon.
She also mentioned sports metaphors ("everybody knows to play a sport, you can't be in your brain"), "the inner game of tennis", and surfers as relevant metaphors. She noted that "wu wei" from early Taoism suggests that "the early Taoists intuited exactly what this paper is going at".
Jenna quoted from Chapter Five on neurology: "plants don't need nervous systems because they don't move; anything that moves actively requires a nervous system, otherwise it would come to a quick death". She connected this to Tai Chi training the nervous system and having "smart tendons and a fascia that knows how to do stuff".
She questioned the meaning of the "high and low road" language in the theory, stating that Claude (an AI) and she "agree that this... doesn't actually make sense".
Sachin Benny (US Central)'s understanding is that "active inference means that any system needs to reduce surprise in order to be viable and survive". He stated that this is "done by reducing free energy", and that free energy is essentially a measure of "the changes that surprise causes to the system".
He also found the Markov blanket part interesting, as "something that separates the system from [the] environment".
Sachin discussed the role of attention and memory in active inference:
Attention: Reduces surprise by "increasing precision... [of] the information... you are sensing".
Memory: Helps "set up your mind so that... you can attend to any future surprises, because you have already committed certain things to memory," as when learning something new like driving. He noted that "most things that you encounter while driving" are not surprising as a result.
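Sachin's pairing of attention-as-precision and memory-as-committed-prediction can be sketched as a toy precision-weighted belief update. This is a minimal illustration under Gaussian assumptions, not code from the book; the function names and numbers are hypothetical:

```python
import math

def update_belief(prior_mean, prior_precision, obs, obs_precision):
    """Precision-weighted belief update under Gaussian assumptions:
    the posterior mean is a precision-weighted average of the prior
    prediction and the observation. Higher observation precision
    ("attention") pulls the belief harder toward the data."""
    post_precision = prior_precision + obs_precision
    post_mean = (prior_precision * prior_mean + obs_precision * obs) / post_precision
    return post_mean, post_precision

def surprise(obs, mean, precision):
    """Surprise = -ln p(obs) under a Gaussian belief N(mean, 1/precision)."""
    variance = 1.0 / precision
    return 0.5 * (math.log(2 * math.pi * variance) + (obs - mean) ** 2 / variance)

# The car ahead brakes (obs = 1.0) while our prior predicted no braking (0.0).
attended, _ = update_belief(0.0, 1.0, 1.0, obs_precision=4.0)    # high attention
unattended, _ = update_belief(0.0, 1.0, 1.0, obs_precision=1.0)  # low attention
# The attended belief lands closer to what actually happened, so the
# same event is less surprising the next time it is encountered.
```

The memory point falls out of the same update: an experienced driver's committed predictions amount to a sharper prior, so most observations on the road arrive close to the prior mean and carry little surprise.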
He connected this to Marshall McLuhan's idea that "every technology is an extension of man," and specifically an extension of man's senses. Sachin suggested that in driving, "the man and the machine" can be considered "to be together in some way," where the car itself (with features like ABS) handles feedback loops outside of human control to reduce surprise.
He concluded that "a lot of technology is essentially [taking] something you have learned... and encod[ing] that so that you don't have to think about it anymore".
Sachin mentioned that while the theory "feels a bit too neat to easily believe", if true, it supports the idea that "LLMs are stochastic parrots" because "everything is... pattern, I guess, including people".
Venkatesh Rao agreed that the theory "feels like a formalization of an obvious intuition," but emphasized its importance because "often, when you try to formalize an obvious intuition you find hidden bombs in the idea that are not obvious at all".
He clarified Jenna's question about the "low and high road" language, stating that the "high road part is not actually necessary" for many applications of predictive processing (e.g., generative AI, building models of the brain, predicting how humans will behave in practice, or Lisa Feldman Barrett's theory of emotion). The "free energy principle is almost like an extra epiphenomenon that pops out".
Venkatesh explained that such minimization principles can lead to "a causality problem," citing the "principle of least action" in physics (how a ray of light "know[s] that this is the shortest path without actually having gone there"). He suggested that thinking about these principles is useful.
He acknowledged a "growing cult around this in AI and neuroscience" but asserted it's "not an empty cult" and that there is something worth thinking about here.
Regarding the mathematical examples, Venkatesh hasn't "dived too deep" but believes there are clues of "something much more fundamental going on, and it might be more surprising than we expect". He compared it to the historical development of classical physics (going from Newton's laws to Lagrangians to Hamiltonians), which "revealed layers of the onion that people kind of didn't expect, and sometimes there's like big surprises along the way".
He predicted "really weird and clever ways to build robots, among other things, if you're looking for practical implications," and philosophically, he thinks there's more going on here than one might suspect.
Countering Jenna's "hedgehoggy" assessment of the math, Venkatesh suggested that "Hedgehog versus Fox is kind of like a recursive thing," and the math in this theory is "actually more foxy than it looks".
Yak Collective Discord chat:
https://discord.com/channels/692111190851059762/1391602811233374249/1391603338415702176