12 Comments
James Giammona:

Your scale-free agency sounds a lot like properties that hold under renormalization.

Eric Smith has applied these ideas to game theory among populations of agents and derived how their effective games change at different scales.

Check it out here and happy to discuss more!

https://iopscience.iop.org/book/mono/978-0-7503-1137-3

(There is also a condensed working paper on the same topic here: https://www.santafe.edu/research/results/working-papers/symmetry-and-collective-fluctuations-in-evolutiona )

I’ve been trying to figure out how to apply these ideas to RL algorithms.

James Giammona:

I especially like how he models the evolution of agents playing iterated prisoner's dilemma and demonstrates, if I recall correctly, convergence to a mostly cooperating population with a subpopulation that plays tit-for-tat or otherwise enforces costs on defectors.
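
For anyone who wants the flavor of that result without reading the book, here is a minimal toy sketch (my own, not Smith's actual model): replicator dynamics over always-cooperate, always-defect, and tit-for-tat in a 20-round iterated prisoner's dilemma. Defectors thrive early by exploiting unconditional cooperators, then collapse once tit-for-tat makes defection unprofitable.

```python
import numpy as np

# One-round prisoner's dilemma payoffs: T=5, R=3, P=1, S=0.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}
N_ROUNDS = 20

# Strategies: each maps the opponent's move history to a move.
STRATEGIES = {
    'AllC': lambda opp: 'C',
    'AllD': lambda opp: 'D',
    'TFT':  lambda opp: opp[-1] if opp else 'C',  # copy opponent's last move
}
NAMES = list(STRATEGIES)

def iterated_payoff(row, col):
    """Total payoff to `row` against `col` over N_ROUNDS rounds."""
    row_hist, col_hist, total = [], [], 0
    for _ in range(N_ROUNDS):
        m = STRATEGIES[row](col_hist)
        o = STRATEGIES[col](row_hist)
        total += PAYOFF[(m, o)]
        row_hist.append(m)
        col_hist.append(o)
    return total

A = np.array([[iterated_payoff(r, c) for c in NAMES] for r in NAMES])

x = np.array([0.3, 0.4, 0.3])  # population shares: AllC, AllD, TFT
for _ in range(500):
    fitness = A @ x
    x = x * fitness / (x @ fitness)  # discrete-time replicator update

print(dict(zip(NAMES, x.round(3))))  # AllD dies out; AllC + TFT remain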

mtraven:

Are you familiar with the work of George Ainslie? A psychologist specializing in addiction, he had some really deep ideas about what it means for minds to be composed of conflicting coalitions of goals, grounded (sort of) in economic utility; he introduced the term "picoeconomics" to describe internal mental dynamics. Introduction here: https://www.ribbonfarm.com/2013/10/18/the-government-within/

Paul Sas:

Really interesting.

My only comment is to contrast this modeling approach with the thinking in Marvin Minsky's The Society of Mind.

As excited as Marvin was about subagents (daemons, back then), he neglected to incorporate any game theory. Pretty weird when you recall that Minsky went to grad school with John Nash. For whatever reason, The Society of Mind completely neglected markets, auctions, and decentralized mechanisms for coordination. Maybe it's totally forgotten, left in the scrap heap of GOFAI.

Rafael Kaufmann:

I'll probably have some more comments on this, but for now let me just say that Scott's "geometric rationality" is more or less well-known in economics, the idea having been developed most recently under the umbrella of "Ergodicity Economics" by Ole Peters, Alex Adamou and others. See for instance this rather widely read Nature paper from 2019: https://www.nature.com/articles/s41567-019-0732-0
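
The canonical toy example from that literature (my own sketch, not code from the paper): a gamble that multiplies your wealth by 1.5 on heads and 0.6 on tails. The ensemble average says take it; the time-average growth rate, which is what geometric rationality tracks, says don't.

```python
import numpy as np

rng = np.random.default_rng(0)
up, down = 1.5, 0.6  # wealth multipliers for heads / tails

# Ensemble (arithmetic) expectation per flip: looks favorable.
ensemble_avg = 0.5 * up + 0.5 * down                      # 1.05

# Time-average (geometric) growth: what a single agent experiences.
time_avg = np.exp(0.5 * np.log(up) + 0.5 * np.log(down))  # ~0.949

# Simulate one agent over 10,000 flips (in log space to avoid underflow).
flips = rng.random(10_000) < 0.5
realized = np.exp(np.mean(np.log(np.where(flips, up, down))))

print(ensemble_avg, time_avg, realized)  # realized growth ~ time_avg < 1
```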

Roger Ison:

I don't see why Arrow's Impossibility Theorem is a meaningful concern here. If a new choice or alternative is added and the outcome changes, so be it. So long as every participant agrees that the *method* is legitimate and executed faithfully, Arrow would seem to be irrelevant.
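
To make the concession concrete, here's a toy Borda-count run (my own example, not from the post) where adding a third option flips the winner even though nobody's preference between A and B changes. That's exactly the independence-of-irrelevant-alternatives failure Arrow guarantees, and on my view it's tolerable so long as everyone accepts the method itself.

```python
def borda(profile, candidates):
    """Borda count: a candidate k places from the bottom scores k points."""
    scores = {c: 0 for c in candidates}
    for ranking, n_voters in profile:
        for points, c in enumerate(reversed(ranking)):
            scores[c] += points * n_voters
    return max(scores, key=scores.get), scores

# 3 voters rank A > B, 2 voters rank B > A.
print(borda([(('A', 'B'), 3), (('B', 'A'), 2)], 'AB'))
# -> A wins, 3 to 2

# Same voters, same A-vs-B preferences, with C inserted.
print(borda([(('A', 'B', 'C'), 3), (('B', 'C', 'A'), 2)], 'ABC'))
# -> B wins, 7 to 6: the "irrelevant" alternative changed the outcome
```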

Richard Meadows:

"The core idea of coalitional agency is that we should think of agents as being composed of cooperating and competing subagents; and those subagents as being composed of subsubagents in turn; and so on."

This is definitely true of the only agents we know about so far (living beings). I'm curious how much you've looked into the 'new biology' stuff? The idea being that life is a hierarchy of self-organising modules, all of which have agency: from genes, to proteins, to cells, to tissues, to organs, to organisms. I can't remember whether Philip Ball's 'How Life Works' has much to say about strategic interactions between the various subagents, but that will be front of mind for the reread.

Personally I put money on predictive coding/active inference being the correct structure but I have a couple of big confusions remaining. The first is the Deutschian concern around where new creative thoughts come from. The second is how to reconcile the competing predictions of multiple hierarchies of subagents into one unified output. So I'm glad you're working on this problem!
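
On the second confusion, the textbook predictive-coding move (as I understand it, and with made-up numbers) is precision weighting: each subagent emits a Gaussian prediction, and the unified output is the product of those Gaussians, i.e. an average weighted by inverse variance, so confident subagents dominate.

```python
import numpy as np

# Hypothetical predictions (mean, variance) from three subagents.
predictions = [(2.0, 0.5), (3.0, 2.0), (2.5, 1.0)]

means = np.array([mu for mu, _ in predictions])
precisions = np.array([1.0 / var for _, var in predictions])

# Precision-weighted combination (product of Gaussians).
combined_mean = np.sum(precisions * means) / np.sum(precisions)
combined_var = 1.0 / np.sum(precisions)

print(combined_mean, combined_var)  # ~2.29, ~0.29
```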

Nicolas D Villarreal:

I'm not super familiar with active inference, but fundamentally prediction/identifying correlations is what sign production is about. In this way, we can make sense of political systems more broadly. They usually arise via the use of violence to create a sign, to create a whole symbolic system (incorporating tribute into the ritual of everyday life, for example). If you can make this predictable in a somewhat sustainable way, you make it rather difficult for people to escape from.

I do think this points to a potential problem with active inference: if thinking were just prediction all the way up, where would new thoughts and imagination come from? Or expectations of things we haven't yet experienced? If you're born and raised in a given political system, why would it ever change if it is predictable? Part of the answer is that we're not just predicting all of experience, we are extrapolating specific symbolic systems, sort of in a Chomskyan linguistics sense, but with arbitrary rules, not just those of universal grammar. https://nicolasdvillarreal.substack.com/p/higher-order-signs-hallucination

To go back to your original question, a political system that works is simply one where everyone recognizes that the symbols and processes do what they say they do, and correlates their sign for themselves somewhere within that system as well. That is scale independent, but it's also not necessarily rational, except in the sense that what works has a certain rationality to it. That process of correlating the self to the system is the tricky part, and where incentives come in.

Benjamin Lyons:

I don't know if I'm properly understanding what you're asking in the second paragraph, but you might want to check out Mike Levin's new work on where patterns come from: https://thoughtforms.life/platonic-space-where-cognitive-and-morphological-patterns-come-from-besides-genetics-and-environment/

Martin Petersen:

Very inspiring piece. I enjoyed reading it very much.

Scale-freeness, and deriving agency and intelligence from entities composed of multiple sub-entities, is what my own thinking is all about. I really like the way you go about finding mathematical and parsimonious solutions to this.

My approach to the same question is more roundabout and also very incomplete. I've written a bit about it on my substack: https://substack.com/chat/2860218?utm_source=share&utm_medium=android

Benjamin Lyons:

Have you seen the work of Michael Levin on scale-free cognition? It's also based on the idea that agents are made up of subagents: https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2019.02688/full. Here's a related paper with economics: https://osf.io/preprints/osf/3fdya_v1.

Roman Leventov:

Rafael Kaufmann and collaborators' "Gaia Network" (https://engineeringideas.substack.com/p/gaia-network-an-illustrated-primer) is also a mechanism for steering ActInf agents towards incentive compatibility, it seems. The prototype is developed here: https://github.com/gaia-os/gaia_network_prototype.

Eric Drexler's "Large Knowledge Models" (https://aiprospects.substack.com/p/large-knowledge-models) also looks to me rather similar to Gaia Network, but with a connectionist rather than Bayesian bent.
