7 Comments
Emmett:

From your post on Bayesian epistemology:

“Perhaps the most promising approach to assigning fuzzy truth-values comes from Garrabrant induction, where the "money" earned by individual traders could be interpreted as a metric of fuzzy truth.

However, these traders can strategically interact with each other, making them more like agents than typical models.”

I view your post here about coup logic as being pretty good evidence in favor of the Garrabrant induction approach. It elucidates how strategic interaction can become truth.

Beliefs are controlled by reality, but control reality in turn; any non-agentic model of truth is missing half the flow. FDT halfway solves this, but lacks the theory of identity needed.
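
To gesture at how the wealth-as-truth-value reading could work, here's a deliberately crude toy (my own sketch, not actual Garrabrant induction: these "traders" are fixed credences rather than strategies, there's a single sentence, and nobody interacts):

```python
# Toy sketch of "trader wealth as a fuzzy truth-value". Hugely simplified
# relative to real Garrabrant induction; all parameters are made up.

import random

random.seed(0)

TRUE_FREQ = 0.8  # how often the sentence "checks out" empirically

# Each trader is just a credence it always bets at; wealth starts equal.
traders = {0.1: 1.0, 0.5: 1.0, 0.8: 1.0, 0.95: 1.0}

for step in range(200):
    outcome = random.random() < TRUE_FREQ
    for credence in traders:
        # Kelly-style payoff at even odds: wealth compounds fastest for
        # the trader whose credence best matches the true frequency.
        p = credence if outcome else 1 - credence
        traders[credence] *= 2 * p

# The market's fuzzy truth-value for the sentence: the wealth-weighted
# average of the traders' credences.
total = sum(traders.values())
fuzzy_truth = sum(c * w for c, w in traders.items()) / total
print(f"fuzzy truth-value ~ {fuzzy_truth:.2f}")  # drifts toward 0.8
```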

Richard Ngo:

Yepp, that seems right.

The follow-up to this post will be about how we can think of political factions (and identities, and moral values) as agents that are distributed across many people (an idea which I mention at the end of this post).

I think this helps address the problems with FDT, though I don't have a formal way to talk about distributed agents yet.

Malcolm Ocean:

This is great. The stuff on faith gets me in touch with a sense I've had now and then, and written up e.g. here (https://malcolmocean.com/2021/11/meta-protocol-for-trust-building/): that there are some principles for coordination that can be derived from any starting point (albeit more or less slowly), and that there's nonetheless a feeling of faith in learning to lean on those principles.

Vladimir G. Ivanovic:

I thought that game theory was the way out of the infinite regress of "where everyone is trying to predict everyone else’s predictions about everyone else’s predictions about everyone else’s predictions about everyone else’s… "
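
E.g. in the classic guess-2/3-of-the-average beauty contest, the regress of predicting predictions is just iterating the best-response map, and game theory's way out is the fixed point of that map. A toy sketch (my numbers are arbitrary):

```python
# The "predict the predictors" regress as iterated best response, in the
# guess-2/3-of-the-average game. The regress bottoms out at the fixed
# point of the best-response map: the Nash equilibrium, where predicted
# behavior and actual behavior agree.

def best_response(expected_average: float) -> float:
    """My optimal guess, given what I predict everyone else will guess."""
    return (2 / 3) * expected_average

guess = 50.0  # level-0: a naive guess about everyone else
for level in range(40):  # level-k: predict the level-(k-1) predictors
    guess = best_response(guess)

print(f"iterated reasoning converges to {guess:.6f}")  # ~0, the unique
# Nash equilibrium of the game
```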

Sam Waters:

Read this on LW earlier in the week. Great post!

I very much agree with your observation that, once you grok it, it’s hard not to see this model everywhere. I was chatting with a friend about how Biden’s withdrawal as the Democratic nominee for the presidency can be well described by this model. In the months before he bowed out, there seemed to have been a yawning gap between widespread private expressions of worry by politicos about Biden’s cognitive health and the public position by Democrats that he was sharp as a tack; often this line was parroted by the same people who were privately expressing concerns! Lots of public messaging tried to emphasize that he was the only person who could beat Trump, that he was the best the Democrats had, etc. In retrospect, it’s hard not to view this as an attempt by Biden and the Democratic establishment to make a fact. After the debate and his yielding to Kamala Harris, public discussion of both of these things shifted suddenly and dramatically—a “preference” cascade, perhaps? (It might be that I’m just misreading him, but Kuran seems to use the term “preference” even when what’s actually being falsified is a belief. In the Biden case, it seems like what occurred involved falsification of both beliefs and preferences.) Biden’s abysmal performance during the debate probably allowed private knowledge to be suddenly revealed.
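
The suddenness is what you’d expect from a Kuran/Granovetter-style threshold model: if each insider only voices doubt once enough others already have, then nearly identical distributions of private doubt can yield either total silence or a total cascade. A toy sketch (illustrative numbers, obviously not calibrated to 2024):

```python
# Toy threshold model of a preference cascade, in the spirit of
# Granovetter and Kuran (illustrative numbers only). Everyone privately
# doubts; person i voices it publicly only once at least thresholds[i]
# others already have.

def final_speakers(thresholds: list[int]) -> int:
    """Iterate joining decisions until public opinion stops moving."""
    speaking = 0
    while True:
        joiners = sum(t <= speaking for t in thresholds)
        if joiners == speaking:
            return speaking
        speaking = joiners

# One maverick (threshold 0) plus a smooth ramp: everyone tips over.
print(final_speakers(list(range(100))))              # -> 100

# Same crowd minus anyone with threshold exactly 1: cascade dies at 1.
print(final_speakers([0, 2] + list(range(2, 100))))  # -> 1
```

On that reading, the debate was the event that zeroed out a few people’s thresholds.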

The Biden example might be an over-done one. There are lots more interesting examples. How much of the replication crisis in the social sciences can be explained by the persistence of results widely viewed in private to be false because of social pressure? How much of the re-orientation of tech elites towards Trump in the last few weeks is an example of over-correction after a preference cascade?

I’m left wondering if this model gives us a way to think about how a lot of successful propaganda works—and also whether we should be more worried about propaganda. I am also left with a much more favourable impression of Mill (and Popper?), who seemed to have been quite worried about private censorship (in the form of peer pressure, etc.) being as great a threat to free speech as government censorship.

Roko:

This stuff will be much easier to study with AIs as models for humans. Computational sociology may take off soon.

Sheikh Abdur Raheem Ali:

I read this between 1:1s at EAG. I think that the post is accurate insofar as I understand political power.
