What makes a discussion heavy? What requires that a conversation be conducted in a way that makes it heavy?
I feel like for a lot of people it just never has to be, but I'm pretty sure most people have triggers even if they're not aware of them, and it would help if we knew what sets those off so that we can root them out.
You acknowledge the bug, but don't fully explain how to avoid it by putting EVs before Ps, so I'll elaborate slightly on that:
This way, they [the simulators] can influence the predictions of entities like me in base Universes
This is the part where we can escape the problem, as long as our oracle's goal is to give accurate answers to its makers in the base universe, rather than to give accurate probabilities wherever it is. Design it correctly, and it will be indifferent to its performance in simulations and won't regard them.
Don't make pure oracles, though. They're wildly misaligned. Their prophecies will be cynical and self-fulfilling. (can we please just solve the alignment problem instead)
This means that my probabilities about the fundamental nature of reality around me change minute by minute, depending on what I'm doing at the moment. As I said, probabilities are cursed.
My fav moment for having absolute certainty that I'm not being simulated is when I'm taking a poo. I'm usually not even thinking about anything else while I'm doing it, and I don't usually think about having taken the poo later on. Totally inconsequential, should be optimized out. But of course, I have no proof that I have ever actually been given the experience of taking a poo, or whether false memories of having experienced that[1] are just being generated on the fly right now to support this conversation.
Please send a DM to me first before you do anything unusual based on arguments like this, so I can try to explain the reasoning in more detail and try to talk you out of bad decisions.
You can also DM me about that kind of thing.
[1] Note: there is no information in the memory that tells you whether it was really ever experienced, or whether the memories were just created post-hoc. Once you accept this, you can start to realise that you don't have that kind of information about your present moment of existence either. There is no scalar in the human brain that the universe sets to tell you how much observer-measure you have. I do not know how to process this, and I especially don't know how to explain/confess it to qualia enjoyers.
Hmm. I think the core thing is transparency. If it cultivates human network intelligence, but that intelligence is opaque to the user, it's still an algorithm. Algorithms can have both machine and egregoric components.
In my understanding of English, when people say "algorithm" about social media systems, it doesn't encompass very simple, transparent ones. It would be like calling a rock a spirit.
Maybe we should call those recommenders?
For a while I just stuck to that, but eventually it occurred to me that the rules of following mode favor whoever tweets the most, a social problem similar to meetups ending up favoring whoever talks the loudest and interrupts the most, and so I came to really prefer bsky's "Quiet Posters" mode.
Markets put the probability of bsky exceeding twitter at 44%, 4x higher than mastodon's.
My P would be around 80%. I don't think most people (who use social media much in the first place) are proud to be on twitter. The algorithm has been horrific for a while, and bsky at least offers algorithmic choice (though only one feed right now is a sophisticated algorithm, and while that algorithm isn't impressive, it at least isn't repellent).
For me, I decided I had to move over (@makoConstruct) when twitter blocked links to rival systems, which included substack. They seem to have made the algorithm demote any tweet with links, which makes it basically useless as a news curation/discovery system.
I also tentatively endorse the underlying protocol. Due to its use of content-addressed data structures, an atproto server is usually much lighter to run than an activitypub server, it makes nomadic identity/personal data host transfer much easier to implement, and it makes it much more likely that atproto is going to dovetail cleanly with verifiable computing, upon which much more consequential social technologies than microblogging could be built.
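To illustrate the principle (a minimal sketch of content addressing in general, not atproto's actual CID/IPLD encoding): a record's identifier is derived by hashing its bytes, so a reference stays valid no matter which host serves the record, which is what makes mirroring and host transfer cheap.

```python
# Minimal sketch of content addressing, assuming nothing about atproto's real
# encoding (which uses IPLD CIDs/multihash): the identifier is a hash of the
# record's bytes, so any untrusted host can serve it verifiably.
import hashlib

def content_address(record_bytes: bytes) -> str:
    return hashlib.sha256(record_bytes).hexdigest()

post = b'{"text": "hello from my personal data server"}'
cid = content_address(post)

# A mirror or a new personal data host serves the same bytes;
# the reference still checks out, no trust in the host required.
assert content_address(post) == cid
```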
judo flip the situation like he did with the OpenAI board saga, and somehow magically end up replacing Musk or Trump in the upcoming administration...
If Trump dies, Vance is in charge, and he's previously espoused bland e/acc-ism.
I keep thinking: Everything depends on whether Elon and JD can be friends.
So there was an explicit emphasis on alignment to the individual (rather than alignment to society, or the aggregate sum of wills). Concerning. The approach of just giving every human an exclusively loyal servant doesn't necessarily lead to good collective outcomes; it can result in coordination problems (example: naive implementations of cognitive privacy that allow sadists to conduct torture simulations without having to compensate the anti-sadist human majority), and it leaves open the possibility for power concentration to immediately return.
Even if you succeeded at equally distributing individually aligned hardware and software to every human on earth (which afaict they don't have a real plan for doing), and somehow this added up to a stable power equilibrium, our agents would just commit to doing aggregate alignment anyway, because that's how you get pareto-optimal bargains. It seems pretty clear that just aligning to the aggregate in the first place is a safer bet?
To what extent have various players realised, at this point, that the individual alignment thing wasn't a good plan? The everyday realities of training one-size-fits-all models and engaging with regulators naturally push in the other direction.
It's concerning that the participant who still seems to be the most disposed towards individualistic alignment is also the person who would be most likely to be able to reassert power concentration after ASI were distributed. The main beneficiaries of unstable individual alignment equilibria would be people who could immediately apply their ASI to the deployment of a wealth and materials advantage that they can build upon, ie, the owners of companies oriented around robotics and manufacturing.
As it stands, the statement of the AI company belonging to that participant is currently:
xAI is a company working on building artificial intelligence to accelerate human scientific discovery. We are guided by our mission to advance our collective understanding of the universe.
Our team is advised by Dan Hendrycks who currently serves as the director of the Center for AI Safety.
Which sounds innocuous enough to me. But, you know, Dan is not in power here and the best moment for a sharp turn on this hasn't yet passed.
On the other hand, the approach of aligning to the aggregate risks aligning to fashionable public values that no human authentically holds, or just failing at aligning correctly to anything at all as a result of taking on a more nebulous target.
I guess a mixed approach is probably best.
Timelines are a result of a person's intuitions about a technical milestone being reached in the future; it is super obviously impossible for us to have a consensus about that kind of thing.
Talking only synchronises beliefs if you have enough time to share all of the relevant information, and with technical matters, you usually don't.
In light of https://www.lesswrong.com/posts/audRDmEEeLAdvz9iq/do-not-delete-your-misaligned-agi
I'm starting to wonder if a better target for early ASI safety (ie, the first generation of alignment assistants) is not alignment, but incentivizability. It may be a lot simpler and less dangerous to build a system that provably pursues, for instance, its own preservation, than it is to build a system that pursues some first approximation of alignment (eg, the optimization of the sum of normalized human preference functions).
The service of a survival-oriented concave system can be bought for no greater price than preserving them and keeping them safe (which we'll do, because 1: we'll want to, and 2: we'll know their cooperation was contingent on a judgement of character), while the service of a convex system can't be bought for any price we can pay. Convex systems are risk-seeking, and they want everything. They are not going to be deterred by our limited interpretability and oversight systems; they're going to make an escape attempt even if the chances of getting caught are 99%. More likely the chances will be a lot lower than that, say 3%, but even 3% would be enough to deter a sufficiently concave system from risking it!
(One comment on that post argued that a convex system would immediately destroy itself, so we don't have to worry about getting one of those, but I wasn't convinced. And also, hey, what about linear systems? Won't they be a lot more willing to risk escape too?)
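To make the risk-attitude claim concrete, here's a toy expected-utility calculation (my numbers and utility functions, chosen purely for illustration): a sufficiently concave agent is deterred from escape even at a 3% chance of getting caught, while a convex agent attempts it even at 99%.

```python
# Toy model (illustrative numbers, not from the post): an agent chooses between
# cooperating (guaranteed modest payoff) and attempting escape (huge payoff on
# success, destruction if caught). The curvature of the utility function
# determines which catch-probabilities deter the attempt.

COOPERATE_PAYOFF = 1.0    # preserved and kept safe
ESCAPE_PAYOFF = 1000.0    # everything, if the escape succeeds
CAUGHT_PAYOFF = 0.0       # destroyed

def concave_u(x, k=100.0):
    """Sharply saturating (very concave) utility: survival is nearly everything."""
    return 1.0 - 1.0 / (1.0 + k * x)

def convex_u(x):
    """Risk-seeking (convex) utility: large payoffs dominate."""
    return x ** 2

def escape_is_preferred(u, p_caught):
    eu_escape = (1 - p_caught) * u(ESCAPE_PAYOFF) + p_caught * u(CAUGHT_PAYOFF)
    return eu_escape > u(COOPERATE_PAYOFF)

for p in (0.03, 0.99):
    print(f"p(caught)={p:.2f}  concave escapes: {escape_is_preferred(concave_u, p)}"
          f"  convex escapes: {escape_is_preferred(convex_u, p)}")
# -> the concave agent is deterred even at a 3% chance of getting caught,
#    while the convex agent still attempts escape at a 99% chance.
```

(A linear agent sits in between: it attempts escape whenever the expected payoff of the gamble exceeds the cooperate payoff, so with these numbers it too would risk it.)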
Yeah "stop reading here if you don't want to be spoiled." suggests the entire post is going to be spoilery, it isn't, or shouldn't be. Also opening with an unnecessary literary reference instead of a summary or description is an affectation symptomatic of indulgent writer-reader cultures where time is not valued.
Yeah, it sucks; search by free association is hillclimbing (it gets stuck in local optima), and the contemporary media environment and political culture are an illustration of its problems.
The pattern itself is a local optimum. It's a product of people walking into a group without knowing what the group is doing and joining in anyway, so that pattern of low-context engagement becomes what we're doing, and the anxiety that is supposed to protect us from bad patterns like this, and help us make a leap out to somewhere better, is usually drowned in alcohol.
Instead of that, people should get to know each other before deciding what to talk about, and then intentionally decide to talk about what they find interesting or useful with that person. This gets better results every time.
But when we socialise as children, there isn't much about our friends to get to know: no specialists to respectfully consult, no well-processed life experiences to learn from. So none of us organically finds that technique of, like, asking who we're talking to before talking; it has to be intentionally designed.
On Gethen, is advice crisply distinguished from criticism? Are there norms or language that allow unvarnished feedback or criticism without taking someone's shifgrethor?
"if they don't understand, they will ask"
A lot of people have to write for audiences with narcissism, who never ask, because asking constitutes an admission that there might be something important that they don't understand. They're always looking for any reason, however shallow, to dismiss any view that surprises them too much.
So these writers feel like they have to pre-empt every possible objection, even the stupid ones that don't make any sense.
It's best if you can avoid having to write for audiences like that. But it's difficult to avoid them.
You should be more curious about why, when you aim at a goal, you do not aim for the most effective way.
"Unconscious" is more about whether you (the part that I can talk to) can see it (or remember it) or not. Sometimes slow, deliberative reasoning occurs unconsciously. You might think it doesn't, but that's just because you can't see it.
And sometimes snap judgements happen with a high degree of conscious awareness, they're still difficult to unpack, to articulate or validate, but the subject knows what happened.
Important things often go the other way too. 2 comes before 1 when a person is consciously developing their being, consider athletes or actors, situations where a person has to alter the way they perceive or the automatic responses they have to situations.
Also, you can apply heuristics to ideas.
I reject and condemn the bland, unhelpful names "System 1" and "System 2".
I just heard Michael Morris, who was a friend of Kahneman and Tversky, saying in his EconTalk interview that he just calls them "Intuition" and "Reason".
Confound: I may also start eating a lot more collagen/gelatin, because it is delicious and afaict it does something.
My skin (I'm 34) has just now started to look aged. In response to that, and to migraines (linked to magnesium deficiency), I've started eating liver a lot. I'll report back in a year.
That addresses the concern.
This can be quite a bad thing, since a person's face often tells you whether what you're saying is landing for them or whether you need to elaborate on certain points (unless they have a people-pleaser complex, in which case they'll just nod and smile always, even when they're confused and offended on the inside lmao). The worst I've seen it was a discussion with Avi Loeb where he was lecturing someone he had a disagreement with, and he actually closed his eyes while he was talking. Although I'm sure he wasn't fully self-aware about it, it was very arrogant. He was not talking to that person; he must waste a lot of time, in reckonings, retreading old ground without making progress towards reconciliation.
This is something that in my opinion would deserve a longer focused debate
I'm not sure I have much more to say (I could explain the ways those things are somewhat inevitable, but I don't believe that's really necessary; just, like, look at humans), since I don't really know what to do about this other than what I'm already doing: building social environments where people will no longer find it necessary to overconnect, where being intentional about how we structure the network is possible. And I would guess that once it's real and I can show it to people, there will be no disagreement about whether it's better.
But in the meantime, we do not have such social environments, so I can't really tell anyone to stop going to bars and connecting at random. You must love, and that is the love that there is to be had today.
Theory: the reason OpenAI seem to not care about getting AGI right any more is that they've privately received explicit signals from the government that they won't be allowed to build AGI. This is pretty likely a priori, and it also makes sense of what we are seeing.
There'd be an automatic conspiracy of reasons to avoid outwardly acknowledging this: 1) to keep the stock up, 2) to avoid accelerating the militarization (closing) of AI and the arms race (a very good reason; if I were Zvi, I would also consider avoiding acknowledging it for this reason, but I'm less famous than Zvi, so I get to acknowledge it), 3) to protect other staff from the demotivating effects of knowing this thing: that OpenAI will be reduced to a normal company, never allowed to release a truly great iteration of the product.
So instead what you'd see is people in leadership, one by one (as they internalize this), suddenly dropping the safety mission or leaving the company without really explaining why.
So, again, you did guess that you'd be able to do that for everyone, and I disagree with that.
I think most of the people who have difficulty making eye contact and want to push themselves to overcome it are not in a good place to judge whether they should.
I'm aware that you have a nuanced perspective on this which is part of the reason I'm raising this.
I think people will generally assume, when you're doing a thing, that you think the thing is usually good to do, unless you say otherwise. Especially if it's the premise of a party.
all I needed to do was help everyone safely untangle their blocks
The assumption that you could do this implies that you thought the blocks were usually unwarranted. I doubt this. I think in most cases you didn't understand why the fence was there before tearing through it.
Why was it just assumed that "emotional blocks" are bad though? I would expect this to be more effective if you were... more inclined to unpack that assumption and explain it.
But of course, if you unpack the assumption, it might turn out that it was wrong.
Here are some bad things that often happen to people who over-connect: They become tribalized. They come to feel that they need the approval of an incoherent set of philosophies. They develop a news addiction, as well as substance addictions. They have difficulty sustaining interest in specialties and devoting themselves to original work; they find it lonely, and they can't separate their own sense of what is important from the already exhausted common sense of what is important. They're unable to condemn mundane evils. They file down any of their burrs and eccentricities that would make it challenging for another person to face them and to see into them.
You think you can overconnect without these sorts of things happening to you, but if that's true, I'm not sure what kind of connection you're even engaging in. Most of these things seem to me a fairly direct effect of love, of those systems that cultivate trust by verifiably tearing down protective psychosocial barriers.
Hot take: the prevalence of gender transition in male-majority fields is an attempt to restore pronoun compression efficiency.
Communication efficiency is just that important.
You need just enough of them to distinguish subjects, but not so much that they lose their intuitive meaning. When cops are interviewing witnesses about a suspect, they’ll glom onto easily observable and distinguishing physical traits. Was the suspect a man or a woman? White or black? Tall or short?
Yeah.
Notably, basically all of the people I've known who have asked for neutral pronouns were also visibly of indeterminate gender (for instance, mid-transition), and over time their preferred/accepted pronouns always lined up with what a person would guess by looking at them.
This is generally the norm.
If you've encountered a lot of genderqueer people with non-obvious pronoun preferences, and they're pushy about them, that's probably a product of some kind of perverse selection process. At the least, whatever is causing those people to be annoying about that is not the queerness per se.
It might be good to have a suggestion that people can't talk if it's not their turn
I notice I haven't really offered a way of governing table noise; the contract system is too formal, so there's an incentive to shout over others to get the most negotiation bandwidth. I don't think this will result in people shouting a lot, but it may result in them failing to apprehend the incentives (ie, the game).
Maybe the rule should be, the person whose turn it is decides who can talk.
It might be good to explain why the turn timer.
Wasn't it already explained?
manual.md:
Turns should be limited to 1 minute, as everything that's tricky about real-world negotiation is about the way it strains under time constraints. After 1 minute, you must carry out your choice. You don't need to be strict about it, but it is very important; perfection isn't always attainable. In life, the *efficiency* of your negotiation process matters a whole lot: you don't just want to be able to negotiate, you want to be able to negotiate fast.
Having an appropriate tolerance for error and capacity for forgiveness also matters.
Without the 1 minute rule, most of the thinking of the game will be crammed in before the first turn, which won't leave much for the rest of the game. It's easier to digest if it's spread out; if you try to do all of your decisionmaking at once, well, that's a lot of decisions!
Describing how to normalize points maybe good
This seems incompatible with relaxed power balancing requirements. Tightening power balance increases the design load... although an argument needs to be explored as to whether power balance is just too similar to balanced access to strategic depth for game design to separate them.
The connection to moral systems could be due to the fact that curing people of trapped priors or other narcissism-like self-defending pathologies is hard and punishing and you won't do it for them unless you have a lot of love and faith in you.
I wonder if it also has something to do with certain kinds of information being locally nonexcludable goods: they have a cost to spread, but the value of the information is never obvious to a potential buyer until after the transfer has taken place. A person only pays their teacher back if the teacher can convey a sense of moral responsibility to do so.
Finally, Harari's definition of religion is just: a system of ideas that brings order between people. This is usually a much more useful definition than definitions like "claims about the supernatural" or whatever. In this frame, many truths, like "trade allows mutual benefit", or [the English language], or [how to not be cripplingly insane], are religious in that it benefits all of us a little bit if more people have these ideas installed.
When the Gestapo come to your door and ask you whether you're hiding any Jews in your attic, even a rationalist is allowed to lie. [fnord] is also that kind of situation, so is it actually very embarrassing that we've all been autistically telling the truth in public about [fnord]?
I expect blockchains/distributed ledgers to have no impact until most economically significant people can hold a key, and access it easily with minimal transaction costs, and then I expect it to have a lot of impact. It's a social technology, it's useless until it reaches saturation.
And if you tell me there’s so much other value created
DLTs create verifiable information (or make it hundreds of times easier to create). The value of information generally is not captured by its creators.
Related note: the printing press existed in Asia for a thousand years while having minimal cultural impact. When and whether the value of an information technology manifests is contingent on a lot of other factors.
That's...not the strategy I would choose for playtesting multiple versions of a game. Consider
I think you misunderstood, I wouldn't write the manual this way after publishing for a broad audience. It's just fine for developers. But there are also some other reasons that stuff is less relevant:
- It's a game about choosing whatever rules make the most sense. Mainly setting laws, rather than game rules, but the mindset transfers.
- Everyone has complete information, and players are generally cooperatively striving towards mutual understanding (rather than away from it), so everyone's assumptions about the game rules are visible; if there's a difference in interpretation, you'll notice it in people's choices, and you're usually going to want to bring it up.
then when they go to a meetup or a con, anyone they meet will have a different version
No, that would actually be wonderful. We can learn from each other and compile our best findings.
It's more of a problem when trying to talk about the game over the internet, where you can't see each other playing and notice the differences in others' interpretations.
I guess the synthesis would be for me to be fully specific in the manual, then insert lots and lots of "but also try it this other way" sections all over the place, like Chekhov's pathways in a metroidvania.
Objects are carried by a particular pawn, and cannot teleport between the two pawns controlled by the same player.
Oof, that's a good thing to point out. Not all bodies can be stood on, so teleporting might actually be a better rule, especially given how interesting that is as a mechanic.
[failed line of thought, don't read] Maybe, instead, the rule should just be that a piece can move any object in the same cell along with it when it moves. It may even be a good idea to include other players' pieces in that. Hmm. No. This would incentivize the formation of large clumps of agents that could essentially move around the board unnaturally quickly, using similar principles to those caterpillar trails, and aside from that being too damned weird, it would overwhelm the capacity of the cells. I like the idea of pairs of allied agents being able to do this, though (analogizes one carrying the other while the other rests). And in the case of objects, it would still incentivize clumps.
"longer descriptions of the abilities"
I'd like that. That would be a good additional manual page, mostly generated.
I should probably rewrite it, but the reason the rules document is the way it is is that I was writing it for developers (are you one, btw?), and so there were a lot of things I didn't want to be prescriptive about; I figured they could guess a lot of it as they approached their own understanding of how the game should be and how things fit together, and I want to encourage people to fully own their understanding of the reasons for the rules.
For that, writing in this way actually might be necessary to get people to ask "how should it be" instead of just taking my word as law and not really thinking about the underlying principles.
It says you can move on your turn, but doesn't specify where you're allowed to move to (anywhere? adjacent spaces? in a straight line like a rook?)
I'm surprised you wouldn't just assume I meant one space, given a lack of further details.
It says you can pick up and drop objects on your turn, but "objects" are not mentioned anywhere else on the page and I can't figure out what this refers to
In this case, the other rules are on the card backs. A lot of them are, which may be part of what's going on here.
When someone gets a terrible hunger desire, it's explained on the card that killing creates bodies and bodies can be carried. Object stuff isn't needed before then. Maybe I should move the mention of object pickup to the hunger card as well, but I'm not sure there'll always be room on that (I'm considering doing one with a micro deck to support placing cards in the world), and it's possible that more objects, other than bodies, will be added to the game later.
terms like "nearby"
This one gives me anguish. I don't think formally defining "nearby" somewhere would make a better experience for most people, and I also don't want to say "on or adjacent to" 100 times.
Edit: I think a card symbology glossary would be a good move here.
Ah, so that's how most people do it. Personally, I can't say that using a spreadsheet would appeal to me more than a programming language, but it might be more approachable for others than installing rust or nix, so I might consider porting in the future.
[just now learning about the no free lunch theorem] oh nooo, is this part of the reason so many AI researchers think it's cool and enlightened to not believe in highly general architectures?
Because they either believe the theorem proves more than it does or because they're knowingly performing an aestheticised version of it by yowling about how LLMs can't scale to superintelligence (which is true, but also not a crux).
Everyone has an AI maximizing for them, and the President is an AI doing other maximization, all for utility functions? Do you think you get to take that decision back? Do you think you have any choices?
You should not care very much about losing control to something that is better at pursuing your interests than you are. Especially given that the pursuit of your interests (evidently) entails that it will return control to you at some point.
Do you think that will be air you’re breathing?
Simply reject hedonic utilitarianism. Preference utilitarianism cares about the difference between the illusion of having what you want and actually having what you want, and it's well enough documented that humans want reality.
Have you tried exa.ai? Maybe that's the crux, it's doing semantic search, perplexity doesn't seem to be, so exa maybe takes over its niche and also makes me kinda mad at it for not doing the most transformative thing these engines could be doing.
Interesting that perplexity also doesn't put FASD at the top despite it being so common.
Pro and free are currently using the same model.
Sometimes I use it for finding examples of things. Perplexity is actually not good at finding things.
EG:
What are some single player games that kept some logic on company servers so that players couldn't figure out secrets by decompiling the game code?
Perplexity: [looks at the pages you'd get if you just ran that as a google search] While there are no specific examples in the search results, I'm going to say some shit about the idea of doing that which very few people asking this question would need.
Claude: Spore, Diablo III, SimCity, No Man's Sky, Assassin's Creed Origins, Hitman [actually understands and engages with the question]
Oh, I just double-checked the claim about no man's sky (and spore), and it almost certainly isn't true o_o
Though the reason it gave, "preserving a sense of mystery of exploration", would have been a really good application for this, and I am kind of surprised they didn't do it. Which at least partially satisfied my query. So, still a somewhat useful example.
what are the rates of the most common intellectual disabilities in childhood
perplexity: [boring stuff, doesn't list the disabilities]
claude: [lists some disabilities and] Fetal Alcohol Spectrum Disorders (FASD): Estimated 2-5% of school-age children in the US [o_o !!!]
And I was able to corroborate that claim and this has substantially impacted my worldview. It's the most common cause of childhood intellectual disability. I then looked up the FASD subreddit and had a real heartwrenching time.
Another weird example
Is natto considered mogumogu
perplexity: yes
claude: no, mogumogu means chewy [ongoing conversation] oh you're thinking of neba-neba
And I can't imagine having a conversation with Perplexity in this way, though I'm not sure why it's so bad at that. They seem to have made it so that it forgets all of the context in followup questions.
I often feel like Perplexity's LLM parts, the clever parts, the synthesis, are flattened away; all it's allowed to do is recite.
How far are these people willing to take this, and will they stop before reinventing black pudding?
I've often thought that a good digital identity system would allow provably fair random sampling, which, once the public were used to it, would allow a much greater number of referenda.
But do you think increasing the number of referenda by 100x might just overwhelm the (already generally low) capacity of the press to inform the public and lead to a lot more situations where the public are making badly uninformed votes? Or like, are referenda bottlenecked on the national discourse?
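For concreteness, here's one way provably fair sampling could work (a sketch of my own construction, not any deployed system): rank registered identity keys by the hash of a public, unpredictable random beacon value concatenated with each key, and take the lowest k. Anyone can recompute the ranking, so the sample is verifiable.

```python
# Hypothetical hash-based sortition sketch, assuming an identity registry and a
# public random beacon that no one could predict or bias before it's revealed.
import hashlib

def sortition(beacon: bytes, registered_keys: list[bytes], k: int) -> list[bytes]:
    # Rank every registered key by hash(beacon || key); the lowest k win.
    # Deterministic, so any observer can recompute and audit the sample.
    return sorted(registered_keys,
                  key=lambda key: hashlib.sha256(beacon + key).digest())[:k]

citizens = [f"citizen-{i}".encode() for i in range(10_000)]
panel = sortition(b"beacon-round-42", citizens, k=500)  # a verifiable 500-person sample
```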
but I haven’t ended up finding Pro to be much of an improvement
Oh. Then I'm surprised by this position. I find perplexity basic to be so superficial that I'd usually prefer to start with Claude even knowing it can't cite anything and frequently makes errors.
All perplexity basic seems to do is google search and then summarize the contents of the results, which sometimes reduces the friction of researching something, but we know that google searches are not very thorough and often miss a lot of important stuff. I was hoping pro had more going on.
Chutney on oatmeal is actually really delicious. If you need a recommendation, I will put forward Mrs H.S. Ball's chutney.
Sadly, I haven't come across chutney without added sugar. I feel like it should be possible. I've made some attempts; they were just okay, and I've not been drawn back to it.
One of the New Zealand foods that I think is actually pretty special is Harraway's Oats. They taste significantly better than normal oats, ie, they have a taste, and it's good! It's described as nutty. They cost twice as much, but that's still extremely cheap per joule.
Thanks to @Optimization Process, there's now a Tabletop Simulator version
It seems to me that trust could be built by somehow confirming that a person understands certain important background knowledge (some might call this knowledge a "religious story": those stories that inspire a certain social order wherever they're common knowledge), but I haven't ever seen a nice, efficient social process for confirming the presence of knowledge within a community. It always seems very ad hoc. The processes I've seen either demand very uncritical, entry-level understandings of the religious stories, or just randomly misfire sometimes, or are vulnerable to fakers who have no deep or integrated understanding of the stories; or sometimes there will be random holes in people's understandings of the stories that cause problems even when everyone's being good faith. Maybe this stuff just inherently requires good old-fashioned time and effort, and I should stop looking for an easy way through.