Nathan Young's Shortform
post by Nathan Young · 2022-09-23T17:47:06.903Z · LW · GW · 79 comments
Comments sorted by top scores.
comment by Nathan Young · 2024-07-16T12:26:43.469Z · LW(p) · GW(p)
Trying out my new journalist strategy.
Replies from: Raemon↑ comment by Raemon · 2024-07-16T19:35:29.062Z · LW(p) · GW(p)
Nice.
Replies from: Raemon↑ comment by Raemon · 2024-07-17T00:55:05.636Z · LW(p) · GW(p)
though, curious to hear an instance of it actually playing out
Replies from: Nathan Young, Nathan Young↑ comment by Nathan Young · 2024-07-18T13:11:32.896Z · LW(p) · GW(p)
So far a journalist just said "sure". So n = 1 it's fine.
↑ comment by Nathan Young · 2024-07-24T01:05:23.748Z · LW(p) · GW(p)
I have 2 so far. One journalist agreed with no bother. The other frustratedly said they couldn't guarantee that and tried to negotiate. I said I was happy to take a bond, they said no, which suggested they weren't that confident.
Replies from: Raemon↑ comment by Raemon · 2024-07-24T01:07:46.530Z · LW(p) · GW(p)
I guess the actual resolution here will eventually come from seeing the final headlines and that, like, they're actually reasonable.
Replies from: Nathan Young↑ comment by Nathan Young · 2024-07-24T17:35:00.204Z · LW(p) · GW(p)
I disagree. They don't need to be reasonable so much as I now have a big stick to beat the journalist with if they aren't.
"I can't change my headlines"
"But it is your responsibility right?"
"No"
"Oh were you lying when you said it was"
comment by Nathan Young · 2024-06-22T12:30:46.440Z · LW(p) · GW(p)
Joe Rogan (the largest podcaster in the world) repeatedly giving concerned but mediocre x-risk explanations suggests that people who have contacts with him should try to get someone on the show to talk about it.
eg listen from 2:40:00, though there were several bits like this during the show.
Replies from: zac-hatfield-dodds, LosPolloFowler↑ comment by Zac Hatfield-Dodds (zac-hatfield-dodds) · 2024-06-22T16:35:32.219Z · LW(p) · GW(p)
He talked to Gladstone AI founders a few weeks ago; AGI risks were mentioned but not in much depth.
↑ comment by Stephen Fowler (LosPolloFowler) · 2024-06-23T00:10:58.187Z · LW(p) · GW(p)
Obvious and "shallow" suggestion. Whoever goes on needs to be "classically charismatic" to appeal to a mainstream audience.
Potentially this means someone from policy rather than technical research.
Replies from: robert-lynn↑ comment by Foyle (robert-lynn) · 2024-06-23T11:14:08.215Z · LW(p) · GW(p)
AI safety desperately needs to buy in or persuade some high-profile talent to raise public awareness. The business-as-usual approach of the last decade is clearly not working - we are sleepwalking towards the cliff. Given how timelines are collapsing, the problem to be solved has morphed from a technical one into a pressing social one - we have to get enough people clamouring for a halt that politicians will start to prioritise appeasing them ahead of their big tech donors.
It probably wouldn't be expensive to rent a few high-profile influencers with major reach amongst impressionable youth - a demographic that is easily convinced to buy into, and campaign against, end-of-the-world causes.
comment by Nathan Young · 2024-09-10T10:40:09.481Z · LW(p) · GW(p)
I read @TracingWoodgrains [LW · GW] piece on Nonlinear and have further updated that the original post by @Ben Pace [LW · GW] was likely an error.
I have bet accordingly here.
comment by Nathan Young · 2024-05-21T16:08:41.532Z · LW(p) · GW(p)
A problem with overly kind PR is that many people know that you don't deserve the reputation. So if you start to fall, you can fall hard and fast.
Likewise it incentivises investigation, since you've built a reputation you can't back up.
If everyone thinks I am lovely, but I am two-faced, I create a juicy story any time I am cruel. Not so if I am known to be grumpy.
eg My sense is that EA did this a bit with the press tour around What We Owe the Future. It built up a sense of wisdom that wasn't necessarily deserved, so with FTX it all came crashing down.
Personally I don't want you to think I am kind and wonderful. I am often thoughtless and grumpy. I think you should expect a mediocre to good experience. But I'm not Santa Claus.
I am never sure whether rats are very wise or very naïve to push for reputation over PR, but I think it's much more sustainable.
@ESYudkowsky can't really take a fall for being goofy. He's always been goofy - it was priced in.
Many organisations think they are above maintaining the virtues they profess to possess, instead managing their reputation with media relations.
In doing this they often fall harder eventually. Worse, they lose out on the feedback from their peers accurately seeing their current state.
Journalists often frustrate me as a group, but they aren't dumb. Whatever they think is worth writing, they probably have a deeper sense of what is going on.
Personally I'd prefer to get that in small sips, such that I can grow, rather than having to drain my cup to the bottom.
comment by Nathan Young · 2024-07-23T16:51:47.724Z · LW(p) · GW(p)
Thanks to the people who use this forum.
I try and think about things better and it's great to have people to do so with, flawed as we are. In particular @KatjaGrace [LW · GW] and @Ben Pace [LW · GW].
I hope we can figure it all out.
comment by Nathan Young · 2024-05-30T16:36:29.160Z · LW(p) · GW(p)
Feels like FLI is a massively underrated org. Cos of the whole Vitalik donation thing they have like $300mn.
Replies from: habryka4↑ comment by habryka (habryka4) · 2024-05-30T22:46:27.126Z · LW(p) · GW(p)
Not sure what you mean by "underrated". The fact that they have $300MM from Vitalik but haven't really done much anyways was a downgrade in my books.
Replies from: Nathan Young↑ comment by Nathan Young · 2024-06-23T13:03:50.791Z · LW(p) · GW(p)
Under considered might be more accurate?
And yes, I agree that seems bad.
comment by Nathan Young · 2024-05-24T19:43:25.195Z · LW(p) · GW(p)
Given my understanding of epistemic and necessary truths it seems plausible that I can construct epistemic truths using only necessary ones, which feels contradictory.
Eg 1 + 1 = 2 is a necessary truth
But 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 = 10 is epistemic. It could very easily be wrong if I have miscounted the number of 1s.
This seems to suggest that necessary truths are just "simple to check" and that sufficiently complex necessary truths become epistemic because of a failure to check an operation.
Similarly "there are 180 degrees on the inside of a triangle" is only necessarily true in spaces such as R2. It might look necessarily true everywhere but it's not on the sphere. So what looks like a necessary truth actually an epistemic one.
What am I getting wrong?
Replies from: JBlack, Leviad, cubefox↑ comment by JBlack · 2024-05-25T03:53:54.597Z · LW(p) · GW(p)
Is it a necessary non-epistemic truth? After all, it has a very lengthy partial proof in Principia Mathematica, and maybe they got something wrong. Perhaps you should check?
But then maybe you're not using a formal system to prove it, but just taking it as an axiom or maybe as a definition of what "2" means using other symbols with pre-existing meanings. But then if I define the term "blerg" to mean "a breakfast product with non-obvious composition", is that definition in itself a necessary truth?
Obviously if you mean "if you take one object and then take another object, you now have two objects" then that's a contingent proposition that requires evidence. It probably depends upon what sorts of things you mean by "objects" too, so we can rule that one out.
Or maybe "necessary non-epistemic truth" means a proposition that you can "grok in fullness" and just directly see that it is true as a single mental operation? Though, isn't that subjective and also epistemic? Don't you have to check to be sure that it is one? Was it a necessary non-epistemic truth for you when you were young enough to have trouble with the concept of counting?
So in the end I'm not really sure exactly what you mean by a necessary truth that doesn't need any checking. Maybe it's not even a coherent concept.
↑ comment by Drake Morrison (Leviad) · 2024-05-24T20:40:01.798Z · LW(p) · GW(p)
What do you mean by "necessary truth" and "epistemic truth"? I'm sorta confused about what you are asking.
I can be uncertain about the 1000th digit of pi. That doesn't make the digit being 9 any less valid. (Perhaps what you mean by necessary?) Put another way, the 1000th digit of pi is "necessarily" 9, but my knowledge of this fact is "epistemic". Does this help?
↑ comment by cubefox · 2024-05-25T18:32:39.222Z · LW(p) · GW(p)
Just a note, in conventional philosophical terminology you would say:
Eg 1 + 1 = 2 is an epistemic necessity
But 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 = 10 is an epistemic contingency.
One way to interpret this is to say that your degree of belief in the first equation is 1, while your degree of belief in the second equation is neither 1 nor 0.
Another way to interpret it is to say that the first is "subjectively entailed" by your evidence (your visual impression of the formula), but not the latter, nor is its negation. E subjectively entails H iff P(H | E) = 1, where P is a probability function that describes your beliefs.
In general, philosophers distinguish several kinds of possibility ("modality").
- Epistemic modality is discussed above. The first equation seems epistemically necessary, the second epistemically contingent.
- With metaphysical modality (which roughly covers possibility in the widest natural sense of the term "possible"), both equations are necessary, if they are true. True mathematical statements are generally considered necessary, except perhaps for some more esoteric "made-up" math, e.g. more questionable large cardinal axioms. This type is usually implied when the type of modality isn't specified.
- With logical modality, both equations are logically contingent, because they are not logical tautologies. They instead depend on some non-logical assumptions like the Peano axioms. (But if logicism is true, both are actually disguised tautologies and therefore logically necessary.)
- Nomological (physical) modality: The laws of physics don't appear to allow them to be false, so both are nomologically necessary.
- Analytic/synthetic statements: Both equations are usually considered true in virtue of their meaning only, which would make them analytic (this is basically "semantic necessity"). For synthetic statements their meaning would not be sufficient to determine their truth value. (Though Kant, who came up with this distinction, argues that arithmetic statements are synthetic, although synthetic a priori, i.e. not requiring empirical evidence.)
Anyway, my opinion on this is that "1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 = 10" is interpreted as the statement "this bunch of ones [referring to screen] added together equal 10", which has the same truth value, but not the same meaning. The second meaning would be compatible with slightly more or fewer ones on screen than there actually are, which would make the interpretation compatible with a similar false formula which is different from the actual one. The interpretation appears to be synthetic, while the original formula is analytic.
This is similar to how the expression "the Riemann hypothesis" is not synonymous to the Riemann hypothesis, since the former just refers to a statement instead of expressing it directly. You could believe "the Riemann hypothesis is true" without knowing the hypothesis itself. You could just mean "this bunch of mathematical notation expresses a true statement" or "the conjecture commonly referred to as 'Riemann hypothesis' is true". This belief expresses a synthetic statement, because it refers to external facts about what type of statement mathematicians happen to refer to exactly, which "could have been" (metaphysical possibility) a different one, and so could have had a different truth value.
Basically, for more complex statements we implicitly use indexicals ("this formula there") because we can't grasp it at once, resulting in a synthetic statement. When we make a math mistake and think something to be false that isn't, we don't actually believe some true analytic statement to be false, we only believe a true synthetic statement to be false.
comment by Nathan Young · 2024-01-02T10:56:49.872Z · LW(p) · GW(p)
I am trying to learn some information theory.
It feels like the bits of information between 50% and 25%, and between 50% and 75%, should be the same.
But for probability p, the information is -log2(p).
But then the information of .5 -> .25 is 1 bit, but from .5 to .75 it's .41 bits. What am I getting wrong?
I would appreciate blogs and youtube videos.
Replies from: mattmacdermott↑ comment by mattmacdermott · 2024-01-02T12:16:05.257Z · LW(p) · GW(p)
I might have misunderstood you, but I wonder if you're mixing up calculating the self-information or surpisal of an outcome with the information gain on updating your beliefs from one distribution to another.
An outcome which has probability 50% contains -log2(0.5) = 1 bit of self-information, and an outcome which has probability 75% contains -log2(0.75) ≈ 0.415 bits, which seems to be what you've calculated.
But since you're talking about the bits of information between two probabilities I think the situation you have in mind is that I've started with 50% credence in some proposition A, and ended up with 25% (or 75%). To calculate the information gained here, we need to find the entropy of our initial belief distribution, and subtract the entropy of our final beliefs. The entropy of our beliefs about A is H(p) = -(p log2(p) + (1 - p) log2(1 - p)), where p is our credence in A.
So for 50% -> 25% it's H(0.5) - H(0.25) = 1 - 0.81 ≈ 0.19 bits.
And for 50% -> 75% it's H(0.5) - H(0.75) = 1 - 0.81 ≈ 0.19 bits.
So your intuition is correct: these give the same answer.
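A minimal Python sketch of this calculation (my own illustration, assuming the binary-entropy framing above; the function names are just for clarity):

```python
import math

def self_information(p: float) -> float:
    """Surprisal of an outcome with probability p, in bits."""
    return -math.log2(p)

def binary_entropy(p: float) -> float:
    """Entropy of a belief assigning probability p to A and 1 - p to not-A, in bits."""
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(self_information(0.5))    # 1.0 bit
print(self_information(0.75))   # ~0.415 bits

# Information gained = initial entropy minus final entropy.
print(binary_entropy(0.5) - binary_entropy(0.25))  # ~0.189 bits
print(binary_entropy(0.5) - binary_entropy(0.75))  # ~0.189 bits (the same, by symmetry)
```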
comment by Nathan Young · 2024-04-24T22:28:02.763Z · LW(p) · GW(p)
I think I'm gonna start posting top blogposts to the main feed (mainly from dead writers or people I predict won't care)
comment by Nathan Young · 2022-10-03T11:35:14.980Z · LW(p) · GW(p)
If you or a partner have ever been pregnant and done research on what is helpful and harmful, feel free to link it here and I will add it to the LessWrong pregnancy wiki page.
https://www.lesswrong.com/tag/pregnancy [? · GW]
comment by Nathan Young · 2024-05-15T19:59:10.313Z · LW(p) · GW(p)
I've put together a big set of expert opinions on AI and the percentages I infer from them. I guess that some people will disagree with them.
I'd appreciate hearing your criticisms so I can improve them or fill in entries I'm missing.
https://docs.google.com/spreadsheets/d/1HH1cpD48BqNUA1TYB2KYamJwxluwiAEG24wGM2yoLJw/edit?usp=sharing
comment by Nathan Young · 2023-07-08T11:57:03.069Z · LW(p) · GW(p)
Epistemic status: written quickly, probably errors
Some thoughts on Manifund
- To me it seems like it will be the GiveDirectly of regranting (perhaps along with Nonlinear) rather than the GiveWell.
- It will be capable of rapidly scaling (especially if some regrantors are able to be paid for their time if they are dishing out a lot). It's not clear to me that scaling is the bottleneck for granting orgs.
- There are benefits to centralised/closed systems. Just as GiveWell makes choices for people and so delivers 10x returns, I expect that Manifund will do worse, on average, than OpenPhil, which has centralised systems and centralised theories of impact.
- Not everyone wants their grant to be public. If you have a sensitive idea (easy to imagine in AI) you may not want to publicly announce you're trying to get funding
- As with GiveDirectly, there is a real benefit of ~dignity/~agency. And I guess I think this is mostly vibes, but vibes matter. I can imagine crypto donors in particular finding a transparent system with individual portfolios much more attractive than OpenPhil. I can imagine that making a big difference on net.
- Notable that the donors aren't public. And I'm not being snide, I just mean it's interesting to me given the transparency of everything else.
- I love mechanism design. I love prizes, I love prediction markets. So I want this to work, but the base rate for clever mechanisms outcompeting bureaucratic ones seems low. But perhaps this finds a way to deliver and then outcompetes at scale (which would also be my theory for how GiveDirectly might end up outcompeting GiveWell)
Am I wrong?
comment by Nathan Young · 2024-09-24T11:00:22.144Z · LW(p) · GW(p)
What is the best way to take the average of three probabilities in the context below?
- There is information about a public figure
- Three people read this information and estimate the public figure's P(doom)
- (It's not actually p(doom), but it's their probability of something.)
- How do I then turn those three probabilities into a single one?
Thoughts.
I currently think the answer is something like: for probabilities a, b, c, the group estimate is 2^((log2 a + log2 b + log2 c)/3). This feels like a way to average the bits that each person gets from the text.
I could just take the geometric or arithmetic mean, but somehow that seems off to me. I guess I might write my intuitions for those here for correction.
Arithmetic mean: (a + b + c)/3. This feels like uncertain probabilities will dominate certain ones, eg (.0000001 + .25)/2 ≈ .125, which is about the same as if the first person had been either significantly more confident or significantly less. It seems bad to me for the final probability to be insensitive to very confident estimates when the probabilities are far apart.
On the other hand in terms of EV calculations, perhaps you want to consider the world where some event is .25 much more than where it is .0000001. I don't know. Is the correct frame possible worlds or the information each person brings to the table?
Geometric mean: (a * b * c)^(1/3). I dunno, sort of seems like a midpoint.
Okay so I then did some thinking. Ha! Whoops.
While trying to think intuitively about what the geometric mean was, I noticed that 2^((log2 a + log2 b + log2 c)/3) = 2^(log2(abc)/3) = 2^(log2((abc)^(1/3))) = (abc)^(1/3). So the information mean I thought seemed right is the geometric mean. I feel a bit embarrassed, but also happy to have tried to work it out.
This still doesn't tell me whether the arithmetic "possible worlds" intuition or the geometric "information" interpretation is correct.
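A quick numerical check of the above (an illustrative Python sketch, using the .0000001 and .25 example from earlier in this comment):

```python
import math

a, b = 0.0000001, 0.25

arithmetic = (a + b) / 2
geometric = (a * b) ** 0.5
information_mean = 2 ** ((math.log2(a) + math.log2(b)) / 2)

print(arithmetic)        # ~0.125 - dominated by the less confident forecast
print(geometric)         # ~0.000158
print(information_mean)  # ~0.000158 - identical to the geometric mean
```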
Any correction or models appreciated.
Replies from: niplav↑ comment by niplav · 2024-09-24T11:33:59.882Z · LW(p) · GW(p)
Relevant: When pooling forecasts, use the geometric mean of odds [? · GW].
Replies from: neel-nanda-1↑ comment by Neel Nanda (neel-nanda-1) · 2024-09-24T13:39:27.807Z · LW(p) · GW(p)
+1. Concretely this means converting every probability p into odds p/(1-p), taking the geometric mean of those, and then converting back to a probability.
Intuition pump: Person A says 0.1 and Person B says 0.9. This is symmetric, if we instead study the negation, they swap places, so any reasonable aggregation should give 0.5
Geometric mean does not, instead you get 0.3
Arithmetic gets 0.5, but is bad for the other reasons you noted
Geometric mean of odds is sqrt(1/9 * 9) = 1, which maps to a probability of 0.5, while also eg treating low probabilities fairly
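A small Python sketch of this pooling rule (the function name is mine, for illustration only):

```python
import math

def pool_geometric_mean_of_odds(probs):
    """Pool probabilities by taking the geometric mean of their odds."""
    odds = [p / (1 - p) for p in probs]
    pooled_odds = math.prod(odds) ** (1 / len(odds))
    return pooled_odds / (1 + pooled_odds)

print(pool_geometric_mean_of_odds([0.1, 0.9]))  # ~0.5, as in the intuition pump
print(math.prod([0.1, 0.9]) ** 0.5)             # ~0.3, the plain geometric mean of probabilities

# A very confident low forecast still pulls the pooled estimate down hard:
print(pool_geometric_mean_of_odds([0.0000001, 0.25, 0.25]))  # ~0.002
```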
comment by Nathan Young · 2024-08-22T10:55:08.409Z · LW(p) · GW(p)
Communication question.
How do I talk about low probability events in a sensical way?
eg "RFK Jr is very unlikely to win the presidency (0.001%)" This statement is ambiguous. Does it mean he's almost certain not to win or that the statement is almost certainly not true?
I know this sounds wonkish, but it's a question I come to quite often when writing. I like to use words but also include numbers in brackets or footnotes. But if there are several forecasts in one sentence with different directions it can be hard to understand.
"Kamala is a slight favourite in the election (54%), but some things are clearer. She'll probably win Virginia (83%) and probably loses North Carolina (43%)"
Something about the North Carolina subclause rubs me the wrong way. It requires several cycles to think "does the 43% mean the win or the loss". Options:
- As is
- "probably loses North Carolina (43% win chance)" - this takes up quite a lot of space while reading. I don't like things that break the flow
↑ comment by RHollerith (rhollerith_dot_com) · 2024-08-22T15:55:05.138Z · LW(p) · GW(p)
I only ever use words to express a probability when I don't want to take the time to figure out a number. I would write your example as, "Kamala will win the election with p = 54%, win Virginia with p = 83% and win North Carolina with p = 43%."
↑ comment by Dagon · 2024-08-22T15:25:34.806Z · LW(p) · GW(p)
Most communication questions will have different options depending on audience. Who are you communicating to, and how high-bandwidth is the discussion (lots of questions and back-and-forth with one or two people is VERY different from, say, posting on a public forum).
For your examples, it seems you're looking for one-shot outbound communication, to a relatively wide and mostly educated audience. I personally don't find the ambiguity in your examples particularly harmful, and any of them are probably acceptable.
If anyone complains or it bugs you, I'd EITHER go with
- an end-note that all percentages are chance-to-win
- a VERY short descriptor like (43% win) or even (43%W).
- reduce the text rather than the quantifier - "Kamala is 54% to win" without having to say that means "slight favorite".
↑ comment by Ben (ben-lang) · 2024-08-22T12:02:08.291Z · LW(p) · GW(p)
To me the more natural reading is "probably loses North Carolina (57%)".
57% being the chance that she "loses North Carolina". Whereas, as it is, you say "loses NC" but give the probability that she wins it. Which for me takes an extra scan to parse.
↑ comment by RamblinDash · 2024-08-22T11:35:23.177Z · LW(p) · GW(p)
Just move the percent? Instead of "RFK Jr is very unlikely to win the presidency (0.001%)", say "RFK Jr is very unlikely (0.001%) to win the presidency"
comment by Nathan Young · 2024-05-30T04:39:40.853Z · LW(p) · GW(p)
What are the LessWrong norms on promotion? Writing a post about my company seems off (but I think it could be useful to users). Should I write a quick take?
Replies from: kave↑ comment by kave · 2024-05-30T04:45:41.733Z · LW(p) · GW(p)
We have many org announcements on LessWrong! If your company is relevant to the interests of LessWrong, I would welcome an announcement post.
Org announcements are personal blog posts unless they are embedded inside of a good frontpage post.
comment by Nathan Young · 2024-04-26T16:35:45.870Z · LW(p) · GW(p)
Nathan and Carson's Manifold discussion.
As of the last edit my position is something like:
"Manifold could have handled this better, so as not to force everyone with large amounts of mana to have to do something urgently, when many were busy.
Beyond that they are attempting to satisfy two classes of people:
- People who played to donate can donate the full value of their investments
- People who played for fun now get the chance to turn their mana into money
To this end, and modulo the above hassle, this decision is good.
It is unclear to me whether there was an implicit promise that mana was worth 100 to the dollar. Manifold has made some small attempt to stick to this, but many untried avenues are available, as is acknowledging they will rectify the error if possible later. To the extent that there was a promise (uncertain) and no further attempt is made, I don't believe they really take that promise seriously.
It is unclear to me what I should take from this, though they have not acted as I would have expected them to. Who is wrong? Me, them, both of us? I am unsure."
Threaded discussion
Replies from: Nathan Young, Nathan Young, Nathan Young↑ comment by Nathan Young · 2024-04-26T16:38:52.092Z · LW(p) · GW(p)
Carson:
Ppl don't seem to understand that Manifold could literally not exist in a year or 2 if they don't find a product market fit
Replies from: Nathan Young
↑ comment by Nathan Young · 2024-04-26T16:44:29.782Z · LW(p) · GW(p)
Austin said [EA(p) · GW(p)] they have $1.5 million in the bank, vs $1.2 million of mana issued. The only outflows right now are to the charity programme, which even with a lot of outflows is only at $200k. They also recently raised at a $40 million valuation. I am confused by the suggestion that they are running out of money. They have a large user base that wants to bet and will do so at larger amounts if given the opportunity. I'm not so convinced that there is some tiny timeline here.
But if there is, then say so: "we know that we often talked about mana eventually being worth a dollar per 100 mana, but we printed too much and we're sorry. Here are some reasons we won't devalue in the future..."
↑ comment by James Grugett (james-grugett) · 2024-04-26T18:10:23.703Z · LW(p) · GW(p)
If we could push a button to raise at a reasonable valuation, we would do that and back the mana supply at the old rate. But it's not that easy. Raising takes time and is uncertain.
Carson's prior is right that VC backed companies can quickly die if they have no growth -- it can be very difficult to raise in that environment.
Replies from: Nathan Young↑ comment by Nathan Young · 2024-04-26T21:22:38.026Z · LW(p) · GW(p)
If that were true then there are many ways you could partially do that - eg give people a set of tokens to represent their mana at the time of the devaluation, and if at a future point you do raise, you could give them 10x those tokens back.
↑ comment by Nathan Young · 2024-04-26T16:36:48.073Z · LW(p) · GW(p)
seems like they are breaking an explicit contract (by pausing donations on ~a week's notice)
Replies from: Nathan Young↑ comment by Nathan Young · 2024-04-26T16:37:18.001Z · LW(p) · GW(p)
Carson's response:
weren't donations always flagged to be a temporary thing that may or may not continue to exist? I'm not inclined to search for links but that was my understanding.
Replies from: Nathan Young, Nathan Young
↑ comment by Nathan Young · 2024-04-26T16:42:29.341Z · LW(p) · GW(p)
From https://manifoldmarkets.notion.site/Charitable-donation-program-668d55f4ded147cf8cf1282a007fb005
"That being said, we will do everything we can to communicate to our users what our plans are for the future and work with anyone who has participated in our platform with the expectation of being able to donate mana earnings."
"everything we can" is not a couple of weeks notice and lot of hassle. Am I supposed to trust this organisation in future with my real money?
↑ comment by James Grugett (james-grugett) · 2024-04-26T18:17:43.102Z · LW(p) · GW(p)
We are trying our best to honor mana donations!
If you are inactive you have the rest of the year to donate at the old rate. If you want to donate all your investments without having to sell each individually, we are offering you a loan to do that.
We removed the charity cap of $10k of donations per month, which goes beyond what we previously communicated.
Replies from: Nathan Young, Nathan Young↑ comment by Nathan Young · 2024-04-26T21:28:26.133Z · LW(p) · GW(p)
Nevertheless lots of people were hassled. That has real costs, both to them and to you.
↑ comment by Nathan Young · 2024-04-26T20:18:34.198Z · LW(p) · GW(p)
I'm discussing with Carson. I might change my mind, but I don't know that I'll argue with both of you at once.
↑ comment by Nathan Young · 2024-04-26T16:41:14.268Z · LW(p) · GW(p)
Well they have a much larger donation than has been spent so there were ways to avoid this abrupt change:
"Manifold for Good has received grants totaling $500k from the Center for Effective Altruism (via the FTX Future Fund) to support our charitable endeavors."
Manifold has donated $200k so far, so there is $300k left. Why not at least say, "we will change the rate at which mana can be donated when we burn through this money"?
(via https://manifoldmarkets.notion.site/Charitable-donation-program-668d55f4ded147cf8cf1282a007fb005 )
↑ comment by Nathan Young · 2024-04-26T16:36:26.185Z · LW(p) · GW(p)
seems like breaking an implicit contract (that 100 mana was worth a dollar)
Replies from: Nathan Young↑ comment by Nathan Young · 2024-04-26T16:37:56.719Z · LW(p) · GW(p)
Carson's response:
There was no implicit contract that 100 mana was worth $1 IMO. This was explicitly not the case given CFTC restrictions?
Replies from: Nathan Young
↑ comment by Nathan Young · 2024-04-26T16:43:36.942Z · LW(p) · GW(p)
Austin took his salary in mana, an often-referred-to incentive for him to want mana to become valuable, presumably at that rate.
I recall comments like 'we pay 250 mana in referrals per user because we reckon we'd pay about $2.50', and likewise at the in-person mana auction. I'm not saying it was an explicit contract, but there were norms.
comment by Nathan Young · 2024-04-20T11:24:08.702Z · LW(p) · GW(p)
I recall a comment on the EA forum about Bostrom donating a lot to global dev work in the early days. I've looked for it for 10 minutes. Does anyone recall it or know where donations like this might be recorded?
comment by Nathan Young · 2023-09-26T16:49:14.079Z · LW(p) · GW(p)
No Petrov Day? I am sad.
Replies from: Dagon, Richard_Kennaway↑ comment by Richard_Kennaway · 2023-09-27T09:33:31.417Z · LW(p) · GW(p)
There is an ongoing Petrov Day poll. I don't know if everyone on LW is being polled.
comment by Nathan Young · 2023-07-19T10:43:42.312Z · LW(p) · GW(p)
Why you should be writing on the LessWrong wiki.
There is way too much to read here, but if we all took pieces and summarised them under their respective tags, then we'd have a much denser resource that would be easier to understand.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2023-07-19T12:00:58.583Z · LW(p) · GW(p)
There are currently no active editors, nor a way of directing sufficient-for-this-purpose traffic to new edits, and on the UI side there is no way to undo an edit, an essential wiki feature. So when you write a large wiki article, it's left as you wrote it, and it's not going to be improved. For posts, review related to tags happens through voting on the posts and their relevance, and even that is barely sufficient to get good, relevant posts visible in relation to tags. But at least there is some sort of signal.
I think your article on Futarchy [? · GW] illustrates this point. So a reasonable policy right now is to keep all tags short. But without established norms that live in minds of active editors, it's not going to be enforced, especially against large edits that are written well.
Replies from: Nathan Young↑ comment by Nathan Young · 2023-07-21T09:29:10.948Z · LW(p) · GW(p)
Thanks for replying.
Would you revert my Futarchy edits if you could?
I think reversion is kind of overpowered. I'd prefer reverting chunks.
I don't see the logic that says we should keep tags short. That just seems less useful
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2023-07-21T13:23:13.193Z · LW(p) · GW(p)
I don't see the logic that says we should keep tags short.
The argument is that with the current level of editor engagement, only short tags have any chance of actually getting reviewed and meaningfully changed if that's called for. It's not about the result of a particular change to the wiki, but about the place where the trajectory of similar changes plausibly takes it in the long run.
I think reversion is kind of overpowered.
A good thing about the reversion feature is that reversion can itself be reverted, and so it's not as final as when it's inconvenient to revert the reversions. This makes edit wars more efficient, more likely to converge on a consensus framing rather than with one side giving up in exhaustion.
Would you revert my Futarchy edits if you could?
The point is that absence of the feature makes engagement with the wiki less promising, as it becomes inconvenient and hence infeasible in practice to protect it in detail, and so less appealing to invest effort in it. I mentioned that as a hypothesis for explaining currently near-absent editor engagement, not as something relevant to reverting your edits.
Reverting your edits would follow from a norm that says such edits are inappropriate. I think this norm would be good, but it's also clearly not present, since there are no active editors to channel it. My opinion here only matters as much as the arguments around it convince you or other potential wiki editors, the fact that I hold this opinion shouldn't in itself have any weight. (So to be clear, currently I wouldn't revert the edits if I could. I would revert them only if there were active editors and they overall endorsed the norm of reverting such edits.)
comment by Nathan Young · 2024-06-30T12:48:32.097Z · LW(p) · GW(p)
Here is a 5 minute, spicy take of an alignment chart.
What do you disagree with?
To try and preempt some questions:
Why is rationalism neutral?
It seems pretty plausible to me that if AI is bad, then rationalism did a lot to educate and spur on AI development. Sorry folks.
Why are e/accs and EAs in the same group?
In the quick moments I took to make this, I found both EA and E/acc pretty hard to predict and pretty uncertain in overall impact across some range of forecasts.
Replies from: Zack_M_Davis, Seth Herd, quetzal_rainbow↑ comment by Zack_M_Davis · 2024-06-30T17:40:09.576Z · LW(p) · GW(p)
It seems pretty plausible to me that if AI is bad, then rationalism did a lot to educate and spur on AI development. Sorry folks.
What? This apology makes no sense. Of course rationalism is Lawful Neutral. The laws of cognition aren't, can't be, on anyone's side.
Replies from: programcrafter, Nathan Young↑ comment by ProgramCrafter (programcrafter) · 2024-07-25T12:28:26.808Z · LW(p) · GW(p)
I disagree with "of course". The laws of cognition aren't on any side, but human rationalists presumably share (at least some) human values and intend to advance them; insofar they are more successful than non-rationalists this qualifies as Good.
↑ comment by Nathan Young · 2024-07-01T10:20:47.146Z · LW(p) · GW(p)
So by my metric, Yudkowsky and lintamande's Dath Ilan isn't neutral, it's quite clearly lawful good, or attempting to be. And yet they care a lot about the laws of cognition.
So it seems to me that the laws of cognition can (should?) drive towards flourishing rather than pure knowledge increase. There might be things that we wish we didn't know for a bit. And ways to increase our strength to heal rather than our strength to harm.
To me it seems a better rationality would be lawful good.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2024-07-01T11:06:45.514Z · LW(p) · GW(p)
The laws of cognition are natural laws. Natural laws cannot possibly “drive towards flourishing” or toward anything else.
Attempting to make the laws of cognition “drive towards flourishing” inevitably breaks them.
Replies from: cubefox↑ comment by cubefox · 2024-07-06T11:54:38.156Z · LW(p) · GW(p)
A lot of problems arise from inaccurate beliefs instead of bad goals. E.g. suppose both the capitalists and the communists are in favor of flourishing, but they have different beliefs on how best to achieve this. Now if we pick a bad policy to optimize for a noble goal, bad things will likely still follow.
↑ comment by Seth Herd · 2024-06-30T22:53:46.690Z · LW(p) · GW(p)
Interesting. I always thought the D&D alignment chart was just a random first stab at quantizing a standard superficial Disney attitude toward ethics. This modification seems pretty sensible.
I think your good/evil axis is correct in terms of a deeper sense of the common terms. Evil people typically don't try to harm others, they just don't care - so their efforts to help themselves and their friends are prone to harm others. Being good means being good to everyone, not just your favorites. It's the size of your circle of compassion. Outright malignancy, cackling about others' suffering, is pretty eye-catching when it happens (and it does), but I'd say the vast majority of harm in the world has been done by people who are merely not much concerned with collateral damage. Thus, I think those people deserve the term evil, lest we focus on the wrong thing.
Predictable/unpredictable seems like a perfectly good alternate label for the chaotic/lawful. In some adversarial situations, it makes sense to be unpredictable.
One big question is whether you're referring to intentions or likely outcomes in your expected value (which I assume is expected value for all sentient beings or something). A purely selfish person without much ambition may actually be a net good in the world; they work for the benefit of themselves and those close enough to be critical for their wellbeing, and they don't risk causing a lot of harm since that might cause blowback. The same personality put in a position of power might do great harm, ordering an invasion or employee downsizing to benefit themselves and their family while greatly harming many.
Replies from: Nathan Young↑ comment by Nathan Young · 2024-07-01T10:17:51.720Z · LW(p) · GW(p)
Yeah I find the intention vs outcome thing difficult.
What do you think of "average expected value across small perturbations in your life"? Like if you accidentally hit Churchill with a car and so cause the UK to lose WW2, that feels notably less bad than deliberately trying to kill a much smaller number of people. In many nearby universes, you didn't kill Churchill, but in many nearby universes that person did kill all those people.
↑ comment by quetzal_rainbow · 2024-06-30T13:46:24.819Z · LW(p) · GW(p)
Chaotic Good: pivotal act
Lawful Evil: "situational awareness"
comment by Nathan Young · 2024-03-11T18:07:51.242Z · LW(p) · GW(p)
I did a quick community poll - Community norms poll (2 mins) [LW · GW]
I think it went pretty well. What do you think next steps could/should be?
Here are some points with a lot of agreement.
comment by Nathan Young · 2023-12-15T10:06:30.567Z · LW(p) · GW(p)
Things I would do dialogues about:
(Note I may change my mind during these discussions but if I do so I will say I have)
- Prediction is the right frame for most things
- Focus on world states not individual predictions
- Betting on wars is underrated
- The UK House of Lords is okay actually
- Immigration should be higher but in a way that doesn't annoy everyone and cause backlash
comment by Nathan Young · 2023-12-09T18:47:04.943Z · LW(p) · GW(p)
I appreciate reading women talk about what is good sex for them. But it's a pretty thin genre, especially writing with any kind of research behind it.
So I'd recommend this (though it is paywalled):
https://aella.substack.com/p/how-to-be-good-at-sex-starve-her?utm_source=profile&utm_medium=reader2
Also I subscribed to this for a while and it was useful:
https://start.omgyes.com/join
Replies from: rhollerith_dot_com↑ comment by RHollerith (rhollerith_dot_com) · 2023-12-09T18:58:40.553Z · LW(p) · GW(p)
You don't want to warn us that it is behind a paywall?
Replies from: Nathan Young↑ comment by Nathan Young · 2023-12-09T21:01:31.194Z · LW(p) · GW(p)
I didn't think it was relevant, but happy to add it.
comment by Nathan Young · 2023-10-30T10:00:43.395Z · LW(p) · GW(p)
I suggest that rats should use https://manifold.love/ as the Schelling dating app. It has long profiles and you can bet on other people getting on.
What more could you want!
I am somewhat biased because I've bet that it will be a moderate success.
comment by Nathan Young · 2023-09-11T14:03:25.274Z · LW(p) · GW(p)
Relative Value Widget
It gives you sets of donations and you have to choose which you prefer. If you want you can add more at the bottom.
comment by Nathan Young · 2023-08-31T15:44:57.683Z · LW(p) · GW(p)
Other things I would like to be able to express anonymously on individual comments:
- This is poorly framed - sometimes I neither want to agree nor disagree. I think the comment is orthogonal to reality, and agreement and disagreement both push away from truth.
- I don't know - if a comment is getting a lot of agreement/disagreement, it would also be interesting to see whether there is a lot of uncertainty.
comment by Nathan Young · 2022-09-23T17:47:07.108Z · LW(p) · GW(p)
It's a shame the wiki doesn't support the draft google-docs-like editor. I wish I could make in-line comments while writing.