Scott Alexander wrote some rationalish music a decade ago.
youtube.com/qraikoth
CronoDAS has uploaded a song, though it's not very rationalist.
youtube.com/CronoDAS
That was over two years ago.
Scott Alexander wrote some music a decade ago.
"Mary's Room" and "Somewhere Prior To The Rainbow" are most likely to make you cry again.
"Mathematical Pirate Shanty", if you can cry laughing.
Here, I'd plot the difference from gravitation at sea level.
I've never heard the US civil war described this way.
Thank you.
=
Should be '≠'.
taught
Should be 'taut'.
I've learned the maths before.
I think maybe I have no idea what kinetic energy is.
kinetic energy scales with the square of the speed
Why is this?
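One standard answer, not from the original thread: the square falls out of the work–energy relation, since kinetic energy is the work done accelerating a mass $m$ from rest to speed $v$.

```latex
% Work done accelerating mass m from rest to speed v:
W = \int F \, dx
  = \int m \frac{dv}{dt} \, dx
  = \int m \frac{dx}{dt} \, dv
  = \int_0^v m v' \, dv'
  = \tfrac{1}{2} m v^2
```

The $v'\,dv'$ integrand is where the square comes from: each extra unit of speed takes more work to add, because force is applied over a longer distance per unit time.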
Ideally you'd try to have a separate bakery with reversed gender roles.
I probably can't go to the October meetup, due to a coinciding event. How do I un-RSVP on Meetup?
Unrelatedly, I still think I have a good chance of making it next time.
Thank you. I was probably wrong.
In most examples, there's no common knowledge. In most examples, information is only transmitted one way. This does not allow for Aumann agreement. One side makes one update, then stops.
If someone tells me their assigned probability for something, that moves my probability very close to theirs, if I think they've seen nearly strictly better evidence about it than I have. I think this explains most of your examples, without referencing Aumann.
I think I don't understand what you mean. What's Aumann agreement? How's it a useful concept?
I thought the surprising thing about Aumann agreement was that ideal agents with shared priors will come to agree even if they can't intentionally exchange information, and can see only the other's assigned probability. [I checked Wikipedia; with common knowledge of each other's probabilistic belief about something, ideal agents with shared priors have the same belief. There's something about dialogues, but Aumann didn't prove that. I was wrong.]
Your post seems mostly about exchange of information. It doesn't matter which order you find your evidence, so ideal agents with shared priors that can exchange everything they've seen will always come to agree.
I don't think this requires understanding Aumann's theorem.
Is this wrong, or otherwise unimportant?
Thank you for responding.
It's possible for your team to lose five points, thereby giving the other team five points.
If the other team loses five points, then you gain five points.
Why is it not possible for the other team to lose five points without anything else happening? Where does the asymmetry come from?
It's
-25 -20 -5 0 20 25.
Why isn't it
-25 -20 -5 0 5 20 25?
- (-25) lose points and other team gains points
- (-20) other team gains points
- (-5) lose points and other team gets nothing
- (0) nobody gets anything
- (20) gain points
- (25) other team loses points and you gain points
Why no (+5)?
Maths is incomplete; it hasn't been proven inconsistent.
Is this wrong?
X is not a thing that can be other things
Y is not actually a thing that another thing can be
Why the "actually"?
I probably won't go to this.
I probably will go to the October 21st version. Is there some way I should formally communicate that?
Probably there should be a way to be more specific than "MAYBE".
I had to Google "RSVP".
Where should I send these complaints?
I no longer think it makes sense to clam up when you can't figure out how you originally came around to the view which you now hold
Either you can say "I came to this conclusion at some point, and I trust myself", or you should abandon the belief.
You don't need to know how or why your brain happened to contain the belief; you just need to know your own justification for believing it now. If you can't sufficiently justify your belief to yourself (even through things like "My-memory-of-myself-from-a-few-minutes-ago thinks it's likely" or "First-order intuition thinks it's likely"), you should abandon it (unless you're bad at this, which is probably not the case for most people who might try it).
From my perspective, I just had an original thought. If there's any writing about something related, or if someone else has something to add or subtract, I would probably very much like to read it.
by far the best impact-to-community health ratio ever
What does this mean?
When I read "Extravert", I felt happy about the uncommon spelling, which I also prefer.
Is this shared reality?
One-box only occurs in simulations, while two-box occurs in and out of simulations.
If I one-box in simulations, then Omega puts $0 in the first box, and I can't one-box.
If I two-box in simulations, then Omega puts $100 in the first box, so I may be in a simulation or not.
One-boxing kills me, so I two-box.
Either I've made a mistake, or you have. Where is it?
Thank you for the comparison.
Thank you.
Paul Graham says Robert Morris is never wrong.
He does this by qualifying statements (e.g. "I think"), not by saying fewer things.
"Your loved one has passed on"
I'm not sure I've ever used a euphemism (I don't know what a euphemism is).
When should I?
Pretend-obfuscation prevents common knowledge.
I don't understand.
I dislike it when fish suffer because I feel sad, and because other people want fish not to suffer for moral reasons.
A line is just a helix that doesn't curve. It works the same for any helix; it would be a great coincidence to get a line.
So we can't have fewer geniuses. More people means more people above 5 standard deviations (by definition?).
I tried to solve (n+1)^4 visually. I spent about five minutes, and was unable to visualise well enough.
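For reference, the expansion being visualised, with the row-four Pascal's-triangle coefficients 1, 4, 6, 4, 1:

```latex
(n+1)^4 = n^4 + 4n^3 + 6n^2 + 4n + 1
```

Geometrically, this is a 4-dimensional hypercube of side $n+1$ cut into one $n^4$ block, four $n^3$ slabs, six $n^2$ slabs, four $n$ edges, and one unit corner, which suggests why visualising it is hard.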
You might not survive as yourself, if you could see yourself.
Those who say "That which can be destroyed by the truth should be" may continue to walk the Path from there.
That's wonderful.
Adult IQ scores do too, I think.
You worded this badly, but I agree.
It is possible to read "you robbed a bank" without imagining robbing a bank. Just very hard, and maybe impossible if you're not prepared.
No; I agree with you.
"it might be ten"
diary
should be "dairy".
disclaimer
This might be the least disclamatory disclaimer I've ever read.
I'd even call it a claimer.
I think that list would be very helpful for me.
Can you share a representative sample of your "list"? Or send the whole thing, if you have it written down.
If people are conforming rationally, then the opinion of 15 other subjects should be substantially stronger evidence than the opinion of 3 other subjects.
This doesn't seem true; the subjects' answers correlate pretty strongly, so more of them wouldn't provide much additional evidence.
Adding a single dissenter—just one other person who gives the correct answer, or even an incorrect answer that’s different from the group’s incorrect answer—reduces conformity very sharply, down to 5–10% of subjects.
This is irrational, though.
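The correlation point above (that 15 agreeing subjects should add little evidence over 3, if later subjects mostly copy earlier ones) can be sketched with a toy Bayesian model. The model, its `copy_prob` and `accuracy` parameters, and the numbers are all illustrative assumptions of mine, not taken from the Asch data:

```python
# Toy model of correlated testimony: the first subject judges independently;
# each later subject copies the first with probability copy_prob, and otherwise
# judges independently with the given accuracy (binary answer assumed).

def posterior_correct(n_agree, copy_prob, accuracy, prior=0.5):
    """P(answer correct | n_agree subjects all gave it)."""
    def likelihood(acc):
        # First subject gives this answer with prob acc; each later subject
        # agrees either by copying or by matching independent judgment.
        follow = copy_prob + (1 - copy_prob) * acc
        return acc * follow ** (n_agree - 1)

    num = likelihood(accuracy) * prior
    den = num + likelihood(1 - accuracy) * (1 - prior)
    return num / den

for n in (3, 15):
    print(n, round(posterior_correct(n, copy_prob=0.9, accuracy=0.8), 3))
# 3 0.819
# 15 0.906
```

With heavy copying, going from 3 to 15 conformers only roughly doubles the odds; with independent subjects (`copy_prob=0`) the same jump would multiply the odds by millions. So a rational observer who suspects copying gets far less from the larger group than naive counting suggests.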
The simulations you made are much more complicated than physics. I think almost any simulation would have to be, if it's to show an apple with any reasonable amount of computing power (if there's room for an "unreasonable" amount, there's probably room for a lot of apples).
Edit: is this how links are supposed to be used?
I think it could deduce it's an image of a sparse 3D space with 3 channels. From there, it could deduce a lot, but maybe not that the channels are activated by certain frequencies.
Thank you for the consideration on center-of-mass.
You might need a very strong superintelligence, or one with a lot of time. But I think the correct hypothesis has extremely high evidence compared to others, and isn't that complicated. If it has enough thought to locate the hypothesis, it has enough to find that it's better than almost any other.
Newtonian Mechanics or something a bit closer would rise very near the top of the list. It's possible even the most likely possibilities wouldn't be given much probability, but it would at least be somewhat modal. [Is there a continuous analogue for the mode? I don't know what softmax is.]
Thank you for the question. I understand better, now.
Anthropics seem very important here; most laws of physics probably don't form people, especially people who make cameras, then AGI, then give it only a few images which don't look very optimized, or like they're of a much-optimized world.
A limit on speed can be deduced; if intelligence enough to make AGI is possible, probably coordination's already taken over the universe and made it to something's liking, unless it's slow for some reason. The AI has probably been designed quite inefficiently; not what you'd expect from intelligent design.
I could see how an AI might deduce that “objects” exist, and that they exist in three dimensions, from 2 images where the apple has slightly rotated.
I'm pretty sure this one's deducible from one image; the apple has lots of shadows and refraction. The indentations have lighting the other way.
It could find that the light source is very far above and a few degrees in width, and therefore very large, along with some lesser light from the upper 180°. The apple is falling; universal laws are very common; the sun is falling. The water on the apple shows refraction; this explains the sky (this probably all takes place in a fluid; air resistance, wind).
The apple is falling, and the grass seems affected by gravity too; why isn't the grass falling the same way? It is.
The grass is pointing up, but all level with other grass; probably the upper part of the ground is affected by gravity, so it flattens.
The camera is aligned almost exactly with the direction the apple is falling.
In three frames, maybe it could see grass bounce off each other? It could at least see elasticity. I don't know much of the laws of motion it could find from this, but probably not none. Angular movement of the apple also seems important.
Light is very fast; laws of motion; light is very light (not massive) because it goes fast but doesn't visibly move things.
A superintelligence could probably get farther than this; very large, bright, far up object is probably evidence of attraction as well.
Simulation doesn't make this much harder; the models for apple and grass came from somewhere. Occam's Razor is weakened, not strengthened, because simulators have strong computational constraints, probably not many orders of magnitude beyond what the AI was given to think with.
Thank you.
Upon rereading to find where I didn't understand, I found I didn't lose much of the text, and all I had previously lost was unimportant.
My happiness is less, but knowing feels better.
And (10, 3) has top 3 tokens:
I think these are top 10 tokens.
I always feel happy when I read alignment posts I don't understand, for some reason.
I seem very similar to you.