Comments

Comment by qvalq (qv^!q) on LessWrong's (first) album: I Have Been A Good Bing · 2024-04-02T02:43:00.594Z · LW · GW

Scott Alexander wrote some rationalish music a decade ago. 
youtube.com/qraikoth

 

CronoDAS has uploaded a song, though it's not especially rationalist.
youtube.com/CronoDAS

Comment by qvalq (qv^!q) on LessWrong's (first) album: I Have Been A Good Bing · 2024-04-02T00:45:17.614Z · LW · GW

Was over two years ago.

Comment by qvalq (qv^!q) on LessWrong's (first) album: I Have Been A Good Bing · 2024-04-02T00:42:17.469Z · LW · GW

Scott Alexander wrote some music a decade ago.

youtube.com/qraikoth

 

"Mary's Room" and "Somewhere Prior To The Rainbow" are most likely to make you cry again. 
"Mathematical Pirate Shanty", if you can cry laughing.

Comment by qvalq (qv^!q) on Using axis lines for good or evil · 2024-03-21T02:42:52.167Z · LW · GW

Here, I'd plot the difference from gravitational acceleration at sea level.

Comment by qvalq (qv^!q) on The Worst Form Of Government (Except For Everything Else We've Tried) · 2024-03-18T18:52:07.549Z · LW · GW

I've never heard the US civil war described this way.
Thank you.

Comment by qvalq (qv^!q) on Locating My Eyes (Part 3 of "The Sense of Physical Necessity") · 2024-03-13T23:17:30.268Z · LW · GW

=

Should be '≠'.

Comment by qvalq (qv^!q) on The Parable Of The Fallen Pendulum - Part 2 · 2024-03-13T18:41:12.605Z · LW · GW

taught

Should be 'taut'.

Comment by qvalq (qv^!q) on An Actually Intuitive Explanation of the Oberth Effect · 2024-01-17T15:25:09.296Z · LW · GW

I've learned the maths before.

 

I think maybe I have no idea what kinetic energy is.

Comment by qvalq (qv^!q) on An Actually Intuitive Explanation of the Oberth Effect · 2024-01-11T19:43:03.874Z · LW · GW

kinetic energy scales with the square of the speed

Why is this?
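
For reference, the standard derivation as I remember it (a sketch, not from the post): kinetic energy is the work done to accelerate a mass from rest to speed $v$, and the work integral is what produces the square:

$$E_k = \int F\,dx = \int m\,\frac{dv}{dt}\,dx = \int_0^v m\,v'\,dv' = \tfrac{1}{2}mv^2.$$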

Comment by qvalq (qv^!q) on Is being sexy for your homies? · 2023-12-19T23:26:27.687Z · LW · GW

Ideally you'd try to have a separate bakery with reversed gender roles.

Comment by qvalq (qv^!q) on Ann Arbor, Michigan, USA – ACX Meetups Everywhere Fall 2023 · 2023-10-05T03:29:42.904Z · LW · GW

I probably can't go to the October meetup, due to a scheduling coincidence. How do I unRSVP on Meetup?

Unrelatedly, I still think I have a good chance of making it next time.

Comment by qvalq (qv^!q) on Aumann-agreement is common · 2023-09-28T08:06:58.600Z · LW · GW

Thank you. I was probably wrong.

In most of your examples there's no common knowledge: information is transmitted only one way, so one side makes a single update and then stops. That doesn't allow for Aumann agreement.
If someone tells me the probability they assign to something, and I think they've seen nearly strictly better evidence about it than I have, my probability moves very close to theirs. I think this explains most of your examples without referencing Aumann.

I think I don't understand what you mean. What's Aumann agreement? How's it a useful concept?

Comment by qvalq (qv^!q) on Aumann-agreement is common · 2023-09-28T07:29:59.951Z · LW · GW

I thought the surprising thing about Aumann agreement was that ideal agents with shared priors will come to agree even if they can't intentionally exchange information, and can see only the other's assigned probability. [I checked Wikipedia; with common knowledge of each other's probabilistic belief about something, ideal agents with shared priors have the same belief. There's something about dialogues, but Aumann didn't prove that. I was wrong.]
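
For reference, my paraphrase of the usual statement (Aumann 1976; the notation is mine): if two agents share a prior $P$ and condition on their private information $\mathcal{I}_1, \mathcal{I}_2$, then

$$q_1 = P(A \mid \mathcal{I}_1),\quad q_2 = P(A \mid \mathcal{I}_2),\quad (q_1, q_2)\ \text{common knowledge} \implies q_1 = q_2.$$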

Your post seems mostly about exchange of information. It doesn't matter which order you find your evidence, so ideal agents with shared priors that can exchange everything they've seen will always come to agree.
I don't think this requires understanding Aumann's theorem.

 

Is this wrong, or otherwise unimportant?

Comment by qvalq (qv^!q) on Hertford, Sourbut (rationality lessons from University Challenge) · 2023-09-17T06:18:06.599Z · LW · GW

Thank you for responding.

It's possible for your team to lose five points, thereby giving the other team five points.
If the other team loses five points, then you gain five points.
Why is it not possible for the other team to lose five points without anything else happening? Where does the asymmetry come from?

It's
-25 -20 -5 0 20 25.
Why isn't it
-25 -20 -5 0 5 20 25?

Comment by qvalq (qv^!q) on Hertford, Sourbut (rationality lessons from University Challenge) · 2023-09-12T01:19:52.650Z · LW · GW
  • (-25) lose points and other team gains points
  • (-20) other team gains points
  • (-5) lose points and other team gets nothing
  • (0) nobody gets anything
  • (20) gain points
  • (25) other team loses points and you gain points

Why no (+5)?

Comment by qvalq (qv^!q) on Hard Questions Are Language Bugs · 2023-09-11T23:45:40.710Z · LW · GW

Maths is incomplete. Inconsistency isn't proven.

Is this wrong?

Comment by qvalq (qv^!q) on Hard Questions Are Language Bugs · 2023-09-11T23:41:03.464Z · LW · GW

X is not a thing that can be other things

Y is not actually a thing that another thing can be

Why the "actually"?

Comment by qvalq (qv^!q) on Ann Arbor, Michigan, USA – ACX Meetups Everywhere Fall 2023 · 2023-09-11T03:44:27.421Z · LW · GW

I probably won't go to this.
I probably will go to the October 21st version. Is there some way I should formally communicate that?

Probably there should be a way to be more specific than "MAYBE".
I had to Google "RSVP".
Where should I direct these complaints?

Comment by qvalq (qv^!q) on Mistakes with Conservation of Expected Evidence · 2023-09-11T03:08:08.697Z · LW · GW

I no longer think it makes sense to clam up when you can't figure out how you originally came around to the view which you now hold

Either you can say "I came to this conclusion at some point, and I trust myself", or you should abandon the belief.

You don't need to know how or why your brain happened to contain the belief; you just need to know your own justification for believing it now. If you can't sufficiently justify your belief to yourself (even through things like "My-memory-of-myself-from-a-few-minutes-ago thinks it's likely" or "First-order intuition thinks it's likely"), you should abandon it (unless you're bad at this, which is probably not the case for most people who might try it).

From my perspective, I just had an original thought. If there's any writing about something related, or if someone else has something to add or subtract, I would probably very much like to read it.

Comment by qvalq (qv^!q) on Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong · 2023-08-30T02:24:57.474Z · LW · GW

by far the best impact-to-community health ratio ever

What does this mean?

Comment by qvalq (qv^!q) on Shared reality: a key driver of human behavior · 2023-08-29T13:23:08.906Z · LW · GW

When I read "Extravert", I felt happy about the uncommon spelling, which I also prefer.

Is this shared reality?

Comment by qvalq (qv^!q) on Newcomb Variant · 2023-08-29T10:12:57.982Z · LW · GW

One-box only occurs in simulations, while two-box occurs in and out of simulations. 

If I one-box in simulations, then Omega puts $0 in the first box, and I can't one-box.

If I two-box in simulations, then Omega puts $100 in the first box, so I may be in a simulation or not.

One-boxing kills me, so I two-box.

 

Either I've made a mistake, or you have. Where is it?

Comment by qvalq (qv^!q) on Mnestics · 2023-08-08T13:16:09.907Z · LW · GW

Thank you for the comparison.

Comment by qvalq (qv^!q) on "Is There Anything That's Worth More" · 2023-08-07T14:53:07.315Z · LW · GW

Thank you.

Comment by qvalq (qv^!q) on Say Wrong Things · 2023-08-03T08:29:40.327Z · LW · GW

Paul Graham says Robert Morris is never wrong.

He does this by qualifying statements (e.g. "I think"), not by saying fewer things.

Comment by qvalq (qv^!q) on Lack of Social Grace Is an Epistemic Virtue · 2023-08-03T08:15:10.857Z · LW · GW

"Your loved one has passed on"

I'm not sure I've ever used a euphemism (I don't know what a euphemism is).

When should I?

Comment by qvalq (qv^!q) on Lack of Social Grace Is an Epistemic Virtue · 2023-08-03T08:12:17.195Z · LW · GW

Pretend-obfuscation prevents common knowledge.

Comment by qvalq (qv^!q) on "Is There Anything That's Worth More" · 2023-08-03T07:48:06.904Z · LW · GW

I don't understand.

Comment by qvalq (qv^!q) on Underwater Torture Chambers: The Horror Of Fish Farming · 2023-07-27T17:37:52.125Z · LW · GW

I dislike when fish suffer because I feel sad, and because other people want fish to not suffer for moral reasons.

Comment by qvalq (qv^!q) on GPT-2's positional embedding matrix is a helix · 2023-07-27T17:13:24.277Z · LW · GW

A line is just a helix that doesn't curve. It works the same for any helix; it would be a great coincidence to get a line.
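
A sketch of the geometry (standard facts, not from the post): a circular helix of radius $r$ and pitch parameter $c$,

$$\gamma(t) = (r\cos t,\; r\sin t,\; ct), \qquad \kappa = \frac{r}{r^2 + c^2},$$

has constant curvature $\kappa$, and $r = 0$ gives $\kappa = 0$: the curve degenerates to the straight line $(0, 0, ct)$.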

Comment by qvalq (qv^!q) on Childhoods of exceptional people · 2023-07-27T14:40:16.828Z · LW · GW

So we can't have fewer geniuses. More people means more people above 5 standard deviations (by definition?).
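
A minimal sketch of the arithmetic (my own illustration; the normality assumption and the 5-SD cutoff are mine, not the post's):

```python
# If "genius" means 5 standard deviations above the mean on a normally
# distributed trait, the qualifying *fraction* is fixed, so the
# qualifying *count* scales directly with population size.
from scipy.stats import norm

fraction = norm.sf(5)  # P(Z > 5) for a standard normal, about 2.9e-7
for population in (1e8, 1e9, 8e9):
    print(f"{population:.0e} people -> ~{fraction * population:,.0f} above 5 SD")
```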

Comment by qvalq (qv^!q) on What Are You Tracking In Your Head? · 2023-07-05T13:19:13.822Z · LW · GW

I tried to expand (n+1)^4 visually. I spent about five minutes, and was unable to visualise well enough.
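
For reference, the expansion being visualised (binomial theorem; row 4 of Pascal's triangle gives the coefficients):

$$(n+1)^4 = n^4 + 4n^3 + 6n^2 + 4n + 1.$$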

Comment by qvalq (qv^!q) on The "Adults in the Room" · 2023-05-10T17:32:33.289Z · LW · GW

You might not survive as yourself, if you could see yourself.

Those who say "That which can be destroyed by the truth should be" may continue to walk the Path from there.

That's wonderful.

Comment by qvalq (qv^!q) on Crazy Ideas Thread · 2023-05-10T16:23:18.548Z · LW · GW

Adult IQ scores do too, I think.

Comment by qvalq (qv^!q) on Crazy Ideas Thread · 2023-05-10T16:19:51.333Z · LW · GW

You worded this badly, but I agree. 

It is possible to read "you robbed a bank" without imagining robbing a bank. Just very hard, and maybe impossible if you're unprepared.

Comment by qvalq (qv^!q) on Geoff Hinton Quits Google · 2023-05-10T12:35:53.093Z · LW · GW

No; I agree with you.

Comment by qvalq (qv^!q) on Geoff Hinton Quits Google · 2023-05-03T14:00:53.275Z · LW · GW

"it might be ten"

Comment by qvalq (qv^!q) on Mini thoughts on mintheism · 2023-04-29T22:24:42.932Z · LW · GW

diary

should be "dairy".

Comment by qvalq (qv^!q) on $250 prize for checking Jake Cannell's Brain Efficiency · 2023-04-28T17:24:40.406Z · LW · GW

disclaimer

This might be the least disclamatory disclaimer I've ever read.

I'd even call it a claimer.

Comment by qvalq (qv^!q) on Tuning your Cognitive Strategies · 2023-04-28T16:51:47.975Z · LW · GW

I think that list would be very helpful for me.

Can you form a representative sample of your "list"? Or send the whole thing, if you have it written down.

Comment by qvalq (qv^!q) on Asch's Conformity Experiment · 2023-04-28T16:20:43.102Z · LW · GW

If people are conforming rationally, then the opinion of 15 other subjects should be substantially stronger evidence than the opinion of 3 other subjects.

This doesn't seem true; the subjects' answers correlate pretty strongly, so more of them wouldn't provide much additional evidence.

Adding a single dissenter—just one other person who gives the correct answer, or even an incorrect answer that’s different from the group’s incorrect answer—reduces conformity very sharply, down to 5–10% of subjects.

This is irrational, though.
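
A toy illustration of the correlation point (entirely my own model and numbers, not from the post or the experiment): if conformers answer independently, 15 of them are far stronger evidence than 3; if they all copy one common source, the count adds nothing.

```python
# Bayes factors from k unanimous reports, under independence
# vs. perfect correlation (all reporters copying a single source).
p = 0.9  # assumed accuracy of one independent reporter

def bf_independent(k: int) -> float:
    # Independent reporters: each agreement multiplies the likelihood ratio.
    return (p / (1 - p)) ** k

def bf_correlated(k: int) -> float:
    # Perfectly correlated reporters collapse to one effective reporter.
    return p / (1 - p)

for k in (3, 15):
    print(f"k={k}: independent={bf_independent(k):.3g}, correlated={bf_correlated(k):.3g}")
```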

Comment by qvalq (qv^!q) on Could a superintelligence deduce general relativity from a falling apple? An investigation · 2023-04-26T08:29:34.354Z · LW · GW

The simulations you made are much more complicated than physics. I think almost any simulation would have to be, if it showed an apple with any reasonable amount of computing power (if there's room for an "unreasonable" amount, there's probably room for a lot of apples).

Edit: is this how links are supposed to be used?

Comment by qvalq (qv^!q) on Could a superintelligence deduce general relativity from a falling apple? An investigation · 2023-04-26T08:19:42.024Z · LW · GW

I think it could deduce it's an image of a sparse 3D space with 3 channels. From there, it could deduce a lot, but maybe not that the channels are activated by certain frequencies.

Comment by qvalq (qv^!q) on Could a superintelligence deduce general relativity from a falling apple? An investigation · 2023-04-26T08:16:23.535Z · LW · GW

Thank you for the consideration on center-of-mass.

Comment by qvalq (qv^!q) on Could a superintelligence deduce general relativity from a falling apple? An investigation · 2023-04-24T11:55:05.477Z · LW · GW

You might need a very strong superintelligence, or one with a lot of time. But I think the correct hypothesis has extremely high evidence compared to others, and isn't that complicated. If it has enough thought to locate the hypothesis, it has enough to find that it's better than almost any other.

Newtonian Mechanics or something a bit closer would rise very near the top of the list. It's possible even the most likely possibilities wouldn't be given much probability, but it would at least be somewhat modal. [Is there a continuous analogue for the mode? I don't know what softmax is.]
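
On the bracketed question, for reference (assuming the usual definition, and a unique maximum): softmax with temperature $T$ is

$$\mathrm{softmax}_T(x)_i = \frac{e^{x_i/T}}{\sum_j e^{x_j/T}},$$

and as $T \to 0$ it puts all its weight on the largest $x_i$, so it can be read as a continuous relaxation of taking the mode/argmax.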

Thank you for the question. I understand better, now.

Comment by qvalq (qv^!q) on Could a superintelligence deduce general relativity from a falling apple? An investigation · 2023-04-23T15:02:47.138Z · LW · GW

Anthropics seems very important here; most laws of physics probably don't produce people, especially people who make cameras and then AGI, and then give it only a few images which don't look very optimized, or like they're of a much-optimized world.

A limit on speed can be deduced: if enough intelligence to make AGI is possible, coordination has probably already taken over the universe and made it to something's liking, unless it's slow for some reason. The AI has probably been designed quite inefficiently; not what you'd expect from intelligent design.

I could see how an AI might deduce that “objects” exist, and that they exist in three dimensions, from 2 images where the apple has slightly rotated.

I'm pretty sure this one's deducible from one image; the apple has lots of shadows and refraction. The indentations have lighting the other way.

 

It could find that the light source is very far above and a few degrees in width, and therefore very large, along with some lesser light from the upper 180°. The apple is falling; universal laws are very common; the sun is falling. The water on the apple shows refraction; this explains the sky (this probably all takes place in a fluid; air resistance, wind).

The apple is falling, and the grass seems affected by gravity too; why isn't the grass falling the same way? It is. 

The grass is pointing up, but all level with other grass; probably the upper part of the ground is affected by gravity, so it flattens.

The camera is aligned almost exactly with the direction the apple is falling.

In three frames, maybe it could see blades of grass bounce off each other? It could at least see elasticity. I don't know how much of the laws of motion it could find from this, but probably not none. Angular movement of the apple also seems important.

Light is very fast; laws of motion; light is very light (not massive) because it goes fast but doesn't visibly move things.

 

A superintelligence could probably get farther than this; very large, bright, far up object is probably evidence of attraction as well.

 

Simulation doesn't make this much harder; the models for apple and grass came from somewhere. Occam's Razor is weakened, not strengthened, because simulators have strong computational constraints, probably not many orders of magnitude beyond what the AI was given to think with.

 

 

Thank you.

Comment by qvalq (qv^!q) on Mechanistically interpreting time in GPT-2 small · 2023-04-22T23:40:23.099Z · LW · GW

Upon rereading to find where I didn't understand, I found I hadn't missed much of the text, and all I had previously missed was unimportant.

 

My happiness is less, but knowing feels better.

Comment by qvalq (qv^!q) on Mechanistically interpreting time in GPT-2 small · 2023-04-22T23:24:21.157Z · LW · GW

And (10, 3) has top 3 tokens:

I think these are top 10 tokens.

Comment by qvalq (qv^!q) on Mechanistically interpreting time in GPT-2 small · 2023-04-20T18:11:25.676Z · LW · GW

I always feel happy when I read alignment posts I don't understand, for some reason.

Comment by qvalq (qv^!q) on obvious epiphanies · 2023-04-17T17:02:23.208Z · LW · GW

I seem very similar to you.