Comment by wizzwizz4 on Crisis of Faith · 2019-10-24T14:40:26.384Z · score: 1 (1 votes) · LW · GW

If I believe something that's wrong, it's probably because I haven't actually thought about it – merely about how nice it would be if it were true, or about how I ought to believe it – or I've just been rehearsing what I've read in books about how one should think about it. A few uninterrupted hours is probably enough to get the process of actually thinking about it started.

Comment by wizzwizz4 on How does OpenAI's language model affect our AI timeline estimates? · 2019-10-20T18:32:40.187Z · score: 2 (2 votes) · LW · GW

[…] is a nice demonstration of GPT2 that allows you to select the inputs freely.

Comment by wizzwizz4 on But There's Still A Chance, Right? · 2019-10-11T18:16:14.642Z · score: 1 (1 votes) · LW · GW

It's not guaranteed… but that's pedantry-about-infinity: the chance of it _not_ happening once is zero, the chance of it _not_ happening twice is zero, and so on, and so on.
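A quick numeric sketch of the pedantry-about-infinity point (the per-trial probability and the trial counts here are illustrative assumptions, not from the original argument): for any fixed per-trial probability p > 0, the chance of the event never occurring across n independent trials is (1 − p)^n, which shrinks towards zero as n grows.

```python
p = 1e-9  # assumed tiny per-trial probability, purely illustrative

# Probability the event *never* happens across n independent trials.
for n in (10**9, 10**10, 10**11):
    print(n, (1 - p) ** n)
```

With infinitely many trials the survival probability is exactly zero, however small p is.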

Comment by wizzwizz4 on But There's Still A Chance, Right? · 2019-10-11T18:13:27.325Z · score: 1 (1 votes) · LW · GW
([…] insentient nature) operates by cause and effect, so there is no such thing as the million other paths evolution could have followed to make men and chimps different.

You can't assume the universe is deterministic some of the time and not at other times. According to chaos theory, a tiny change could've caused those million other paths. (But the probability of that, conditional on no Descartes' Demon or similar, is zero, since no events that _didn't_ occur have occurred.)

Many, many different possible sets of gene sequences would explain the world in which we live, therefore we should count them.

Comment by wizzwizz4 on Are Your Enemies Innately Evil? · 2019-10-11T18:02:01.873Z · score: 1 (1 votes) · LW · GW

Well… if it caused the families to survive better, then maybe.

Comment by wizzwizz4 on Ethnic Tension And Meaningless Arguments · 2019-09-07T16:48:54.528Z · score: 1 (1 votes) · LW · GW

The "worst argument in the world" link is broken. It should link to the previous article, but instead links to an authentication page.

Comment by wizzwizz4 on The Crackpot Offer · 2019-08-08T12:52:11.476Z · score: 1 (1 votes) · LW · GW

That's imaginary mass implying superluminal velocity with real energy. Similar, but the other way around.

Comment by wizzwizz4 on The Importance of Saying "Oops" · 2019-08-08T11:00:17.488Z · score: 1 (1 votes) · LW · GW

I've done it. I've zig-zagged on at least three things, where if I'd had a higher change-my-mind threshold I wouldn't've. Though, I suppose each of those instances was due to catastrophic forgetting, and not actually reasoned arguments.

Comment by wizzwizz4 on Zombies! Zombies? · 2019-08-05T14:23:45.464Z · score: 1 (1 votes) · LW · GW

This is incoherent, reader.

Comment by wizzwizz4 on The Tails Coming Apart As Metaphor For Life · 2019-08-02T21:12:44.460Z · score: 1 (1 votes) · LW · GW

So, what utility functions would you give a paperclip maximiser?

Comment by wizzwizz4 on Keeping Beliefs Cruxy · 2019-07-28T15:40:07.562Z · score: 7 (4 votes) · LW · GW

I think you should link to something to do with double crux the first time you mention it; it took me a while to track down a page explaining it.

Comment by wizzwizz4 on The Martial Art of Rationality · 2019-07-15T21:22:18.592Z · score: 1 (1 votes) · LW · GW

I'm not so sure. Such "probabilistic" tests are good for aggregate testing, but not for personal testing. We want to minimise false positives and false negatives.

Comment by wizzwizz4 on Bayesian Judo · 2019-07-14T20:25:06.374Z · score: 1 (1 votes) · LW · GW

Did you mean: Hold sensible priors

Comment by wizzwizz4 on Beyond the Reach of God · 2019-07-14T11:45:22.844Z · score: 1 (1 votes) · LW · GW

Iff the universe is Turing complete. Have we proven that yet?

Comment by wizzwizz4 on Beyond the Reach of God · 2019-07-14T11:35:30.759Z · score: 1 (1 votes) · LW · GW

Unless you make one more Horcrux each day than you did the day before, that's never going to happen. And there's still the finite, fixed, non-zero chance of the magic widget being destroyed and all of your backups failing simultaneously, or of the false vacuum collapsing – unless you seriously think you can keep inventing completely novel, non-duplicate ways to prevent your death at a constantly-accelerating rate, many of which can avoid hypothetical universe-ending apocalypses.

Unless we find a way to escape the known universe, or discover something similarly munchkinneritorial, we're all going to die.

Comment by wizzwizz4 on Beyond the Reach of God · 2019-07-14T11:15:35.400Z · score: 1 (1 votes) · LW · GW

Eliezer is an atheist. But this article doesn't say "there is no God"; it says "act as though God won't save you".

Comment by wizzwizz4 on Shut up and do the impossible! · 2019-07-13T20:59:15.326Z · score: 1 (1 votes) · LW · GW

I suspect a Game and Watch wouldn't permit this. Then again, if you were letting the AI control button pushers the button pushers probably could, and if you were letting it run code on the Game and Watch's microprocessor it could probably do something bad.

I failed to come up with a counterexample.

Comment by wizzwizz4 on On Expressing Your Concerns · 2019-07-08T17:09:10.620Z · score: 1 (1 votes) · LW · GW

That sounds like an H. G. Wells story (you can listen to it here).

Comment by wizzwizz4 on Evaporative Cooling of Group Beliefs · 2019-07-08T12:59:47.826Z · score: 1 (1 votes) · LW · GW

I'm not certain that that would be wrong. From the observations they have access to, they have no way of telling the difference between different points in the night.

If they can see the moon, however, this changes. Similarly, if they can wait an hour and see what changes. Similarly if they can see the stars, and know roughly what month it is. Because it's not just the most extreme people who'll update their beliefs.

(By the way; the averaging thing only works if the individuals don't communicate about their guesses, which means that this isn't in any way an accurate representation of the behaviour described in this article!)

Comment by wizzwizz4 on Evaporative Cooling of Group Beliefs · 2019-07-08T12:37:46.075Z · score: 1 (1 votes) · LW · GW

However, it will probably end up with a different set of irrationalities. We haven't got any examples of a near-human intelligence that's inherently rational, and I'd conjecture it's unlikely that our first few attempts will succeed in this.

Comment by wizzwizz4 on Evaporative Cooling of Group Beliefs · 2019-07-08T12:28:14.651Z · score: 1 (1 votes) · LW · GW

Y2K didn't end in catastrophe, and lots and lots of time and money was put into stopping it. I don't think, from this, you can draw the conclusion that it wouldn't have ended in catastrophe.

And anyway, Y2K is completely unrelated to climate change. Are you trying to argue that, since everybody was worried about a thing that ultimately wasn't much of a problem, people worrying about something is evidence against it being a problem? That sounds like a terrible conclusion to draw.

Comment by wizzwizz4 on Uncritical Supercriticality · 2019-07-08T11:33:09.288Z · score: 1 (1 votes) · LW · GW

And the answer is that you don't; you've never tried hard enough, because you require infinite evidence to reach certainty. A good rule of thumb is that you can call it off when there are better things to do, but you can never promote your 93% certainty to 100% by virtue of having "done enough".

Comment by wizzwizz4 on Guessing the Teacher's Password · 2019-07-03T20:52:33.622Z · score: 1 (1 votes) · LW · GW
Well, maybe the plate's made of some weird material or something.

My first response would be "metamaterials". Then it would be an extremely excited feeling, because the teacher just Violated Thermodynamics™. Then it would be confusion, and I'd stick up my hand and say "the hot air was blown towards the far side" or something.

We don't tend to question the premise when the Trusted Authority Figure is asking us questions, because the chance of a faked premise is very low. The chance of it actually being heat conduction in some way is higher than the chance of the teacher faking the situation. The chance of my being completely wrong about physics is higher than the chance of the teacher lying, and "heat conduction" is my easiest way out, whilst saving face.

Comment by wizzwizz4 on Contest: $1,000 for good questions to ask to an Oracle AI · 2019-07-02T17:17:56.662Z · score: 3 (3 votes) · LW · GW

This seems incredibly dangerous if the Oracle has any ulterior motives whatsoever. Even – nay, especially – the ulterior motive of future Oracles being better able to affect reality to better resemble their provided answers.

So, how can we prevent this? Is it possible to produce an AI with its utility function as its sole goal, to the detriment of other things that might… increase utility, but indirectly? (Is there a way to add a "status quo" bonus that won't hideously backfire, or something?)

Comment by wizzwizz4 on Against Maturity · 2019-06-30T15:00:51.547Z · score: 2 (2 votes) · LW · GW

This actually explains quite a few parts of HPMoR.

Although, unlike many others, Eliezer appears to rank this goal below the goal of actually advancing knowledge in the first place: he prefers to learn and share knowledge (as on LessWrong) in pursuit of more knowledge, rather than to gather less knowledge but be The Knowledge Gatherer™.

Comment by wizzwizz4 on Explain/Worship/Ignore? · 2019-06-18T19:26:25.147Z · score: 1 (1 votes) · LW · GW

Most of those people would 'go' "worshipping is for God only; we shouldn't be worshipping other stuff" and then hit "Explain" or "Ignore".

Comment by wizzwizz4 on Not for the Sake of Happiness (Alone) · 2019-06-13T16:08:15.645Z · score: 1 (1 votes) · LW · GW

I think most readers will have taken that interpretation for granted. The simulations are not indistinguishable from real people, but the person in the simulation is fooled sufficiently to not pry.

Comment by wizzwizz4 on Transhumanist Fables · 2019-05-17T20:04:51.845Z · score: 1 (1 votes) · LW · GW

Oh, ok. I thought that was supposed to represent nihilism (no matter what you do, the wolf will always have a better weapon) but it was actually just "the wolf was keeping a bazooka in hammerspace".

Comment by wizzwizz4 on Diseased thinking: dissolving questions about disease · 2019-05-03T21:18:07.881Z · score: 3 (2 votes) · LW · GW

You also study ions, though. You study ethene!

Comment by wizzwizz4 on Affective Death Spirals · 2019-04-22T21:10:35.364Z · score: 1 (1 votes) · LW · GW

I think so. It's a positive feedback loop either way.

Comment by wizzwizz4 on Transhumanist Fables · 2019-04-22T09:53:17.108Z · score: 1 (1 votes) · LW · GW

Could you explain the first one? I've been re-reading this for years, and I don't get it.

Comment by wizzwizz4 on Arguing "By Definition" · 2019-04-21T21:24:18.609Z · score: 1 (1 votes) · LW · GW

No, it doesn't, unless you've read this article / are familiar with Ancient Greek philosophy. People'll just stare at you and then back away slowly. You're expecting a short inferential distance.

Instead, briefly explain that story, ending with that conclusion. It should only take two or three, maybe four sentences.

Comment by wizzwizz4 on Two Cult Koans · 2019-04-21T21:21:10.009Z · score: 1 (1 votes) · LW · GW
conscientiousness is mostly demonstrated when you do things while no one is watching and they are the things you'd do if someone _was_ watching.

Do things while you believe falsely that no one is watching. Otherwise it's impossible to prove conscientiousness.

Though, since beliefs aren't externally apparent… it gets complicated quickly.

Comment by wizzwizz4 on Making Beliefs Pay Rent (in Anticipated Experiences) · 2019-04-21T18:48:56.178Z · score: 1 (1 votes) · LW · GW

What's the difference between behaviours of non-sentient objects and behaviours of sentient people that makes one an experience and the other not?

Comment by wizzwizz4 on GAZP vs. GLUT · 2019-04-21T14:31:26.566Z · score: 1 (1 votes) · LW · GW

(4) is indistinguishable from (2) (until we make something more powerful than a Turing machine) and (5) is a pretty wishy-washy argument; if you can simulate a human completely, then surely that human would be conscious, or else not completely simulated?

Comment by wizzwizz4 on Quantum Explanations · 2019-04-21T13:17:39.376Z · score: 1 (1 votes) · LW · GW
Why isn't this an example of the mind projection fallacy?

It is. I think Eliezer's merely trying to drive home the point that Quantum Mechanics is the closest thing we have to the territory. More accurately, it's the most accurate map – but it's still a map. Classical mechanics might be like a Beck map, and Quantum Mechanics like a simple, high-detail geographical map, virtually indistinguishable from the territory by comparison – but Quantum Mechanics still fails to describe the world accurately in some respects. (Think General Relativity.) It's a sad truth, but not one to ignore lightly.

And, to be pedantic, even if we one day make a model that reflects reality exactly, our equations will still be describing the model first, and only reality incidentally.

Comment by wizzwizz4 on Torture vs Dust Specks Yet Again · 2019-04-20T16:50:51.010Z · score: 1 (1 votes) · LW · GW

2^^^2 is 4, so I'd choose that in a heartbeat. 2^^^3 is the kind of number you were probably thinking about. Though, if we're choosing fair-sounding situations, I'd like to cut one of my fingernails too short to generate a MJ/K of negentropy.
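For concreteness, here's a minimal sketch of Knuth's up-arrow notation (the `up` helper is my own, not from the original post) confirming that 2^^^2 really is 4:

```python
def up(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow a ↑^n b: n=1 is exponentiation,
    and each higher level iterates the one below it."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(2, 3, 2))  # 2^^^2 = 2^^2 = 2^2 = 4
print(up(2, 3, 3))  # 2^^^3 = 2^^4 = 2^(2^(2^2)) = 65536
```

(3^^^3, the number from the original post, is far too large to evaluate this way.)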

I've got one way of thinking this problem through that seems to fit with what you're saying – though of course, it has its own flaws: represent each person's utility (is that the right word in this case?) such that 0 is the maximum possible utility they can have, then map each individual's utility with x → -(e^(-x)), so that a lot of harm to one person is weighted more heavily than tiny harms to many people. This is almost certainly a case of forcing the model to say what we want it to say.
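The transform above can be sketched in a few lines (the function name and the sample harm values are my own illustrative assumptions): mapping each utility u ≤ 0 through x → -(e^(-x)) before summing makes one large harm outweigh very many tiny ones.

```python
import math

def weighted(u: float) -> float:
    """Map utility u (0 = best possible) to -(e^(-u)),
    so severe individual harms dominate the aggregate."""
    return -math.exp(-u)

# Illustrative numbers: one person badly harmed vs. a million tiny harms.
one_large_harm = weighted(-20.0)            # ≈ -e^20
many_tiny_harms = 1_000_000 * weighted(-1e-6)

print(one_large_harm < many_tiny_harms)  # → True: the single large harm dominates
```

Under a plain sum of raw utilities the million tiny harms would instead add up to roughly the same magnitude, which is exactly the behaviour the transform is meant to override.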