Posts

Meetup : Portland, OR: Improv for Rationalists 2014-10-01T05:45:53.789Z
Meetup : Portland Teachable Skills Discussion 2014-09-10T02:26:47.861Z
Rationality Quotes July 2014 2014-07-06T06:51:00.708Z
[LINK] Latinus rationalior ist. 2014-03-06T18:32:38.731Z
[LINK] Reinventing Explanation: Data Presentation as Intuition Pump 2014-02-05T01:00:44.457Z

Comments

Comment by VAuroch on The Cartoon Guide to Löb's Theorem · 2017-06-15T06:07:45.667Z · LW · GW

I don't think I've ever used a text that didn't. "We have" is "we have as a theorem/premise". In most cases this is an unimportant distinction to make, so you could be forgiven for not noticing, if no one ever mentioned why they were using a weird syntactic construction like that rather than plain English.

And yes, rereading the argument that does seem to be where it falls down. Though tbh, you should probably have checked your own assumptions before assuming that the question was wrong as stated.

Comment by VAuroch on Torture vs. Dust Specks · 2017-06-15T05:54:37.258Z · LW · GW

Why?

Comment by VAuroch on Bitcoins are not digital greenbacks · 2017-06-15T05:53:42.750Z · LW · GW

That would make it terrible as a medium of exchange or a store of value, though, wouldn't it? No one knows how much it's worth, and you have to acquire some, pass it off, and then (on their side) convert it into currency every time you use it.

Comment by VAuroch on OpenAI makes humanity less safe · 2017-04-07T05:49:25.878Z · LW · GW

Will only matters for green lanterns.

Comment by VAuroch on What's up with Arbital? · 2017-04-05T04:48:03.573Z · LW · GW

Inside View much?

Comment by VAuroch on What's up with Arbital? · 2017-03-30T03:04:17.764Z · LW · GW

If you can't succeed without first getting mass adoption, then you can't succeed. See the 'success' of Medium, and how it required losing everything they set out to do.

If Arbital has failed, Arbital has failed. Building neoTumblr and hoping to turn it into Arbital later won't make it fail any less, it will just produce neoTumblr.

Comment by VAuroch on What's up with Arbital? · 2017-03-30T01:18:45.487Z · LW · GW

Arbital has vague positive affect from being an attempt to solve a big problem in a potentially really impactful way.

Yet Another Blogging Platform, without the special features envisioned originally, is not solving a big problem (or actually any problem), and has a maximum plausible impact of "makes you a bunch of money and you donate that somewhere". Re-using the name is a self-serving attempt to redirect the positive affect from the ambitious, failed, altruistic project to the mundane, new, purely-capitalistic project.

Why aren't you just admitting defeat and going on to build something different?

Comment by VAuroch on What's up with Arbital? · 2017-03-29T20:13:24.542Z · LW · GW

It seems disingenuous to call this new project Arbital.

Comment by VAuroch on What's up with Arbital? · 2017-03-29T20:11:54.586Z · LW · GW

I agree with Christian. Did Arbital ever even come out of closed beta? My impression was that it did not, and you still needed to be whitelisted to have the chance to contribute.

Comment by VAuroch on I Want To Live In A Baugruppe · 2017-03-17T04:37:59.349Z · LW · GW

Absolutely, would move immediately. Inconveniently I am currently at the "impoverished App Academy student" level.

Comment by VAuroch on How to talk rationally about cults · 2017-01-11T08:42:02.454Z · LW · GW

If this set of criteria classifies Leverage as a cult, it is probably correct to do so; they're already seen as cultish, and I don't think anyone outside Leverage would be too surprised. There are startups that would be classified the same way; for many of them, that is accurate.

Comment by VAuroch on How to talk rationally about cults · 2017-01-09T04:50:14.556Z · LW · GW

What LW lingo did he use? I didn't see it.

Also, I know at least one person who wasn't born when the Jonestown cult panic ended and got into (and thankfully out of) a cult very much like the one described.

Comment by VAuroch on How to talk rationally about cults · 2017-01-09T04:49:10.439Z · LW · GW

From Funereal-disease on tumblr, in a previous discussion: it is usually better to talk about "spiritual abuse" than about "being a cult". That framing emphasizes that the techniques of successful cults are the techniques of successful abusers, and it better captures something that happens to a greater or lesser degree; "cult" is more binary.

I might prefer "social abuse" or "community abuse" to make clear that non-religious forms are possible.

Comment by VAuroch on Two Cult Koans · 2017-01-07T19:11:42.065Z · LW · GW

Eh, more the first than the second. Obedience to authority is something you can demonstrate by showing up and obeying; conscientiousness is mostly demonstrated by doing, when no one is watching, the things you would do if someone were watching.

Comment by VAuroch on Zen and the Art of Rationality · 2017-01-01T23:03:34.959Z · LW · GW

I expect to find that randomized methods, which approach Bayes's Theorem in the limit of infinite computing resources but differ from it in finite cases, are superior when computing resources are finite. Enough special cases of this have been found to have speedups and nicer properties that the general claim seems true in the same way that P != NP seems true (though with lower confidence).
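A toy sketch (Python; the two-hypothesis coin example and all numbers are mine, not anything from the post) of the first half of that claim: a randomized procedure that agrees with exact Bayesian updating only in the limit of infinitely many samples. Whether such methods actually win under a fixed budget is the speculative part.

```python
import random

def exact_posterior(heads, flips, prior_biased=0.5, bias=0.8):
    # Exact Bayes with two hypotheses: fair coin vs. coin with P(heads) = bias.
    like_fair = 0.5 ** flips
    like_biased = bias ** heads * (1 - bias) ** (flips - heads)
    return prior_biased * like_biased / (prior_biased * like_biased + (1 - prior_biased) * like_fair)

def sampled_posterior(heads, flips, n_samples, prior_biased=0.5, bias=0.8):
    # Randomized approximation: simulate worlds, keep only those matching the data.
    kept = kept_biased = 0
    for _ in range(n_samples):
        is_biased = random.random() < prior_biased
        p = bias if is_biased else 0.5
        if sum(random.random() < p for _ in range(flips)) == heads:
            kept += 1
            kept_biased += is_biased
    return kept_biased / kept if kept else float("nan")

print(exact_posterior(8, 10))                # the exact Bayesian answer, ~0.873
for n in (100, 10_000, 200_000):
    print(n, sampled_posterior(8, 10, n))    # approaches the exact answer as n grows
```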

Comment by VAuroch on Two Cult Koans · 2017-01-01T22:32:26.908Z · LW · GW

If you want the group to acquire a collective reputation among other people, a uniform is useful. If Boy Scouts never chose a uniform, it would have been very hard for them to get their reputation for above-average conscientiousness and obedience to authority.

If you want to get a reputation as being good at solving problems (which Ougi's group may), it is useful to have a shared appearance.

Comment by VAuroch on Two Cult Koans · 2017-01-01T22:25:25.713Z · LW · GW

Why are almost all fire trucks red? They would work just as well painted in blue and yellow polka dots, but they are uniform because that makes them recognizable. The same goes for the blue-white-red lights on a police car, and for the sirens.

A nurse's uniform tells you that this is probably a nurse, even in contexts where the scrubs are not useful. A monk's or priest's robes tell you that this is a religious person who might give you religious advice. The act of picking a uniform for a group lets you begin to associate some properties of that group with the people in it, at a glance.

Comment by VAuroch on Even Odds · 2016-07-30T02:25:10.074Z · LW · GW

For an explicit derivation of why this is fair:

Say that I believe the event will happen with probability p, and my betting partner tells me it will fail with probability q. Suppose I shade my estimate by c and report p + c instead. Then my expected payoff is:

p*(q^2 - (1-(p+c))^2) - (1-p)*((p+c)^2 - (1-q)^2)

I want the value of c that maximizes this for a fixed p, with q out of my control, so take the derivative with respect to c; Wolfram Alpha (or a little algebra) gives -2c. The payoff is therefore increasing for c < 0 and decreasing for c > 0, so the unique maximum is at c = 0: reporting the true value of p.
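As a quick numerical sanity check of that derivative, here is a minimal Python sketch; p and q are arbitrary example beliefs, not values from the post.

```python
import numpy as np

def expected_value(p, q, c):
    # Expected payoff when my true probability is p, my partner reports
    # failure probability q, and I report p + c instead of p.
    r = p + c
    return p * (q**2 - (1 - r)**2) - (1 - p) * (r**2 - (1 - q)**2)

p, q = 0.7, 0.4                        # arbitrary example beliefs
cs = np.linspace(-p, 1 - p, 1001)      # every possible report between 0 and 1
values = [expected_value(p, q, c) for c in cs]
print(cs[np.argmax(values)])           # prints a value of c that is ~0, i.e. honest reporting
```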

Comment by VAuroch on Zombies Redacted · 2016-07-30T00:57:31.327Z · LW · GW

How are qualia different from experiences? If they are no different, why use 'qualia' rather than 'experiences'?

Comment by VAuroch on Zombies Redacted · 2016-07-30T00:53:40.548Z · LW · GW

I, also, still do not know what you're talking about. I expect to have experiences in the future. I do not really expect them to contain qualia, but I'm not sure what that would mean in your terms. Please describe the difference I should expect in terms of things I can verify or falsify internally.

Comment by VAuroch on Zombies Redacted · 2016-07-14T23:02:30.839Z · LW · GW

In this case, "description of how my experience will be different in the future if I have or do not have qualia" covers it. There are probably cases where that's too simplistic.

Comment by VAuroch on Soylent Orange - Whole food open source soylent · 2016-07-09T23:38:42.870Z · LW · GW

Yes: https://docs.google.com/spreadsheets/d/194Y6QoSda6Q6kx-9y9mrmfi3MTQR_KNgejlitB9jiJI/edit?usp=sharing

Comment by VAuroch on Zombies Redacted · 2016-07-09T23:18:57.993Z · LW · GW

No one defines qualia clearly. If they did, I'd have a conclusion one way or the other.

Comment by VAuroch on Zombies Redacted · 2016-07-04T22:10:13.971Z · LW · GW

I don't see any difference between me and other people who claim to have consciousness, but I have never understood what they mean by consciousness or qualia to an extent that lets me conclude that I have them. So I am sometimes fond of asserting that I have neither, mostly to get an interesting response.

Comment by VAuroch on Upcoming LW Changes · 2016-02-03T18:19:52.811Z · LW · GW

Nice to see someone taking the lead! I've been looking for something to work on, and I'd be proud to help rebuild LW. I'll send you a message.

Comment by VAuroch on The correct response to uncertainty is *not* half-speed · 2016-01-16T10:15:09.664Z · LW · GW

Huh. I think I've been doing this at my current (crappy, unlikely to lead anywhere, part-time remote contract programming) job. Timely!

Comment by VAuroch on LessWrong 2.0 · 2015-12-27T23:48:43.438Z · LW · GW

I have heard this discussed for at least the last year, well before Stuart started his series, and would be very surprised if it was not true. I'd put down $30 to your $10 on the matter, pending an agreed-upon resolution mechanism for the bet.

Comment by VAuroch on LessWrong 2.0 · 2015-12-11T02:05:42.134Z · LW · GW

Well, no posts are deleted. If you look at Main and sort chronologically, you can go through and count articles per time and what fraction of them are math-heavy (which should be easy to check from a once-over skim).

I think this is pretty much accepted wisdom in the rationalsphere. Several people, online and in person, have said things to the effect of "Tumblr is for socializing, private blogs are for commenting on whatever the blogger writes about, and LessWrong is for math-heavy things, quotes threads, and meetup scheduling." But if you doubt it, you can absolutely check.

Comment by VAuroch on LessWrong 2.0 · 2015-12-11T01:59:16.722Z · LW · GW

Yes, I agree completely. Honestly, I thought this line of reasoning was common knowledge in the rationalsphere, since I think I've seen it discussed a couple times on Tumblr and in person (IIRC, both in Portland, and in the Bay Area).

Comment by VAuroch on LessWrong 2.0 · 2015-12-06T21:52:16.041Z · LW · GW

Back when LW was more active, there was much lower math density in posts here.

Comment by VAuroch on Probabilities Small Enough To Ignore: An attack on Pascal's Mugging · 2015-09-17T01:36:22.618Z · LW · GW

Point, but not a hard one to get around.

There is a theoretical lower bound on energy per computation, but it's extremely small, and the timescale the computations would be run on isn't specified. Also, unless Scott Aaronson's speculative consciousness-requires-quantum-entanglement-decoherence theory of identity is true, there are ways to use reversible computing to get around the lower bound and achieve theoretically limitless computation, as long as you don't need it to output results. Positing that capability adds improbability, but not much on the scale we're talking about.

Comment by VAuroch on Probabilities Small Enough To Ignore: An attack on Pascal's Mugging · 2015-09-16T19:07:41.425Z · LW · GW

It's easy if they have access to running detailed simulations, and while the probability that someone secretly has that ability is very low, it's not nearly as low as the probabilities Kaj mentioned here.

Comment by VAuroch on Beautiful Probability · 2015-09-02T00:27:40.498Z · LW · GW

Double-blind trials aren't the gold standard; they're the best available standard. They still fail to replicate far too often, because they don't remove bias (and I'm not just referring to publication bias). Which is why, when considering how to interpret a study, you look at which scientific positions the experimenter has supported in the past, and then update away from their favored conclusion to compensate for the bias you have good reason to think will show up in their data.

In the example, past results suggest that, even if the trial was double-blind, someone who is committed to achieving a good result for the treatment will get more favorable data than some other experimenter with no involvement.

And that's on top of the trivial fact that someone with an interest in getting a successful trial is more likely to use a directionally-slanted stopping rule if they have doubts about the efficacy than if they are confident it will work, which is not explicitly relevant in Eliezer's example.

Comment by VAuroch on Beautiful Probability · 2015-08-30T20:21:21.740Z · LW · GW

You can claim that it should have the same likelihood either way, but you have to put the discrepancy somewhere. Knowing the choice of stopping rule is evidence about the experimenter's state of knowledge about the efficacy. You can say that it should be treated as a separate piece of evidence, or that knowing about the stopping rule should change your prior, but if you don't bring it in somewhere, you're ignoring critical information.
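For reference, the standard likelihood calculation the disagreement is about (a Python sketch; the two hypotheses and the observed counts are illustrative numbers only): the two stopping rules give identical likelihood ratios for the same data, which is exactly why the information carried by the choice of rule has to enter somewhere else.

```python
from math import comb

def binom_likelihood(theta, successes, trials):
    # Experimenter fixed the number of trials in advance.
    return comb(trials, successes) * theta**successes * (1 - theta)**(trials - successes)

def negbinom_likelihood(theta, successes, trials):
    # Experimenter kept going until reaching `successes`, which took `trials` tries.
    return comb(trials - 1, successes - 1) * theta**successes * (1 - theta)**(trials - successes)

theta_a, theta_b = 0.5, 0.25   # two made-up hypotheses about the treatment
s, n = 3, 12                   # observed: 3 successes in 12 trials

print(binom_likelihood(theta_a, s, n) / binom_likelihood(theta_b, s, n))
print(negbinom_likelihood(theta_a, s, n) / negbinom_likelihood(theta_b, s, n))
# Both ratios are identical: the combinatorial factors cancel.
```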

Comment by VAuroch on Confidence levels inside and outside an argument · 2015-08-05T21:37:42.306Z · LW · GW

Read the Tiffany Aching ones. They're not just for children, but especially read them if you have or ever expect to have children. These are the stories on which baby rationalists ought to be raised.

Comment by VAuroch on Rational vs Reasonable · 2015-07-20T08:45:01.970Z · LW · GW

It's something Eliezer talks about in some posts; I associate it mainly with The Twelve Virtues and this:

Some people, I suspect, may object that curiosity is an emotion and is therefore "not rational". I label an emotion as "not rational" if it rests on mistaken beliefs, or rather, on irrational epistemic conduct: "If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm."

Comment by VAuroch on The Unfriendly Superintelligence next door · 2015-07-20T08:39:54.708Z · LW · GW

in-GovCo

un-GovCo, I believe?

Comment by VAuroch on The Unfriendly Superintelligence next door · 2015-07-20T08:36:38.633Z · LW · GW

Historically, this didn't work out well. You know, back when the snake oil salesmen were literal and selling real snake oil, cocaine, and various low-dose toxic extracts. (I believe similar things happen in China today, but it's more slanted toward traditional medicine and thus less likely to be toxic.)

Comment by VAuroch on Why you should attend EA Global and (some) other conferences · 2015-07-20T08:28:02.359Z · LW · GW

Most likely cannot onboard volunteers quickly enough to be useful at this point; Thursday was the last day for volunteer signups, I believe.

Comment by VAuroch on On Caring · 2015-05-10T05:48:30.187Z · LW · GW

Read that at the time and again now. Doesn't help. Setting threshold less than perfect still not possible; perfection would itself be insufficient. I recognize that this is a problem but it is an intractable one and looks to remain so for the foreseeable future.

Comment by VAuroch on Bell's Theorem: No EPR "Reality" · 2015-04-08T14:49:20.304Z · LW · GW

Also not a QM expert, but this matches my understanding as well.

Comment by VAuroch on The genie knows, but doesn't care · 2015-03-21T02:47:00.339Z · LW · GW

Enjoy your war on straw, I'm out.

Comment by VAuroch on The genie knows, but doesn't care · 2015-03-14T16:13:58.578Z · LW · GW

A boxed AI won't be able to magically make it's creators forget about AI risks and unbox it.

The results of AI box game trials disagree.

It's trivial to propose an AI model which only cares about finite time horizons. Predict what actions will have the highest expected utility at time T, take that action.

And what does it do at time T+1? And if you said 'nothing', try again, because you have no way of justifying that claim. It may not have intentionally-designed long-term preferences, but just because your eyes are closed does not mean the room is empty.

Comment by VAuroch on The genie knows, but doesn't care · 2015-03-14T15:45:27.918Z · LW · GW

By that reasoning, there's no such thing as a Friendly human.

True. There isn't.

I suggest that most people when talking about friendly AIs do not mean to imply a standard of friendliness so strict that humans could not meet it.

Well, I definitely do, and I'm at least 90% confident Eliezer does as well. Most, probably nearly all, of the people who talk about Friendliness would regard a FOOMed human as Unfriendly.

Comment by VAuroch on The genie knows, but doesn't care · 2015-03-03T11:40:53.037Z · LW · GW

A prerequisite for planning a Friendly AI is understanding individual and collective human values well enough to predict whether they would be satisfied with the outcome, which entails (in the logical sense) having a very well-developed model of the specific humans you interact with, or at least the capability to construct one if you so choose. Having a sufficiently well-developed model to predict what you will do given the data you are given is logically equivalent to a weak form of "control people just by talking to them".

To put that in perspective: if I understood the people around me well enough to predict what they would do given what I said to them, I would never say things that caused them to take actions I wouldn't like. If I, for some reason, valued them becoming terrorists, warping their perceptions in the necessary ways would be a slow and gradual process, but it could be done through pure conversation over the course of years, and faster if they were relying on me to provide large amounts of the data they used to make decisions.

And even the potential to construct this weak form of control, which is initially heavily constrained in what outcomes are reachable and can only be expanded slowly, is incredibly dangerous to give to an Unfriendly AI. If it is Unfriendly, it will want different things than its creators and will necessarily get value out of modeling them. And regardless of its values, if more computing power is useful in achieving its goals (an 'if' that is true for all goals), escaping the box is instrumentally useful.

And the idea of a mind with "no long term goals" is absurd on its face. Just because you don't know the long-term goals doesn't mean they don't exist.

Comment by VAuroch on Harry Potter and the Methods of Rationality discussion thread, January 2015, chapter 103 · 2015-01-29T09:49:04.286Z · LW · GW

As was first proposed on /r/rational (and EY has confirmed that he got the idea from that proposal)

Comment by VAuroch on Imperfect Voting Systems · 2015-01-27T03:24:31.036Z · LW · GW

No voting system can deal with people who have arbitrary preferences. I've lost track of when I first looked into this, but I'm pretty sure that if you map out preference space, impose a metric, have each candidate and voter choose a location in that space, and allocate votes according to distance under that metric, it gets around Arrow by imposing the requirement that a voter may only express the preference "my representative should share my preferences", which is reasonable but still violates the theorem's preconditions.
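A minimal sketch of one way to read that scheme (my interpretation only; the inverse-distance weighting and the example coordinates are assumptions, not part of the original idea):

```python
import numpy as np

def split_votes(voters, candidates, eps=1e-9):
    # voters: (V, d) array, candidates: (C, d) array -> per-candidate vote totals.
    totals = np.zeros(len(candidates))
    for v in voters:
        dists = np.linalg.norm(candidates - v, axis=1)
        weights = 1.0 / (dists + eps)         # nearer candidate gets a larger share
        totals += weights / weights.sum()     # each voter casts exactly one vote in total
    return totals

voters = np.array([[0.0, 0.0], [1.0, 0.2], [0.9, 0.9], [0.2, 0.8]])
candidates = np.array([[0.1, 0.1], [0.8, 0.8]])
print(split_votes(voters, candidates))        # totals sum to the number of voters
```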

Comment by VAuroch on Torture vs. Dust Specks · 2015-01-23T13:42:04.238Z · LW · GW

The ripple effect is real, but as in Pascal's Wager, for every possible situation where the timing is critical and something bad will happen if you are distracted for a moment, there's a counterbalancing situation where the timing is critical and something bad will happen unless you are distracted for a moment, so those probably balance out into noise.

Comment by VAuroch on Existential Risk and Existential Hope: Definitions · 2015-01-18T03:53:47.438Z · LW · GW

Yes, that's my issue with the paper; it doesn't distinguish that from actual catastrophes.

Comment by VAuroch on Existential Risk and Existential Hope: Definitions · 2015-01-17T10:30:32.665Z · LW · GW

When someone is ignorant of the actual chance of a catastrophic event, even if they consider it possible, they will assign a fairly high expected value to the future. When they update significantly toward that event happening, their expected value drops sharply. That change itself meets the paper's definition of 'existential catastrophe'.
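For illustration (made-up numbers): if the reachable future is worth V and you assign survival probability 0.999, your expected value is 0.999 V; updating to a 50% chance of doom drops it to 0.5 V. Under a definition of existential catastrophe as a drastic drop in expected value, the update alone counts as one, even though nothing in the world has changed.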