I don't think I've ever used a text that didn't. "We have" is "we have as a theorem/premise". In most cases this is an unimportant distinction to make, so you could be forgiven for not noticing, if no one ever mentioned why they were using a weird syntactic construction like that rather than plain English.
And yes, rereading the argument that does seem to be where it falls down. Though tbh, you should probably have checked your own assumptions before assuming that the question was wrong as stated.
That would make it terrible as a medium of exchange or a store of value, though, wouldn't it? No one knows how much it's worth, and you have to acquire some, pass it off, and then (on their side) turn it into currency every time you use it.
Will only matters for green lanterns.
Inside View much?
If you can't succeed without first getting mass adoption, then you can't succeed. See the 'success' of Medium, and how it required losing everything they set out to do.
If Arbital has failed, Arbital has failed. Building neoTumblr and hoping to turn it into Arbital later won't make it fail any less, it will just produce neoTumblr.
Arbital has vague positive affect from being an attempt to solve a big problem in a potentially really impactful way.
Yet Another Blogging Platform, without the special features envisioned originally, is not solving a big problem (or actually any problem), and has a maximum plausible impact of "makes you a bunch of money and you donate that somewhere". Re-using the name is a self-serving attempt to redirect the positive affect from the ambitious, failed, altruistic project to the mundane, new, purely-capitalistic project.
Why aren't you just admitting defeat and going on to build something different?
It seems disingenuous to call this new project Arbital.
I agree with Christian. Did Arbital ever even come out of closed beta? My impression was that it did not, and you still needed to be whitelisted to have the chance to contribute.
Absolutely, would move immediately. Inconveniently I am currently at the "impoverished App Academy student" level.
If this set of criteria classifies Leverage as a cult, it is probably correct to do so; Leverage is seen as cultish already, and I don't think anyone outside it would be too surprised. There are startups that would be classified the same way; for many of them that is accurate.
What LW lingo did he use? I didn't see it.
Also, I know at least one person who wasn't born when the Jonestown cult panic ended and got into (and thankfully out of) a cult very much like the one described.
From Funereal-disease on tumblr, in a previous discussion: It is usually better to talk about "spiritual abuse" rather than "being a cult". It emphasizes that the techniques of successful cults are techniques of successful abusers, and is better at being something that happens to a greater or lesser degree; cult is more binary.
I might prefer "social abuse" or "community abuse" to make clear that non-religious forms are possible.
Eh, more the first than the second. Obedience to authority is something you can demonstrate by showing up and obeying; conscientiousness is mostly demonstrated when you do things while no one is watching and they are the things you'd do if someone was watching.
I expect to find that random methods, which approach Bayes's Theorem in the limit of infinite computing resources but differ from it in finite cases, are superior given finite computing resources. Enough special cases with speedups and nicer properties have been found that the general-case claim seems true in the same way that P != NP seems true (though with lower confidence).
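As a concrete instance of the kind of method I mean: a rejection sampler converges to the exact Bayesian posterior as the number of samples goes to infinity, but at any finite sample count it is a different, randomized procedure. A minimal sketch (the numbers and function names are mine, chosen for illustration):

```python
import random

# Exact Bayes: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
prior, lik_h, lik_not_h = 0.3, 0.8, 0.2
exact = (lik_h * prior) / (lik_h * prior + lik_not_h * (1 - prior))

def mc_posterior(n_samples, seed=0):
    """Approximate the posterior by rejection sampling: draw the
    hypothesis from the prior, keep only draws under which the
    simulated evidence actually occurs."""
    rng = random.Random(seed)
    kept = hits = 0
    while kept < n_samples:
        h = rng.random() < prior          # sample hypothesis from the prior
        lik = lik_h if h else lik_not_h
        if rng.random() < lik:            # simulate observing the evidence
            kept += 1
            hits += h
    return hits / kept

print(exact)                  # 0.631...
print(mc_posterior(100))      # noisy finite-sample estimate
print(mc_posterior(100_000))  # converges toward the exact value
```

At small sample counts the estimate differs from exact Bayes; in the limit they agree, which is exactly the relationship described above.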
If you want the group to acquire a collective reputation among other people, a uniform is useful. If Boy Scouts never chose a uniform, it would have been very hard for them to get their reputation for above-average conscientiousness and obedience to authority.
If you want to get a reputation as being good at solving problems (which Ougi's group may), it is useful to have a shared appearance.
Why are almost all fire trucks red? They would work just as well if they were blue with yellow polka dots. But they are uniform because they are recognizable. The same goes for the blue-white-red lights on a police car, and the sirens.
A nurse's uniform tells you that this is probably a nurse, even in contexts where the scrubs are not useful. A monk's or priest's robes tell you that this is a religious person who might give you religious advice. The act of picking a uniform for a group lets you begin to associate some properties of that group with the people in it, at a glance.
For an explicit derivation of why this is fair:
Say that I believe the event will happen with probability p, and my betting partner reports that it will fail with probability q. Suppose I shade my estimate by c before reporting it, so I report p + c. My expected value is then:

p*(q^2 - (1-(p+c))^2) - (1-p)*((p+c)^2 - (1-q)^2)

Naturally I want to find the maximum over c, for a fixed value of p and with q out of my control. Taking the derivative with respect to c (Wolfram Alpha confirms) gives -2c, so the expected value is increasing for c < 0 and decreasing for c > 0: the unique maximum is at c = 0. In other words, reporting my true probability p is the optimal strategy.
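A quick numerical check of the derivation (function names and test values are mine):

```python
def expected_value(p, q, c):
    """EV of the bet given my true belief p, my partner's reported
    failure probability q, and a shading c added to my report."""
    r = p + c  # the probability I actually report
    return p * (q**2 - (1 - r)**2) - (1 - p) * (r**2 - (1 - q)**2)

def derivative(p, q, c, eps=1e-6):
    """Central finite-difference estimate of dEV/dc."""
    return (expected_value(p, q, c + eps)
            - expected_value(p, q, c - eps)) / (2 * eps)

p, q = 0.7, 0.4
for c in (-0.2, -0.1, 0.0, 0.1, 0.2):
    print(round(derivative(p, q, c), 4))  # matches -2c in every case
```

Since the derivative is -2c regardless of p and q, shading in either direction only costs expected value.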
How are qualia different from experiences? If experiences are no different, why use 'qualia' rather than 'experiences'?
I, also, still do not know what you're talking about. I expect to have experiences in the future. I do not really expect them to contain qualia, but I'm not sure what that would mean in your terms. Please describe the difference I should expect in terms of things I can verify or falsify internally.
In this case, "description of how my experience will be different in the future if I have or do not have qualia" covers it. There are probably cases where that's too simplistic.
Yes: https://docs.google.com/spreadsheets/d/194Y6QoSda6Q6kx-9y9mrmfi3MTQR_KNgejlitB9jiJI/edit?usp=sharing
No one defines qualia clearly. If they did, I'd have a conclusion one way or the other.
I don't see any difference between me and other people who claim to have consciousness, but I have never understood what they mean by consciousness or qualia to an extent that lets me conclude that I have them. So I am sometimes fond of asserting that I have neither, mostly to get an interesting response.
Nice to see someone taking the lead! I've been looking for something to work on, and I'd be proud to help rebuild LW. I'll send you a message.
Huh. I think I've been doing this at my current (crappy, unlikely to lead anywhere, part-time remote contract programming) job. Timely!
I have heard this discussed for at least the last year, well before Stuart started his series, and would be very surprised if it was not true. I'd put down $30 to your $10 on the matter, pending an agreed-upon resolution mechanism for the bet.
Well, no posts are deleted. If you look at Main and sort chronologically, you can go through and count articles per time and what fraction of them are math-heavy (which should be easy to check from a once-over skim).
I think this is pretty much accepted wisdom in the rationalsphere. Several people, online and in person, have said things to the effect of "Tumblr is for socializing, private blogs are for commenting on whatever the blogger writes about, and LessWrong is for math-heavy things, quotes threads, and meetup scheduling." But if you doubt it, you can absolutely check.
Yes, I agree completely. Honestly, I thought this line of reasoning was common knowledge in the rationalsphere, since I think I've seen it discussed a couple times on Tumblr and in person (IIRC, both in Portland, and in the Bay Area).
Back when LW was more active, there was much lower math density in posts here.
Point, but not a hard one to get around.
There is a theoretical lower bound on energy per computation, but it's extremely small, and the timescale they'll be run on isn't specified. Also, unless Scott Aaronson's speculative consciousness-requires-quantum-entanglement-decoherence theory of identity is true, there are ways to use reversible computing to get around the lower bounds and achieve theoretically limitless computation, as long as you don't need it to output results. Requiring that capability to exist adds improbability, but not much on the scale we're talking about.
It's easy if they have access to running detailed simulations, and while the probability that someone secretly has that ability is very low, it's not nearly as low as the probabilities Kaj mentioned here.
Double-blind trials aren't the gold standard, they're the best available standard. They still fail to replicate far too often, because they don't remove bias (and I'm not just referring to publication bias). Which is why, when considering how to interpret a study, you look at the history of what scientific positions the experimenter has supported in the past, and then update away from that to compensate for bias which you have good reason to think will show up in their data.
In the example, past results suggest that, even if the trial was double-blind, someone who is committed to achieving a good result for the treatment will get more favorable data than some other experimenter with no involvement.
And that's on top of the trivial fact that someone with an interest in getting a successful trial is more likely to use a directionally-slanted stopping rule if they have doubts about the efficacy than if they are confident it will work, which is not explicitly relevant in Eliezer's example.
You can claim that it should have the same likelihood either way, but you have to put the discrepancy somewhere. Knowing the choice of stopping rule is evidence about the experimenter's state of knowledge about the efficacy. You can say that it should be treated as a separate piece of evidence, or that knowing about the stopping rule should change your prior, but if you don't bring it in somewhere, you're ignoring critical information.
Read the Tiffany Aching ones. They're not just for children, but especially read them if you have or ever expect to have children. These are the stories on which baby rationalists ought to be raised.
It's something Eliezer talks about in some posts; I associate it mainly with The Twelve Virtues and this:
Some people, I suspect, may object that curiosity is an emotion and is therefore "not rational". I label an emotion as "not rational" if it rests on mistaken beliefs, or rather, on irrational epistemic conduct: "If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm."
in-GovCo
un-GovCo, I believe?
Historically, this didn't work out well. You know, back when the snake oil salesmen were literal and selling real snake oil, cocaine, and various low-dose toxic extracts. (I believe similar things happen in China today, but it's more slanted toward traditional medicine and thus less likely to be toxic.)
Most likely cannot onboard volunteers quickly enough to be useful at this point; Thursday was the last day for volunteer signups, I believe.
Read that at the time and again now. Doesn't help. Setting threshold less than perfect still not possible; perfection would itself be insufficient. I recognize that this is a problem but it is an intractable one and looks to remain so for the foreseeable future.
Also not a QM expert, but this matches my understanding as well.
Enjoy your war on straw, I'm out.
A boxed AI won't be able to magically make its creators forget about AI risks and unbox it.
The results of AI box game trials disagree.
It's trivial to propose an AI model which only cares about finite time horizons. Predict what actions will have the highest expected utility at time T, take that action.
And what does it do at time T+1? And if you said 'nothing', try again, because you have no way of justifying that claim. It may not have intentionally-designed long-term preferences, but just because your eyes are closed does not mean the room is empty.
By that reasoning, there's no such thing as a Friendly human.
True. There isn't.
I suggest that most people when talking about friendly AIs do not mean to imply a standard of friendliness so strict that humans could not meet it.
Well, I definitely do, and I'm at least 90% confident Eliezer does as well. Most, probably nearly all, of people who talk about Friendliness would regard a FOOMed human as Unfriendly.
A prerequisite for planning a Friendly AI is understanding individual and collective human values well enough to predict whether they would be satisfied with the outcome, which entails (in the logical sense) having a very well-developed model of the specific humans you interact with, or at least the capability to construct one if you so choose. Having a sufficiently well-developed model to predict what you will do given the data you are given is logically equivalent to a weak form of "control people just by talking to them".
To put that in perspective, if I understood the people around me well enough to predict what they would do given what I said to them, I would never say things that caused them to take actions I wouldn't like; if I, for some reason, valued them becoming terrorists, it would be a slow and gradual process to warp their perceptions in the necessary ways to drive them to terrorism, but it could be done through pure conversation over the course of years, and faster if they were relying on me to provide them large amounts of data they were using to make decisions.
And even the potential to construct this weak form of control that is initially heavily constrained in what outcomes are reachable and can only be expanded slowly is incredibly dangerous to give to an Unfriendly AI. If it is Unfriendly, it will want different things than its creators and will necessarily get value out of modeling them. And regardless of its values, if more computing power is useful in achieving its goals (an 'if' that is true for all goals), escaping the box is instrumentally useful.
And the idea of a mind with "no long term goals" is absurd on its face. Just because you don't know the long-term goals doesn't mean they don't exist.
As was first proposed on /r/rational (and EY has confirmed that he got the idea from that proposal)
No voting system can deal with people who have arbitrary preferences. I've lost track of when I first looked into this, but I'm pretty sure that if you map preference space, impose a metric, and have each candidate and voter choose a location in that space, with each voter's vote split among candidates in inverse proportion to their distance under that metric, it gets around Arrow by imposing the requirement "voters may only express a preference that their representatives share their preferences", which is reasonable but still violates the theorem's preconditions.
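For concreteness, here's a sketch of the kind of scheme I'm describing, under the assumption (the detail I'm least sure I'm remembering correctly) that each voter's vote is split among candidates in inverse proportion to distance; all names and numbers are mine:

```python
import math

def allocate_votes(voter, candidates):
    """Split one voter's vote among candidates in inverse proportion
    to Euclidean distance in preference space (Euclidean is just one
    choice; the scheme works for any metric)."""
    weights = []
    for c in candidates:
        d = math.dist(voter, c)
        weights.append(float('inf') if d == 0 else 1.0 / d)
    if any(math.isinf(w) for w in weights):
        # voter sits exactly on a candidate: the whole vote goes there
        return [1.0 if math.isinf(w) else 0.0 for w in weights]
    total = sum(weights)
    return [w / total for w in weights]

candidates = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(allocate_votes((0.1, 0.1), candidates))  # mostly to the nearest candidate
```

The only preference a voter can express here is a point in the space, which is the restriction that takes the scheme outside Arrow's preconditions.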
The ripple effect is real, but as in Pascal's Wager, for every possible situation where the timing is critical and something bad will happen if you are distracted for a moment, there's a counterbalancing situation where the timing is critical and something bad will happen unless you are distracted for a moment, so those probably balance out into noise.
Yes, that's my issue with the paper; it doesn't distinguish that from actual catastrophes.
When someone is ignorant of the actual chance of a catastrophic event happening, even if they consider it possible, their expected value for the future will be fairly high. When they update significantly toward the chance of that event happening, their expected value will drop very sharply. That change itself meets the paper's definition of 'existential catastrophe'.
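A toy illustration of the problem, with made-up numbers:

```python
# Made-up values of the future with and without the catastrophe.
value_ok, value_catastrophe = 100.0, 0.0

def expected_value(p_catastrophe):
    return (1 - p_catastrophe) * value_ok + p_catastrophe * value_catastrophe

before = expected_value(1e-6)  # ignorant: event seen as barely possible
after = expected_value(0.5)    # updated: event now seen as 50/50
print(before, after)           # EV is roughly halved by the update alone,
                               # with no actual catastrophe having occurred
```

Under a drop-in-expected-value definition, merely learning the true odds registers as an existential catastrophe, which is the failure to distinguish that I'm pointing at.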