Update on LW2.0: this week, Oliver and I are in Seattle at a CFAR workshop, so only Harmanas is working. This week we're doing final feature work and bugfixing, as well as user interviews. Next week we plan to start the closed beta. Rather than running for a predefined length of time, the plan is to run until we're happy with the user experience, and then do an open beta, which will probably have a defined length.
I just realized that Paul Graham's "Make something people want" can be flipped into "Make people want something", which is a better description of many businesses that make money despite creating zero or negative value to society. For example, you can sell something for $1 that gives people $2 in immediate value but also sneakily gives them $3 worth of desire to buy otherwise useless stuff from you. Or you can advertise to create desire where it didn't exist, leading to negative value for everyone who saw your ad but didn't buy your product. Or you can give away your product for free and then charge for the antidote.
This seems like a big loophole in markets which has no market-based solution. It also helps explain why rich people and countries aren't proportionally happier than poor ones, if they are mostly paying to make manufactured pains go away. People's criteria for happiness are too easily raised by what they buy, see or think, but hardly anyone pushes back against that.
My previous thoughts on this topic: 1, 2, 3, 4. I feel like the ideas are coming together into something bigger, but can't put a finger on it yet.
I feel like it's more general than that. For example, dreaming that an iPhone will make you happy is bad for you, but so is dreaming that becoming a great artist (or even joining an anti-consumerist revolution) will make you happy.
Yeah. Though as dreams go, romance isn't the worst. Many of its followers do end up happy, and many of the unhappy ones get over it. Compare with the dream of being a rockstar, where only a handful can ever succeed, and many of the failed ones never let go.
Yeah, I think a lot of entertainment leads to escapism which is partly a downside to me. When Roger Ebert said that video games can't be art, and someone told him that they provide much needed escapism, this was his reply:
I do not have a need "all the time" to take myself away from the oppressive facts of my life, however oppressive they may be, in order to go somewhere where I have control. I need to stay here and take control.
I keep hoping that voice activation features will be helpful. Up to now, they haven't been, at least for me. They just do not work consistently enough. Apparently they do for some people, and I expect that at some future time they will for me, but up to now they have consistently failed that expectation, even though I keep hoping it will work.
Yeah, but there is that very important part of dreams/hopes/expectations providing the much-needed motivation for doing the work. Without them you are just stuck in near mode, slowly crawling towards the nearest local maximum (e.g. turnips -- warning, the link is NSFW).
I am reading the book Bring Up Genius (mentioned at SSC recently), and I am confused. I am still in the first part of the book, but it seems like the author alternates between "every healthy child can become a genius if educated properly" and describing research and observations of high-IQ children, without ever acknowledging the difference between "every" and "high-IQ". I am trying to write a summary for LW, but I can't turn it into a coherent explanation.
If I try hard, I can construct a consistent hypothesis, like: the behavior of high-IQ children gives us hints about the direction we should try to move all children; and the spectacular failures of the educational system with regard to high-IQ children are evidence that the education may be failing average children in a similar way, only less visibly -- but here I suspect I am simply making up my own stuff instead of explaining the author's view. I suspect the author may have believed that IQ is largely determined by nurture; he doesn't directly assert or deny this, it's just a position I can imagine that would make the rest of the book sound coherent. (But it is obviously wrong.) A less charitable explanation is that the author simply didn't see high IQ as a privilege, because within his family it was the norm. But that would make his lessons less universal. Although still useful for e.g. the folks at Less Wrong.
Anyone else reading the book? Unfortunately, I can't find an English version; I am currently reading Eduku Geniulon in Esperanto, uhm, here.
We like to think that we’re hyper-rational, but when we have to choose a technology, we end up in a kind of frenzy — bouncing from one person’s ... comment to another’s blog post until, in a stupor, we float helplessly toward the brightest light and lay prone in front of it, oblivious to what we were looking for in the first place.
What specific bad things would you expect to happen if the post was left up, with what probabilities? (I'm aware of the standard practice of not discussing ongoing legal cases, but have my doubts about whether allowing the legal system to operate under conditions of secrecy actually makes things better on net.)
I am following standard practice. I have only weakly considered the relevant norm, and agree that it's not pure good.
If the "legal issue" is what I think it is, then having a post about it here at LW is just worthless gossiping. Just because it might involve the real-world "rationality community" in some tangential way doesn't mean it has a place on this site. Many people here don't even care about what MIRI or CFAR might be working on!
I am considering ending my life because of fears related to AI risk. I am posting here because I want other people to review my reasoning process and help ensure I make the right decision.
First, this is not an emergency situation. I do not currently intend to commit suicide, nor have I made any plan for doing so. No matter what I decide, I will wait several years to be sure of my preference. I am not at all an impulsive person, and I know that ASI is very unlikely to be invented in less than a few decades.
I am not sure if it would be appropriate to talk about this here, and I prefer private conversations anyway, so the purpose of this post is to find people willing to talk with me through PMs. To summarize my issue: I only desire to live because of the possibility of utopia, but I have recently realized that ASI-provided immortal life is significantly likely to be bad rather than good. If you are very familiar with the topics of AI risk, mind uploading, and utilitarianism, please consider sending me a message with a brief explanation of your beliefs and your intent to help me. I especially urge you to contact me if you already have similar fears of AI, even if you are a lurker and are not sure if you should. Because of the sensitive nature of this topic, I may not respond unless you provide an appropriately genuine introduction and/or have a legitimate posting history.
Please do not reply/PM if you just want to tell me to call a suicide prevention hotline, tell me the standard objections to suicide, or give me depression treatment advice. I might take a long time to respond to PMs, especially if several people end up contacting me. If nobody contacts me I will repost this in the next discussion thread or on another website.
WTF is this? Please take a step back, and look at what you did here.
Literally your first words on this website are about suicide. Then you say no suicide, and then you explain in detail how people are not supposed to talk about your possible suicide. Half of your total contribution on this website is about your suicide-not-suicide. Thanks; now everyone understands they are not supposed to think about the pink elephant in the room. So... why mention it in the first place? Three times in a row, using a bold font once, just to be sure. It seems like you actually want people to think about your possible suicide, but also to feel guilty if they mention it. Because the same comment, without this mind game, could have been written like this:
I have recently realized that ASI-provided immortal life is significantly likely to be bad rather than good. If you are very familiar with the topics of AI risk, mind uploading, and utilitarianism, I am interested in your opinions about this topic.
Much less drama, right?
Next, you provide zero information about yourself. You are a stranger here, and you use an anonymized e-mail. And I guess we will not learn more about you here, because you prefer private conversations anyway. However, you "urge" people to contact you, and to provide an "appropriately genuine introduction", a brief explanation of their beliefs, and their intent to help you. But they are not supposed to mention your suicide-not-suicide, right? But they are supposed to want to help you. But they are not allowed to suggest seeking expert help. And they are supposed to tell you things about themselves, without knowing anything about you. And all this is supposed to happen off-site, without any observers, inter alia because the word limit on LW messages is problematic. Right. How strange that no one else has realized yet how much this problematic word limit prevents us from debating AI-related topics here.
More red flags than in China on Mao's birthday.
I don't think you are at risk of suicide. Instead, I think that the people who contact you are at serious risk of being emotionally exploited (and reminded of your suicide-not-suicide, and their intent to help). Something like: "I told you that I am ready to die unless you convince me not to; and you promised you would help me; and you know that I will never seek expert help; and you don't know whether anyone else talks to me; so... if you stop interacting with me, you might be responsible for my death; is that really okay for you as a utilitarian?"
If anyone wants to play this game, go ahead. I have already seen my share of "suicidal" people giving others detailed instructions on how to interact with them, and unsurprisingly, decades later all of them are still alive; and the people who interacted with them regret the experience.
I corresponded with sad_dolphin. It added a little bit of gloom to my day, but I don't regret doing it: having suffered from similar psychological problems in the past, I want to be there with my hard-won expertise for people working through the same questions. I agree that most people who talk about suicide in such a manner are unlikely to go through with it, but that doesn't mean they're not being subjectively sincere. I'd rather such cries for help not be disincentivized here (as you seem to be trying to do); I'd rather people be able to seek and receive support from people who actually understand their ideas, rather than being callously foisted off onto alleged "experts" who don't.
I am not sure how to even respond to this. I do not know what drives you to hatefully twist my words, depicting my cry for help as some kind of contrived attempt at manipulation, but you are obviously not acting with anything close to an altruistic intent.
Yes, I am entirely serious about this. Far more than you know. Perhaps if you had contacted me to have an intelligent discussion, instead of directly proceeding to accuse me with many critical generalizations, you would have realized that.
I have had several people message me already, and we are currently having civil discussions about potential future scenarios. I am certain they would all attest that they are not being 'emotionally exploited', as you seem to think is my goal. I publicly mentioned suicide because genuine consideration of the possibility was the entire point of the post, and I (correctly, for the most part) assumed that this community was mature enough to handle it without any drama.
You clearly have zero experience dealing with suicidal individuals, and would do well to stay away from this discussion. I had a hard enough time working up the courage to make that post, and I really do not want any drama from this. I hope you will do the mature thing and just leave me alone.
The mature way to handle suicidal people is to call professional help, as soon as possible. If the suicidal thinking is caused by some kind of hormonal imbalance -- which the person will report as "I have logically concluded that it is better for me to die", because that is how it feels from inside -- you cannot fix the hormonal imbalance by a clever argument; that would be magical thinking. Most likely, you will talk to the person until their hormonal spike passes, then the person will say "uhm, what you said makes a lot of sense, I already feel better, thanks!", and the next day you will find them hanging from the noose in their room, because another hormonal spike hit them later in the evening, and they "logically concluded" that life actually is meaningless and there is no hope and no reason to delay the inevitable, so they wouldn't even call you or wait until the morning, because that also would be pointless.
(Been there, failed to do the right thing, lost a friend.)
Sure, this seems like an unfalsifiable hypothesis "you believe it is not caused by hormones because that belief is caused by hormones". But that's exactly the reason to seek professional help instead of debating it; to measure your actual level of hormones, and if necessary, to fix it. Body and mind are connected more than most people admit.
That's all from my side. If you are sincere, I wish you luck. Any meaningful help I could offer is exactly what you refuse, so I have nothing more to add.
The mature way to handle suicidal people is to call professional help, as soon as possible.
It's worth noting that this creates an incentive to never talk about your problems.
My advice for people who value not being kidnapped and forcibly drugged by unaccountable authority figures who won't listen to reason is to never voluntarily talk to psychiatrists, for the same reason you should never talk to cops.
It would be great to have a service where you get your blood sample taken and tested anonymously, and then anonymously receive pills to fix the problem. But I guess most suicidal people would (1) refuse to use this service anyway, either on some principle, or because they would "logically" conclude it is useless and cannot possibly help; and (2) even if a friend pushed them to do so, at some moment they would find a reason to stop taking the pills, and when the effect of the pills wears off, conclude "logically" that life is not worth living.
I've talked to some unstable folks who were really upset by ideas like AI blackmail, rescue sims, etc. Did my best to help them, which sometimes worked and sometimes didn't. If such ideas didn't exist, I suspect these folks would latch onto something more accessible with similar nightmare potential, like the Christian hell or the multiverse or simply the amount of suffering in our world. Mostly I agree with Viliam that fixing the mood with chemicals (or mindfulness, workouts, sunshine, etc) is a better idea than trying to reason through it.
Poetry, along with some other art forms, always struck me as inherently uninteresting to the point where I find it hard to believe anyone actually enjoys it. I see some people who are obviously moved by poetry, so clearly I'm just at one end of the spectrum. To each their own.
I only rarely find visual art interesting or moving. I can be loads more interested by a description of a picture, but seldom to the same extent as by a piece of poetry. One co-worker (boss, actually) of mine said she just did not get poetry, and I tried to see other differences in how we tick - I think she's more self-assured and appreciative of data drawn in tables, but that's all. Sometimes I really wonder if aesthetics are partly genetic...
I wouldn't say I "like poetry" as such, but there are certainly poems I like; two that come to mind are If and Absolutely Nothing. Oh, and a lot of "lik the bred"s. I've sometimes listened to spoken poetry where I didn't follow the words very well but enjoyed the rhythm.
I think Brienne Yudkowsky has written about poetry.
Wanted to share this concept of a metaquiz with this community.
The primary goal is that participants do poorly on the “other side” section. Underestimating the other side’s knowledge raises the question “maybe they’re not all stupid?”. Incorrectly stereotyping their beliefs raises the question “maybe they’re not all evil?”. As a secondary goal, if participants do poorly on the quiz itself, they may learn something about climate change. Any feedback on this idea? Links to related concepts?
The primary goal is that participants do poorly on the “other side” section.
You may want to re-formulate this sentence :-)
The obvious problem is that the "other side" is rarely uniform. You typically get a mix of smart and honest people (but with different values), people who are in there for power and money, the not-too-smart ones duped by propaganda, the edgelords who want attention (and/or the lulz), the social conformists, etc.
Some, but not all are stupid. Some, but not all, are evil.
Well, that link doesn't explain, since you start with these claims as axioms (that is, you assert them as self-evident, and I'm not quite willing to assume that). And I still don't know what the metric is by which you measure the goodness of the political shape.
As an aside, your quiz requires me to log into Google. Any particular reason for that?
Can you point out the axiomatic assumptions I'm making? I explain why thinking "Those that disagree with me must be stupid, evil, or both." is bad: "It prevents finding common ground and encourages wild policy swings as power is transferred from one uncompromising faction to the next. The same facts can generate different viewpoints, each deserving of a spot in the marketplace of ideas, even if we personally disagree with them."
The quiz requires login only because I don't want the same person answering the quiz multiple times. Google account isn't visible to me unless you leave your email at the end.
Basically I'm trying to point out that things which you take as self-evident (e.g. finding common ground is good, wild policy swings are bad) are not necessarily so and the whole situation is quite complicated in reality. Consider, for example, whether you want to find common ground with tankies or, say, suicide bombers. Or take Eastern Europe around 1990 -- were the "policy swings" too wild?
You're making claims which sound universal, but which look to me to have much more restricted applicability (say, in stable Western-style democracies with respect to large groups the views of which fall within the Overton window).
Also, one of the big issues is that a not-insignificant number of people are unable to understand more complicated theories and approaches. In crude terms, they are too dumb for that. What should they do?
As to the quiz, I expect that "too few people took it" is likely to be a bigger problem for you than "someone took it multiple times".
My analysis is more focused on the situation we have in the US today, with a still narrow (in the grand scheme) Overton window. I agree with you that in general there are failure modes, including the specific examples you bring up (the Soviet collapse circa 1990, tankies, etc.). I'll revise to make the claims sound less universal.
I agree that the "unable" / "too dumb" camp is problematic, but I think it's a relatively small fraction compared to the "unwilling" camp, which just has no real incentive to be informed.
And I've dropped the account requirement on the quiz since you're probably right. 100 data points at the moment, so pretty anecdotal but I'll start looking at the data soon.
the "unwilling" camp, which just has no real incentive to be informed
Ah, an excellent word -- "incentive".
I agree that there are large swathes of people who use "hurray us, boo them" rhetoric purely to signal virtue and allegiance to their tribe. The issue is that they do this precisely because they have appropriate incentives -- and providing them with additional information without changing the incentives is unlikely to do much.
In fact, stopping reciting the "our enemies are spawn of darkness" narrative is likely to be interpreted as a signal of disloyalty to the tribe with potentially dire social consequences.
After a quick Google - a 'to-do/doing/done' list made of sticky-notes seems like it'd be simple, inexpensive, and helpful. Unless someone comes up with a better suggestion by tomorrow, I expect I'm going to start giving this a try as soon as I hit the nearby dollar store. :)
The current setup is that the back of a dresser is facing my bed, with the corkboard on the back; do you know of any such screens that would be feasible to attach, in whatever manner? Or are you thinking more along the lines of grabbing an El Cheapo tablet, supported by a pile of pushpins?
The issue is size. A tablet might be too small for the purpose, though it has the big advantage of being "complete" out of the box. A computer monitor is going to be larger, but it's just a display; you will still need an actual computer for it. You might be able to use your smartphone as that computer, but depending on the particulars you could still need additional hardware.
The simplest way of attaching the screen would be plain-vanilla velcro. It's not going to be that heavy.
I think that before I invest myself too heavily in any particular hardware, I should try to find out more about what sorts of software exist for such passive wall displays. For example, I wouldn't mind something like the custom channel used at my local coffee shop, with my own pick of RSS feeds, weather sources, GCalendar items, and the like; but I don't know offhand of any piece of software, either for Android or Linux, that does that.
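For what it's worth, the feed-aggregation half of that is simple to sketch. Here's a minimal example (my own construction, not any existing dashboard app) that pulls item titles out of an RSS 2.0 document using Python's standard library; a real display would fetch each feed URL on a timer and cycle through the results:

```python
import xml.etree.ElementTree as ET

def rss_titles(rss_xml):
    """Extract the item titles from an RSS 2.0 document string."""
    root = ET.fromstring(rss_xml)
    # <item> elements live under <channel>; iter() finds them at any depth
    return [item.findtext("title", default="") for item in root.iter("item")]

# A hardcoded sample feed; in practice you'd download this with urllib
sample = """<rss version="2.0"><channel><title>Demo</title>
<item><title>Headline one</title></item>
<item><title>Headline two</title></item>
</channel></rss>"""

print(rss_titles(sample))  # ['Headline one', 'Headline two']
```

The same parsing loop would work for weather or calendar feeds that expose XML, which is why RSS-style aggregation is a natural base layer for a passive display.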
Got another customer who wanted a book for a child of less than 1 y.o. Are there any simple things I can tell them besides "their vision is just developing, come back later"? Because I have the feeling this one didn't quite believe me.
Dr. Seuss has nice pictures. So do travel almanacs. Yeah, the kid probably isn't going to get a whole lot out of them, but you can hold the kid and turn the pages and maybe read a bit at them while they chew on a corner.
Or perhaps something that seems cute to the parent, and still functions as a large shape for the child. For example, you could make a big dark-green circle on white background, and add some extra lines to make it a frog, while knowing that the child will only see the big green circle on white background.
We don't really have large enough, detail-less books in our shop. There's a cute series of books made from cloth, but we don't carry it either (I think I will try to change this). Really, there's nothing quite large enough (or maybe I am just wrong and it's alright? It seems not.)
While I was searching for the manifesto, I noticed a strange incongruence between the English and the Italian Wikipedia. While the latter source is very similar to the former, there is this strange sentence:
il suo documento scritto in 35000 parole La Società Industriale e il Suo Futuro (meglio noto come La Pillola Rossa, chiamato anche "Manifesto di Unabomber")
which translates roughly as "his 35,000-word document Industrial Society and Its Future (better known as The Red Pill, also called the 'Unabomber Manifesto')". Wait, what? The Red Pill? Since when? There's no trace of such a name in the English version. Any source on that? Is it plausible? Is it some kind of fucked-up joke?
Wikipedia is a wiki. Anyone can (and does) edit it. There are constant efforts to keep it "clean", but it's not unusual to find, basically, Easter eggs, graffiti, random nonsense, etc. buried in the otherwise reasonable text of some article.
int a = 0 // a = 0
int b = 0 // b = 0
int c = 0 // c = 0
a++ // a = 1
a++ // a = 2
b = a * a // b = 4
c = a << a // c = 8
c = b * c // c = 32
c = c + a // c = 34
b = b >> a // b = 1
c = c * c // c = 1156
c++ // c = 1157
c = c * a // c = 2314
The optimum is to minimize either the energy or the time required, in my book. Or to minimize algorithmic steps. It doesn't really matter which of those definitions you adopt; they are closely related.
It's like Kolmogorov complexity. Which programming language to use as the reference? It doesn't really matter. Just use the one I gave, or modify it in any sensible way. Then find a very good solution for 23142314 - or any other interesting number. They are all interesting.
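The "find a very good solution" part can itself be automated with a search. Here's a toy sketch (my own construction, using an assumed operation set of +, *, and <<, not the exact instruction language of the snippet above): it measures how many rounds of combining already-built values are needed before a target number becomes reachable from the constant 1, a crude stand-in for "shortest program length".

```python
def build_cost(target, max_steps=6):
    """Breadth-first search depth at which `target` first becomes
    buildable from 1, combining any two already-built values with
    +, *, or <<. Returns None if not found within max_steps."""
    reachable = {1}
    if target in reachable:
        return 0
    for depth in range(1, max_steps + 1):
        new = set()
        for x in reachable:
            for y in reachable:
                candidates = [x + y, x * y]
                if y < 32:  # keep shift amounts sane
                    candidates.append(x << y)
                # prune values far past the target to bound the search
                new.update(v for v in candidates if 0 < v <= 4 * target)
        reachable |= new
        if target in reachable:
            return depth
    return None

print(build_cost(2314))
```

Note this counts search rounds rather than individual instructions, so it's only a lower-bound-flavored proxy; actual program length also depends on how many registers the language gives you, which is exactly why the choice of reference language matters less than it seems.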