Open thread, June 5 - June 11, 2017
post by Elo · 2017-06-05T04:23:51.697Z · LW · GW · Legacy · 97 comments
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
97 comments
Comments sorted by top scores.
comment by Vaniver · 2017-06-08T00:04:07.553Z · LW(p) · GW(p)
Update on LW2.0: this week, Oliver and I are in Seattle at a CFAR workshop, so only Harmanas is working. This week is some final feature work and bugfixing, as well as user interviews. Next week we plan to start the closed beta. Rather than running for a predefined length of time, the plan is to run until we're happy with the user experience, and then do an open beta, which will probably have a defined length.
comment by cousin_it · 2017-06-09T22:58:37.995Z · LW(p) · GW(p)
I just realized that Paul Graham's "Make something people want" can be flipped into "Make people want something", which is a better description of many businesses that make money despite creating zero or negative value for society. For example, you can sell something for $1 that gives people $2 in immediate value but also sneakily gives them $3 worth of desire to buy otherwise useless stuff from you. Or you can advertise to create desire where it didn't exist, leading to negative value for everyone who saw your ad but didn't buy your product. Or you can give away your product for free and then charge for the antidote.
This seems like a big loophole in markets which has no market-based solution. It also helps explain why rich people and countries aren't proportionally happier than poor ones, if they are mostly paying to make manufactured pains go away. People's criteria for happiness are too easily raised by what they buy, see or think, but hardly anyone pushes back against that.
My previous thoughts on this topic: 1, 2, 3, 4. I feel like the ideas are coming together into something bigger, but I can't put my finger on it yet.
Replies from: Lumifer↑ comment by Lumifer · 2017-06-10T04:24:50.632Z · LW(p) · GW(p)
the ideas are coming together into something bigger
It's a wheel.
Replies from: cousin_it↑ comment by cousin_it · 2017-06-10T06:43:58.708Z · LW(p) · GW(p)
I feel like it's more general than that. For example, dreaming that an iPhone will make you happy is bad for you, but so is dreaming that becoming a great artist (or even joining an anti-consumerist revolution) will make you happy.
Replies from: ChristianKl, Lumifer↑ comment by ChristianKl · 2017-06-12T10:27:14.555Z · LW(p) · GW(p)
What exactly do you mean when you say that people dream of an iPhone making them happy? What do you think people expect an iPhone to do that it doesn't?
In general, products that don't deliver on expectations lead to customers complaining that the expectations aren't met. How often do you see such complaints from people who have bought an iPhone?
Replies from: cousin_it, entirelyuseless↑ comment by cousin_it · 2017-06-12T10:42:17.601Z · LW(p) · GW(p)
That's why I chose the word "dreams" instead of product expectations amenable to customer complaints and such. Watch this ad to see what I mean.
Replies from: entirelyuseless, ChristianKl↑ comment by entirelyuseless · 2017-06-13T16:19:29.984Z · LW(p) · GW(p)
Romance seems like an example of a "natural" Marlboro Country ad, implanted in people by nature with the intention of making more people, while falsely telling them this is happiness.
Replies from: cousin_it↑ comment by cousin_it · 2017-06-13T16:53:07.984Z · LW(p) · GW(p)
Yeah. Though as dreams go, romance isn't the worst. Many of its followers do end up happy, and many of the unhappy ones get over it. Compare with the dream of being a rockstar, where only a handful can ever succeed, and many of the failed ones never let go.
↑ comment by ChristianKl · 2017-06-12T17:52:33.115Z · LW(p) · GW(p)
I don't see any iPhone in that ad.
If you generally believe that the dream of being a ranger is bad for people, do you also judge Winnetou for giving people dreams they use to escape their daily lives but can never achieve?
Replies from: cousin_it↑ comment by cousin_it · 2017-06-12T18:15:01.027Z · LW(p) · GW(p)
Yeah, I think a lot of entertainment leads to escapism, which to me is partly a downside. When Roger Ebert said that video games can't be art, and someone told him that they provide much-needed escapism, this was his reply:
I do not have a need "all the time" to take myself away from the oppressive facts of my life, however oppressive they may be, in order to go somewhere where I have control. I need to stay here and take control.
↑ comment by entirelyuseless · 2017-06-12T13:17:30.606Z · LW(p) · GW(p)
I keep hoping that voice-activation features will be helpful. Up to now, they haven't been, at least for me; they just do not work consistently enough. Apparently they do for some people, and I expect that at some future time they will for me too, but so far they have consistently failed that expectation.
↑ comment by Lumifer · 2017-06-11T01:44:20.554Z · LW(p) · GW(p)
Dreaming is a fuzzy word -- are you saying that desires are bad for you? Or hopes? Or maybe expectations of good things?
Replies from: Dagon, cousin_it↑ comment by Dagon · 2017-06-12T21:56:45.574Z · LW(p) · GW(p)
I take it as "far-mode, unspecific dreams/hopes/expectations are problematic if the agent doesn't do the work to tie them to near-mode specifics".
Replies from: Lumifer↑ comment by Lumifer · 2017-06-13T01:58:13.339Z · LW(p) · GW(p)
Yeah, but there is that very important part of dreams/hopes/expectations providing the much-needed motivation for doing the work. Without them you are just stuck in the near mode, slowly crawling towards the nearest local maximum (e.g. turnips -- warning, the link is NSFW).
Replies from: cousin_it
comment by Liron · 2017-06-06T05:42:04.709Z · LW(p) · GW(p)
If you use online dating, I just launched a site called WittyThumbs to analyze and improve your conversations, in order to get better dates. Let me know what you think!
comment by Viliam · 2017-06-05T12:46:21.269Z · LW(p) · GW(p)
I am reading the book Bring Up Genius (mentioned at SSC recently), and I am confused. I am still in the first part of the book, but it seems like the author alternates between "every healthy child can become a genius if educated properly" and describing research on and observations of high-IQ children, without ever acknowledging the difference between "every" and "high-IQ". I am trying to write a summary for LW, but I fail to make a coherent explanation.
If I try hard, I can construct a consistent hypothesis, something like: the behavior of high-IQ children gives us hints about the direction we should try to move all children; and the spectacular failures of the educational system with regard to high-IQ children are evidence that education may be failing average children in a similar way, only less visibly -- but here I suspect I am simply making up my own stuff instead of explaining the author's view. I suspect the author may have believed that IQ is largely determined by nurture, but he doesn't directly affirm or deny this; it's just a position I can imagine that would make the rest of the book sound coherent. (But it is obviously wrong.) A less charitable explanation is that the author simply didn't see high IQ as a privilege, because within his family it was the norm. But that would make his lessons less universal. Although still useful for e.g. the folks at Less Wrong.
Anyone else reading the book? Unfortunately, I can't find an English version; I am currently reading Eduku Geniulon in Esperanto, uhm, here.
comment by Lumifer · 2017-06-07T18:10:03.038Z · LW(p) · GW(p)
Heh.
We like to think that we’re hyper-rational, but when we have to choose a technology, we end up in a kind of frenzy — bouncing from one person’s ... comment to another’s blog post until, in a stupor, we float helplessly toward the brightest light and lay prone in front of it, oblivious to what we were looking for in the first place.
(source)
Replies from: MrMind
comment by Vaniver · 2017-06-10T19:42:43.950Z · LW(p) · GW(p)
I've moved a post about an ongoing legal issue to its author's drafts. They can return it to public discussion when the trial concludes.
Replies from: Zack_M_Davis, Elo↑ comment by Zack_M_Davis · 2017-06-11T19:58:20.627Z · LW(p) · GW(p)
What specific bad things would you expect to happen if the post was left up, with what probabilities? (I'm aware of the standard practice of not discussing ongoing legal cases, but have my doubts about whether allowing the legal system to operate under conditions of secrecy actually makes things better on net.)
Replies from: Vaniver, bogus, ChristianKl↑ comment by Vaniver · 2017-06-12T18:26:40.433Z · LW(p) · GW(p)
What specific bad things would you expect to happen if the post was left up, with what probabilities? (I'm aware of the [standard practice] of not discussing ongoing legal cases, but have my doubts about whether allowing the legal system to operate under conditions of secrecy actually makes things better on net.)
I am following standard practice. I have only weakly considered the relevant norm, and agree that it's not pure good.
↑ comment by bogus · 2017-06-12T11:09:34.829Z · LW(p) · GW(p)
If the "legal issue" is what I think it is, then having a post about it here at LW is just worthless gossiping. Just because it might involve the real-world "rationality community" in some tangential way doesn't mean it has a place on this site. Many people here don't even care about what MIRI or CFAR might be working on!
↑ comment by ChristianKl · 2017-06-12T08:57:00.555Z · LW(p) · GW(p)
Actually, talking in public about the specifics of what Vaniver expects might produce harm similar to letting the post stay public.
comment by borismus · 2017-06-07T16:36:40.282Z · LW(p) · GW(p)
Wanted to share this concept of a metaquiz with this community.
The primary goal is that participants do poorly on the “other side” section. Underestimating the other side’s knowledge raises the question “maybe they’re not all stupid?”. Incorrectly stereotyping their beliefs raises the question “maybe they’re not all evil?”. As a secondary goal, if participants do poorly on the quiz itself, they may learn something about climate change. Any feedback on this idea? Links to related concepts?
Here’s an example metaquiz on climate change: https://goo.gl/forms/ZqNQs3y1L1kpMPtF2
Replies from: Lumifer↑ comment by Lumifer · 2017-06-07T16:56:15.941Z · LW(p) · GW(p)
The primary goal is that participants do poorly on the “other side” section.
You may want to re-formulate this sentence :-)
The obvious problem is that the "other side" is rarely uniform. You typically get a mix of smart and honest people (but with different values), people who are in there for power and money, the not-too-smart ones duped by propaganda, the edgelords who want attention (and/or the lulz), the social conformists, etc.
Some, but not all, are stupid. Some, but not all, are evil.
Replies from: borismus↑ comment by borismus · 2017-06-07T17:20:17.722Z · LW(p) · GW(p)
The nuance you articulate in the last sentence is kind of the point I'm trying to make. I think many on the fringes would disagree with you.
Further, if such metaquizzes can suggest that in this case "some" is more like "very few", and not "actually quite a lot", I think we'd be in better political shape!
Replies from: Lumifer↑ comment by Lumifer · 2017-06-07T17:46:02.277Z · LW(p) · GW(p)
I think many on the fringes would disagree with you.
Clearly they must be both stupid and evil :-D
I think we'd be in better political shape
I see no reason to believe so. Political adversity is NOT driven by misunderstandings.
Replies from: borismus↑ comment by borismus · 2017-06-07T20:03:48.137Z · LW(p) · GW(p)
Interesting perspective. So you think that both parties have an accurate understanding of one another's viewpoints? Can you provide any evidence for that?
Replies from: Lumifer↑ comment by Lumifer · 2017-06-07T20:12:02.235Z · LW(p) · GW(p)
I didn't say they have. I said that if they were to acquire such an accurate understanding, political conflict would not cease.
Replies from: borismus↑ comment by borismus · 2017-06-07T22:09:43.517Z · LW(p) · GW(p)
Ceasing political conflict is a ridiculously ambitious, unrealistic, maybe even undesirable goal. I'm talking about a slight decrease here.
Replies from: Lumifer↑ comment by Lumifer · 2017-06-08T01:05:45.607Z · LW(p) · GW(p)
Right, you claimed that "we'd be in better political shape". Any evidence to back up that belief? Oh, and which political shape is "better"?
Replies from: borismus↑ comment by borismus · 2017-06-08T02:10:31.520Z · LW(p) · GW(p)
I attempt to explain in this post: http://smus.com/viewpoint-tolerance-through-curiosity/. What do you think?
Replies from: Lumifer↑ comment by Lumifer · 2017-06-08T04:34:48.502Z · LW(p) · GW(p)
Well, that link doesn't explain it, since you start with these claims as axioms (that is, you assert them as self-evident, and I'm not quite willing to assume that). And I still don't know what the metric is by which you measure the goodness of the political shape.
As an aside, your quiz requires me to log into Google. Any particular reason for that?
Replies from: borismus↑ comment by borismus · 2017-06-08T17:47:54.001Z · LW(p) · GW(p)
Can you point out the axiomatic assumptions I'm making? I explain why thinking "Those that disagree with me must be stupid, evil, or both." is bad: "It prevents finding common ground and encourages wild policy swings as power is transferred from one uncompromising faction to the next. The same facts can generate different viewpoints, each deserving of a spot in the marketplace of ideas, even if we personally disagree with them."
The quiz requires login only because I don't want the same person answering the quiz multiple times. Your Google account isn't visible to me unless you leave your email at the end.
Replies from: Lumifer↑ comment by Lumifer · 2017-06-08T18:34:29.266Z · LW(p) · GW(p)
Basically I'm trying to point out that things which you take as self-evident (e.g. finding common ground is good, wild policy swings are bad) are not necessarily so and the whole situation is quite complicated in reality. Consider, for example, whether you want to find common ground with tankies or, say, suicide bombers. Or take Eastern Europe around 1990 -- were the "policy swings" too wild?
You're making claims which sound universal, but which look to me to have much more restricted applicability (say, in stable Western-style democracies with respect to large groups the views of which fall within the Overton window).
Also, one of the big issues is that a not-insignificant number of people are unable to understand more complicated theories and approaches. In crude terms, they are too dumb for that. What should they do?
As to the quiz, I expect that "too few people took it" is likely to be a bigger problem for you than "someone took it multiple times".
Replies from: borismus↑ comment by borismus · 2017-06-09T15:25:46.034Z · LW(p) · GW(p)
My analysis is more focused on the situation we have in the US today, with a still narrow (in the grand scheme) Overton window. I agree with you that in general there are failure modes, as in the specific examples you bring up (the Soviet collapse in '90, tankies, etc.). I'll revise to make the claims sound less universal.
I agree that the "unable" / "too dumb" camp is problematic, but I think it's a relatively small fraction compared to the "unwilling" camp, which just has no real incentive to be informed.
And I've dropped the account requirement on the quiz since you're probably right. 100 data points at the moment, so pretty anecdotal but I'll start looking at the data soon.
Replies from: Lumifer↑ comment by Lumifer · 2017-06-09T15:46:31.138Z · LW(p) · GW(p)
the "unwilling" camp, which just has no real incentive to be informed
Ah, an excellent word -- "incentive".
I agree that there are large swathes of people who use "hurray us, boo them" rhetoric purely to signal virtue and allegiance to their tribe. The issue is that they do this precisely because they have appropriate incentives -- and providing them with additional information without changing the incentives is unlikely to do much.
In fact, stopping reciting the "our enemies are spawn of darkness" narrative is likely to be interpreted as a signal of disloyalty to the tribe with potentially dire social consequences.
By the way, are you familiar with the Ideological Turing Test? It's a related idea.
comment by sad_dolphin · 2017-06-06T13:35:32.963Z · LW(p) · GW(p)
I am considering ending my life because of fears related to AI risk. I am posting here because I want other people to review my reasoning process and help ensure I make the right decision.
First, this is not an emergency situation. I do not currently intend to commit suicide, nor have I made any plan for doing so. No matter what I decide, I will wait several years to be sure of my preference. I am not at all an impulsive person, and I know that ASI is very unlikely to be invented in less than a few decades.
I am not sure if it would be appropriate to talk about this here, and I prefer private conversations anyway, so the purpose of this post is to find people willing to talk with me through PMs. To summarize my issue: I only desire to live because of the possibility of utopia, but I have recently realized that ASI-provided immortal life is significantly likely to be bad rather than good. If you are very familiar with the topics of AI risk, mind uploading, and utilitarianism, please consider sending me a message with a brief explanation of your beliefs and your intent to help me. I especially urge you to contact me if you already have similar fears of AI, even if you are a lurker and are not sure if you should. Because of the sensitive nature of this topic, I may not respond unless you provide an appropriately genuine introduction and/or have a legitimate posting history.
Please do not reply/PM if you just want to tell me to call a suicide prevention hotline, tell me the standard objections to suicide, or give me depression treatment advice. I might take a long time to respond to PMs, especially if several people end up contacting me. If nobody contacts me I will repost this in the next discussion thread or on another website.
Edit: The word limit on LW messages is problematic, so please email me at sad_dolphin@protonmail.com instead.
Replies from: Viliam, Mitchell_Porter, MrMind↑ comment by Viliam · 2017-06-07T21:27:45.849Z · LW(p) · GW(p)
WTF is this? Please take a step back, and look at what you did here.
Literally your first words on this website are about suicide. Then you say no suicide, and then you explain in detail how people are not supposed to talk about your possible suicide. Half of your total contribution on this website is about your suicide-not-suicide. Thanks; now everyone can understand they are not supposed to think about the pink elephant in the room. So... why have you mentioned it in the first place? Three times in a row, using a bold font once, just to be sure. Seems like you actually want people to think about your possible suicide, but also to feel guilty if they mention it. Because the same comment, without this mind game, could have been written like this:
I have recently realized that ASI-provided immortal life is significantly likely to be bad rather than good. If you are very familiar with the topics of AI risk, mind uploading, and utilitarianism, I am interested in your opinions about this topic.
Much less drama, right?
Next, you provide zero information about yourself. You are a stranger here, and you use an anonymized e-mail. And I guess we will not learn more about you here, because you prefer private conversations anyway. However, you "urge" people to contact you, and to provide an "appropriately genuine introduction", a brief explanation of their beliefs, and their intent to help you. But they are not supposed to mention your suicide-not-suicide, right? But they are supposed to want to help you. But they are not allowed to suggest seeking expert help. And they are supposed to tell you things about themselves, without knowing anything about you. And this all is supposed to happen off-site, without any observers, inter alia because the word limit on LW messages is problematic. Right. How weird that no one else has realized yet how much this problematic word limit prevents us from debating AI-related topics here.
More red flags than in China on Mao's birthday.
I don't think you are at risk of suicide. Instead, I think that the people who contact you are at serious risk of being emotionally exploited (and reminded of your suicide-not-suicide, and their intent to help). Something like: "I told you that I am ready to die unless you convince me not to; and you promised you would help me; and you know that I will never seek expert help; and you don't know whether anyone else talks to me; so... if you stop interacting with me, you might be responsible for my death; is that really okay for you as a utilitarian?"
If anyone wants to play this game, go ahead. I have already seen my share of "suicidal" people giving others detailed instructions on how to interact with them, and unsurprisingly, decades later all of them are still alive; and the people who interacted with them regret having had that experience.
Replies from: Zack_M_Davis, sad_dolphin, Elo↑ comment by Zack_M_Davis · 2017-06-08T03:54:39.643Z · LW(p) · GW(p)
I corresponded with sad_dolphin. It added a little bit of gloom to my day, but I don't regret doing it: having suffered from similar psychological problems in the past, I want to be there with my hard-won expertise for people working through the same questions. I agree that most people who talk about suicide in such a manner are unlikely to go through with it, but that doesn't mean they're not being subjectively sincere. I'd rather such cries for help not be disincentivized here (as you seem to be trying to do); I'd rather people be able to seek and receive support from people who actually understand their ideas than be callously foisted off onto alleged "experts" who don't understand.
↑ comment by sad_dolphin · 2017-06-08T11:28:24.158Z · LW(p) · GW(p)
I am not sure how to even respond to this. I do not know what drives you to hatefully twist my words, depicting my cry for help as some kind of contrived attempt at manipulation, but you are obviously not acting with anything close to an altruistic intent.
Yes, I am entirely serious about this. Far more than you know. Perhaps if you had contacted me to have an intelligent discussion, instead of directly proceeding to accuse me with many critical generalizations, you would have realized that.
I have had several people message me already, and we are currently having civil discussions about potential future scenarios. I am certain they would all attest that they are not being 'emotionally exploited', as you seem to think is my goal. I publicly mentioned suicide because genuine consideration of the possibility was the entire point of the post, and I (correctly, for the most part) assumed that this community was mature enough to handle it without any drama.
You clearly have zero experience dealing with suicidal individuals, and would do well to stay away from this discussion. I had a hard enough time working up the courage to make that post, and I really do not want any drama from this. I hope you will do the mature thing and just leave me alone.
Replies from: Viliam, Lumifer↑ comment by Viliam · 2017-06-08T15:15:22.664Z · LW(p) · GW(p)
The mature way to handle suicidal people is to call professional help, as soon as possible. If the suicidal thinking is caused by some kind of hormonal imbalance -- which the person will report as "I have logically concluded that it is better for me to die", because that is how it feels from inside -- you cannot fix the hormonal imbalance by a clever argument; that would be magical thinking. Most likely, you will talk to the person until their hormonal spike passes, then the person will say "uhm, what you said makes a lot of sense, I already feel better, thanks!", and the next day you will find them hanging from the noose in their room, because another hormonal spike hit them later in the evening, and they "logically concluded" that life actually is meaningless and there is no hope and no reason to delay the inevitable, so they wouldn't even call you or wait until the morning, because that also would be pointless.
(Been there, failed to do the right thing, lost a friend.)
Sure, this seems like an unfalsifiable hypothesis "you believe it is not caused by hormones because that belief is caused by hormones". But that's exactly the reason to seek professional help instead of debating it; to measure your actual level of hormones, and if necessary, to fix it. Body and mind are connected more than most people admit.
That's all from my side. If you are sincere, I wish you luck. Any meaningful help I could offer is exactly what you refuse, so I have nothing more to add.
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2017-06-08T17:54:05.301Z · LW(p) · GW(p)
The mature way to handle suicidal people is to call professional help, as soon as possible.
It's worth noting that this creates an incentive to never talk about your problems.
My advice for people who value not being kidnapped and forcibly drugged by unaccountable authority figures who won't listen to reason is to never voluntarily talk to psychiatrists, for the same reason you should never talk to cops.
Replies from: Viliam↑ comment by Viliam · 2017-06-08T18:16:24.182Z · LW(p) · GW(p)
It would be great to have a service where you get your blood sample taken and tested anonymously, and then anonymously receive pills to fix the problem. But I guess most suicidal people would (1) refuse to use this service anyway, either because of some principle, or because they would "logically" conclude it is useless and cannot possibly help; and (2) even if a friend would push them to do so, at some moment they would find a reason to stop taking the pills, and when the effect of the pills wears off, conclude "logically" that the life is not worth living.
↑ comment by Mitchell_Porter · 2017-06-08T19:10:19.699Z · LW(p) · GW(p)
If ASI-provided immortal life were possible, you would already be living it.
... because if you're somewhere in an infinite sequence, you're more likely to be in the middle than at the beginning.
↑ comment by MrMind · 2017-06-08T08:29:11.747Z · LW(p) · GW(p)
As an aside, against the most horrific version of UFAI, even suicide won't avoid dystopia. Heh.
Replies from: cousin_it↑ comment by cousin_it · 2017-06-08T22:35:58.597Z · LW(p) · GW(p)
I've talked to some unstable folks who were really upset by ideas like AI blackmail, rescue sims, etc. Did my best to help them, which sometimes worked and sometimes didn't. If such ideas didn't exist, I suspect these folks would latch onto something more accessible with similar nightmare potential, like the Christian hell or the multiverse or simply the amount of suffering in our world. Mostly I agree with Viliam that fixing the mood with chemicals (or mindfulness, workouts, sunshine, etc) is a better idea than trying to reason through it.
comment by MaryCh · 2017-06-09T09:05:48.502Z · LW(p) · GW(p)
Yvain once wrote a cute (but, to my mind, rather pointless) post about "rational poetry" or some such; but do rationalists even like poetry as a form of expression? Empirically?
[pollid:1199]
If you want to say something in more detail, please leave a comment.
Replies from: btrettel, philh, Strangeattractor↑ comment by btrettel · 2017-06-09T16:11:27.312Z · LW(p) · GW(p)
Poetry, along with some other art forms, always struck me as inherently uninteresting to the point where I find it hard to believe anyone actually enjoys it. I see some people who are obviously moved by poetry, so clearly I'm just at one end of the spectrum. To each their own.
Replies from: MaryCh↑ comment by MaryCh · 2017-06-09T16:23:00.400Z · LW(p) · GW(p)
I only rarely find visual art interesting or moving. I can be loads more interested by a description of a picture, but seldom to the same extent as by a piece of poetry. One co-worker (boss, actually) of mine said she just did not get poetry, and I tried to see other differences in how we tick - I think she's more self-assured and appreciative of data drawn in tables, but that's all. Sometimes, I really wonder if aesthetics are partly genetic...
↑ comment by philh · 2017-06-09T09:53:28.902Z · LW(p) · GW(p)
I wouldn't say I "like poetry" as such, but there are certainly poems I like; two that come to mind are If and Absolutely Nothing. Oh, and a lot of "lik the bred"s. I've sometimes listened to spoken poetry where I didn't follow the words very well but enjoyed the rhythm.
I think Brienne Yudkowsky has written about poetry.
↑ comment by Strangeattractor · 2017-06-12T07:08:48.781Z · LW(p) · GW(p)
I like some poetry. Often in the form of song lyrics, or Shakespeare's plays.
comment by DataPacRat · 2017-06-08T18:44:58.878Z · LW(p) · GW(p)
Due to Life, I now have a 2x3-foot corkboard just above the foot of my bed. What should I pin to it?
Replies from: Elo, Lumifer↑ comment by Elo · 2017-06-08T20:28:11.469Z · LW(p) · GW(p)
Kanban board
Replies from: DataPacRat↑ comment by DataPacRat · 2017-06-08T21:07:28.268Z · LW(p) · GW(p)
After a quick Google - a 'to-do/doing/done' list made of sticky-notes seems like it'd be simple, inexpensive, and helpful. Unless someone comes up with a better suggestion by tomorrow, I expect I'm going to start giving this a try as soon as I hit the nearby dollar store. :)
Replies from: Elo↑ comment by Lumifer · 2017-06-08T18:53:33.341Z · LW(p) · GW(p)
A computer screen.
Replies from: DataPacRat↑ comment by DataPacRat · 2017-06-08T19:36:07.041Z · LW(p) · GW(p)
An interesting thought.
The current setup is that the back of a dresser is facing my bed, with the corkboard on the back; do you know of any such screens that would be feasible to attach, in whatever manner? Or are you thinking more along the lines of grabbing an El Cheapo tablet, supported by a pile of pushpins?
Replies from: Lumifer↑ comment by Lumifer · 2017-06-08T20:44:10.901Z · LW(p) · GW(p)
The issue is size. A tablet might be too small for the purpose, though it has the big advantage of being "complete" out of the box. A computer monitor is going to be larger, but it's just a display; you will still need an actual computer for it. You might be able to use your smartphone as that computer, but depending on the particulars you could still need additional hardware.
The simplest way of attaching the screen would be plain-vanilla velcro. It's not going to be that heavy.
Replies from: DataPacRat↑ comment by DataPacRat · 2017-06-08T21:13:55.780Z · LW(p) · GW(p)
I think that before I invest myself too heavily in any particular hardware, I should try to find out more about what sorts of software exist for such passive wall displays. For example, I wouldn't mind something like the custom channel used at my local coffee shop, with my own pick of RSS feeds, weather sources, Google Calendar items, and the like; but I don't know offhand of any piece of software, either for Android or Linux, that does that.
Replies from: Elo
comment by MaryCh · 2017-06-07T17:10:38.750Z · LW(p) · GW(p)
Got another customer who wanted a book for a child of less than 1 y.o. Are there any simple things I can tell them besides "their vision is just developing, come back later"? Because I have the feeling this one didn't quite believe me.
Replies from: Screwtape, Viliam↑ comment by Viliam · 2017-06-07T21:36:36.711Z · LW(p) · GW(p)
Basic shapes, large?
Or perhaps something that seems cute to the parent, and still functions as a large shape for the child. For example, you could make a big dark-green circle on white background, and add some extra lines to make it a frog, while knowing that the child will only see the big green circle on white background.
Replies from: MaryCh↑ comment by MaryCh · 2017-06-08T05:07:36.881Z · LW(p) · GW(p)
We don't really have large enough, detail-less books in our shop. There's a cute series of books made from cloth, but we don't have it either (I think I will try to change this). But really, there's nothing quite large enough (or maybe I am just wrong and it's alright? Seems not so).
Replies from: ChristianKl↑ comment by ChristianKl · 2017-06-12T18:04:31.480Z · LW(p) · GW(p)
If that's really the case that there's no existing book for this use-case, this looks like a market opportunity.
comment by ImmortalRationalist · 2017-06-07T00:47:54.421Z · LW(p) · GW(p)
Has anyone here read Industrial Society And Its Future (the Unabomber manifesto), and if so, what are your thoughts on it?
Replies from: MrMind↑ comment by MrMind · 2017-06-08T08:25:36.618Z · LW(p) · GW(p)
While I was searching for the manifesto, I noticed a strange incongruence between the English and the Italian Wikipedia. While the latter source is very similar to the former, there is this strange sentence:
il suo documento scritto in 35000 parole La Società Industriale e il Suo Futuro (meglio noto come La Pillola Rossa, chiamato anche "Manifesto di Unabomber")
which translates roughly as "his 35000-word document Industrial Society and Its Future (better known as The Red Pill, also called the "Unabomber Manifesto")".
Wait, what? The Red Pill? Since when?
There's no trace of such name in the English version. Any source on that? Is it plausible? Is it some kind of fucked-up joke?
↑ comment by Lumifer · 2017-06-08T15:13:04.611Z · LW(p) · GW(p)
Wikipedia is a wiki. Anyone can (and does) edit it. There are constant efforts to keep it "clean", but it's not unusual to find, basically, Easter eggs, graffiti, random nonsense, etc. buried in the otherwise reasonable text of some article.
comment by Thomas · 2017-06-05T08:12:41.151Z · LW(p) · GW(p)
Here is a new problem:
https://protokol2020.wordpress.com/2017/06/05/create-2314/
Replies from: ZankerH, ZankerH↑ comment by ZankerH · 2017-06-05T13:04:37.035Z · LW(p) · GW(p)
Preliminary solution based on random search
MakeIntVar A // A = 0
Inc A        // A = 1
Shl A, 5     // A = 32
Inc A        // A = 33
Inc A        // A = 34
A=A*A        // A = 1156
Inc A        // A = 1157
Shl A, 1     // A = 2314
I've hit on a bunch of similar solutions, but 2 * (1 + 34^2) seems to be the common thread.
↑ comment by Lumifer · 2017-06-05T14:47:57.208Z · LW(p) · GW(p)
Let's rewrite this in something C-like:
int a // a = 0
int b // b = 0
int c // c = 0
a++ // a = 1
a++ // a = 2
b = a * a // b = 4
c = a << a // c = 8
c = b * c // c = 32
c = c + a // c = 34
b = b >> a // b = 1
c = c * c // c = 1156
c++ // c = 1157
c = c * a // c = 2314
13 lines.
Replies from: Thomas, Thomas↑ comment by Thomas · 2017-06-06T06:08:31.414Z · LW(p) · GW(p)
int a   //a=0
int b   //b=0
inc a   //a=1
inc a   //a=2
shl a,a //a=8
b=a*a   //b=64
inc a   //a=9
b=b+b   //b=128
b=b+a   //b=137
a=a*b   //a=1233
a=a*b   //a=168921
inc a   //a=168922
a=b*a   //a=23142314
Replies from: Lumifer↑ comment by Lumifer · 2017-06-06T17:31:46.811Z · LW(p) · GW(p)
Yep. Effectively you're just writing code in a very restricted subset of assembly language.
A more interesting exercise would be to write a program which would output such code, with certain (I suspect, limited) guarantees of optimality.
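For small targets, a breadth-first search over machine states gives exactly that guarantee. Here is a minimal Python sketch, assuming the rules as I understand them from this thread (MakeIntVar creates a new variable holding 0; Inc; Shl X,Y meaning X = X << Y; and X=Y+Z / X=Y*Z between variables); the two-variable limit, the value cap, and the shift bound are my own pruning assumptions, not part of the puzzle:

from collections import deque

def upd(state, i, val):
    # return a copy of the state tuple with variable i set to val
    return state[:i] + (val,) + state[i+1:]

def shortest_program(target, max_vars=2):
    cap = max(target * target, 1 << 20)   # heuristic bound on intermediate values
    name = lambda i: chr(ord("A") + i)
    parent = {(): None}                   # state tuple -> (previous state, line)
    queue = deque([()])                   # start with no variables at all
    while queue:
        state = queue.popleft()
        if target in state:               # done: walk parents back to the start
            lines = []
            while parent[state] is not None:
                state, line = parent[state]
                lines.append(line)
            return lines[::-1]
        succs = []
        if len(state) < max_vars:         # MakeIntVar X   (new variable, X = 0)
            succs.append((state + (0,), "MakeIntVar " + name(len(state))))
        for i, v in enumerate(state):
            succs.append((upd(state, i, v + 1), "Inc " + name(i)))   # Inc X
            for j, w in enumerate(state):
                if w < 64:                # Shl X,Y   (X = X << Y), bounded shift
                    succs.append((upd(state, i, v << w),
                                  "Shl %s,%s" % (name(i), name(j))))
        for d in range(len(state)):       # X = Y + Z  and  X = Y * Z
            for i, v in enumerate(state):
                for j, w in enumerate(state):
                    for val, op in ((v + w, "+"), (v * w, "*")):
                        succs.append((upd(state, d, val),
                                      "%s=%s%s%s" % (name(d), name(i), op, name(j))))
        for nxt, line in succs:
            if max(nxt) <= cap and nxt not in parent:
                parent[nxt] = (state, line)  # first visit = shortest path to it
                queue.append(nxt)
    return None

for line in shortest_program(2314):
    print(line)

Since every instruction costs one line, the first state containing the target that BFS reaches is provably minimal under these assumptions. That is feasible for targets like 2314; for something like 23142314 the deduplicated state space explodes, and you would need stronger pruning (iterative deepening, meet-in-the-middle) to keep the same guarantee.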
Replies from: Thomas↑ comment by Thomas · 2017-06-06T19:17:16.083Z · LW(p) · GW(p)
Say a number (below 1 billion), I'll give you (optimal) code.
Replies from: Lumifer↑ comment by Thomas · 2017-06-05T13:13:42.974Z · LW(p) · GW(p)
You can't do
Shl A, 5
You must first create 5 in a variable, say B.
Replies from: ZankerH↑ comment by ZankerH · 2017-06-05T13:20:58.677Z · LW(p) · GW(p)
Well, that does complicate things quite a bit. I threw those lines out of my algorithm generator and the frequency of valid programs generated dropped by ~4 orders of magnitude.
Replies from: Thomas↑ comment by Thomas · 2017-06-05T13:28:39.766Z · LW(p) · GW(p)
You can't even shift by 1. You have to create 1 first, out of zero. Just like God.
Replies from: ZankerH↑ comment by ZankerH · 2017-06-05T14:21:54.189Z · LW(p) · GW(p)
In which case, best I can do is 10 lines
MakeIntVar A
Inc A
Inc A
A=A+A
A=A*A
Inc A
A=A+A
A=A*A
Inc A
A=A+A
Replies from: Thomas↑ comment by Thomas · 2017-06-05T15:24:57.596Z · LW(p) · GW(p)
Good enough, congratulations!
The next week's question might be: how to optimally produce an arbitrary large number out of zero. For example, 15 lines are enough to produce 23142314. But is this the minimum?
Replies from: ZankerH, Lumifer, Thomas↑ comment by ZankerH · 2017-06-05T17:46:32.654Z · LW(p) · GW(p)
Define "optimal". Optimizing for the utility function of min(my effort), I could misuse more company resources to run random search on.
Replies from: Thomas↑ comment by Thomas · 2017-06-05T17:57:49.230Z · LW(p) · GW(p)
The optimal is to either minimize the energy or the time required, in my book. Or to minimize algorithmic steps. It doesn't really matter which one of those definitions you adopt; they are closely related.
It's like Kolmogorov complexity. Which programming language to use as the reference? Doesn't really matter. Just use the one I gave, or modify it in any sensible way. Then find a very good solution for 23142314 - or any other interesting number. They are all interesting.