[stub] 100-Word Unpolished Insights Thread (3/10-???)
post by Bound_up · 2017-03-10T14:12:12.627Z · LW · GW · Legacy · 25 comments
Ever had a minor insight, but didn't feel confident or interested enough to write a top-quality LW post? Personally, the high standards of LW and my own uncertainty have kept me from ever bothering to present a number of ideas.
Here, you can post quick little (preferably under 100 words) insights, with the explicit understanding that the idea is unpolished and unvetted, and hey, maybe nobody will be interested, but it's worth a shot.
Voters and commenters are invited to be especially charitable: we already know these ideas are less-than-perfect in concept or presentation or ______, and we want people to be able to present them quickly and be understood, rather than having to craft them to be defensible.
If a crucial flaw is found in an insight, or if the idea is unoriginal, please point it out.
Alternatively, if enough interest is shown, the idea can be expanded into a Discussion post. You are invited to include in that post a disclaimer that, even if the idea is not of interest to everybody, some people have shown interest over in the Unpolished Insights Thread, and the post is targeted at readers like them.
Hopefully, this format lets us get the best of both worlds, easily filtering out unneeded content without anybody feeling punished, and calling the best ideas out into the light, without scaring anybody off with high standards.
(Post your ideas separately, so that they can be voted on and commented on separately.)
25 comments
Comments sorted by top scores.
comment by Bound_up · 2017-03-10T14:26:16.017Z · LW(p) · GW(p)
I had an idea for increasing average people's rationality. It has four qualities:
It doesn't seem/feel "rationalist" or "nerdy."
It can work without people understanding why it works.
It can be taught without understanding its purpose.
It can be perceived as being about politeness.
The idea: a high school class where people try to pass Intellectual Turing Tests. It's not POLITE/sophisticated to assert opinions if you can't show that you understand the people you're saying are wrong.
We already have a lot of error-detection ability when our minds criticize others' ideas; we just need to access that power for our own ideas.
Replies from: None, itaibn0, I_D_Sparse, Bound_up
↑ comment by itaibn0 · 2017-03-17T22:01:30.273Z · LW(p) · GW(p)
I'm not sure to what extent you want people to criticize ideas in this thread, and I'm going to test the waters. Give me feedback on how well this matches the norms you envision.
An immediate flaw comes to mind that any elaboration of this idea should respond to: changing the high school curriculum is very difficult. If you've acquired the social capital to change a high school's curriculum, you shouldn't spend it on such a small, marginal contribution; you could probably find something with a larger effect for the same social capital.
↑ comment by I_D_Sparse · 2017-03-10T19:43:43.965Z · LW(p) · GW(p)
This is an interesting idea, although I'm not sure what you mean by
It can work without people understanding why it works
Shouldn't the people learning it understand it? It doesn't really seem much like learning otherwise.
Replies from: Bound_up
comment by Bound_up · 2017-03-11T14:53:44.041Z · LW(p) · GW(p)
On the Value of Pretending
Actors don't break down the individual muscle movements that go into expression; musicians don't break down the physical properties of the notes or series of notes that produce expression.
They both simulate feeling to express it. They pretend to feel it. If we want to harness confidence, amiability, and energy, maybe there's some value in pretending and simulating (what would a "nice person" do?).
Cognitive Behavioral Therapy teaches that our self-talk strongly affects us, and counsels against "Oh, I suck"-style statements. Positive self-talk ("I can do this") may be worth practicing.
I'm not sure why, but this feels not irrational, yet highly not-"rational" (against the culture associated with "rationality"). That also intrigues me...
Replies from: gwillen
↑ comment by gwillen · 2017-03-11T23:45:36.578Z · LW(p) · GW(p)
In this vein, I have had some good results from the simple expedient of internally saying "I want to do this" instead of "I have to do this" for things that System 2 wants to do but System 1 feels reluctant about, i.e. akratic things. I have heard this reframing suggested before, but I feel like I get benefit from actually thinking the "I want" verbally.
comment by James_Miller · 2017-03-10T19:44:55.374Z · LW(p) · GW(p)
The leader of North Korea apparently used a VX nerve agent to kill his half-brother in a Malaysian airport, thus loudly signaling that he is an unhinged sociopath with WMDs. I think a first-strike attack on North Korea might be justified.
Replies from: madhatter, Lumifer
comment by dglukhov · 2017-03-10T21:40:47.033Z · LW(p) · GW(p)
Low-quality thought-vomiting, eh?
I'll try to keep it civil. I get the feeling the site has drifted from its founding goals and members in a way that stratifies its current readership: either pay into a training seminar through one of the institutions advertised above, or be left behind to bicker over minutiae in an under-informed fashion. That said, nobody can doubt the usefulness of personal study, though it is slow and unguided.
I'm suspicious of the current motives here and of the atmosphere this site provides. I guess it can't be helped, since MIRI and CFAR are at the mercy of needing revenue just like any other institution. So where does one draw the line between helpful guidance and malevolent exploitation?
Replies from: gwillen, None
↑ comment by gwillen · 2017-03-11T23:48:33.861Z · LW(p) · GW(p)
Can you please clarify whose motives you're talking about, and generally be a lot more specific with your criticisms? Websites don't have motives. CFAR and MIRI don't run this website although of course they have influence. (In point of fact I think it would be more realistic to say nobody runs this website, in the sense that it is largely in 'maintenance mode' and administrator changes/interventions tend to be very minimal and occasional.)
↑ comment by [deleted] · 2017-03-11T00:42:45.146Z · LW(p) · GW(p)
I think that what you say is true, although I'm unsure that the dichotomy you provide is correct.
Personally, I see great value in a Schelling point that tries to advance rationality. I don't think the current LW structure is optimal, and I also agree that there's not enough structure to help people who are learning ease into these ideas or to provide avenues of exploration.
I also don't think that CFAR/MIRI have been heavily using LW as a place for advertisement, outside of their fundraising goals, but I haven't been here long enough to really say. Feel free to correct me with more evidence.
Towards the end of improving materials on rationality, I've been thinking about what a collective attempt to provide a more practical sequel to the Sequences might look like. CFAR's curriculum feels like it still only captures a small swath of all of rationality space. I'm thinking something like a more systematic long-form attempt to teach skills, where we could source quick feedback from people on this site.
comment by Bound_up · 2017-03-11T13:45:10.666Z · LW(p) · GW(p)
A charity is a business that sells feeling good about yourself and the admiration of others as its products.
To make a lucrative product, don't ask "what needs need filling," ask "what would help people signal more effectively."
Replies from: fubarobfusco
↑ comment by fubarobfusco · 2017-03-11T19:59:13.638Z · LW(p) · GW(p)
Your claim seems to factor into two parts: "There exist charities that are just selling signaling", and "All charities are that kind of charity." The first part seems obviously true; the second seems equally obviously false.
Some things that I would expect from a charity that was just selling signaling:
- Trademarking or branding. It would need to make it easy for people to identify (and praise) its donors/customers, and resist imitators. (Example: the Komen breast-cancer folks, who have threatened lawsuits over other charities' use of the color pink and the word "cure".)
- Association with generic "admiration" traits, such as celebrity, athleticism, or attractiveness. (Example: the Komen breast-cancer folks again.)
- Absence of "weird" or costly traits that would correlate with honest interest in its area of concern. (For instance, a pure-signaling charity that was ostensibly about blindness might not bother to have a web site that was highly accessible to blind users.)
- In extreme cases, we would be hearing from ostensible beneficiaries of the charity telling us that it actually hurts, excludes, or frightens them. (Example: Autism Speaks.)
- Jealousy or competitiveness. It would try to exclude other charities from its area of concern. (A low-signaling charity doesn't care if it is responsible for fixing the thing; it just wants the thing fixed.)
comment by I_D_Sparse · 2017-03-11T09:17:08.064Z · LW(p) · GW(p)
Regarding instrumental rationality: I've been wondering for a while now if "world domination" (or "world optimization", as HJPEV prefers) is feasible. I haven't entirely figured out my values yet, but whatever they turn out to be, WD/WO sure would be handy for achieving them. But even if WD/WO is a ridiculously far-fetched dream, it would still be a very good idea to know one's approximate chances of success with various possible paths to achieving one's values. I have therefore come up with the "feasibility problem." Basically, a solution to the problem consists of an estimation of how much one can actually hope to influence the world, and to what extent one can actually fulfill one's values. I think it would be very wise to solve the feasibility problem before attempting to take over the world, or become the President, or lead a social revolution, or improve the rationality of the general populace, etc.
Solving the FP would seem to require a deep understanding of how the world operates (anthropomorphically speaking, if you get my drift; I'm talking about the hoomun world, not physics and chemistry).
I've even constructed a GPOATCBUBAAAA (general plan of action that can be used by any and all agents): first, define your utility function, and also learn how the world works (easier said than done). Once you've completed that, you can apply your knowledge to solve the FP, and then you can construct a plan to fulfill your utility function, and then put it into action.
This is probably a bit longer than 100 words, but I'm posting it here and not in the open thread because I have no idea if it's of any value whatsoever.
Replies from: Bound_up, Elo
↑ comment by Elo · 2017-03-11T09:23:11.326Z · LW(p) · GW(p)
Can you do me a favour and separate this into paragraphs (or fix the formatting)?
Thanks.
The LessWrong Slack has a channel called #world_domination.
Replies from: I_D_Sparse
↑ comment by I_D_Sparse · 2017-03-11T21:19:29.446Z · LW(p) · GW(p)
Fixed the formatting.
comment by sdr · 2017-03-14T16:34:10.030Z · LW(p) · GW(p)
FAI value-alignment research and cryonics are mutually inconsistent stances. Cryo resurrection will almost certainly happen via scanning and whole-brain emulation. An EM/upload with a subjective timeline sped up to 1000x will be indistinguishable from a UFAI. The incremental value-alignment results of today will be applied to your EM tomorrow.
For example, how would you feel if, with all your brilliant intellect and all your inner motivational spark, you were looped into a rat race against 10,000 copies of yourself, performing work for and grounded to a baseline, where if you don't win against your own selves, all your current thoughts, feelings, and emotions are permanently destroyed?
comment by Bound_up · 2017-03-14T00:49:12.289Z · LW(p) · GW(p)
The not-"rational" (read: not central to the rationalist concept cluster, not part of the culture of rationalists) but rational things we need to do.
The value of pretending and of self-talk I mention in another comment. The value of being nice is another thing not strongly associated with "rationalism," but which is, I think, rational to recognize.
There are others. Certain kinds of communication, for one. Why can't any "rationalists" talk? The best ones are so wrapped up in things that betray their nerd-culture association that they appeal only to other nerds; you can practically identify people who aren't "rationalists" by checking whether they sound nerdy. There's probably a place for sounding a lot more like Steve Harvey or a pastor or a politician, if there's any place for effectively communicating with people who aren't nerds.
There are other anti-rationalist-culture things we should probably look for and develop.