Open Thread - January 2018
post by ChristianKl · 2018-01-03T01:06:48.371Z · LW · GW · 30 comments
Given that the last Open Thread that was intended to be weekly didn't get much traffic, let's try a monthly version.
comment by ryjm · 2018-01-05T05:52:05.250Z · LW(p) · GW(p)
I'm feeling nostalgic.
Is there any interest in having a monthly thread where we re-post links to old posts/comments from LW? Possibly scoped to that month in previous years? I.e., each comment would look like
(2013) link
brief description / thoughts
or something.
It's pretty easy to go back and look through some of the older, more popular posts - but I think there were many open thread comments or frontpage posts not by Yvain / Eliezer that are starting to slip through the cracks of time. Would be nice to see what we all remember.
↑ comment by Viliam · 2018-01-13T20:22:10.902Z · LW(p) · GW(p)
I think it would be great! Also, a lot of work. I wonder what would be the optimal number of links per post. Probably more than a dozen, but less than one hundred.
It would be even greater if the links in the same post had something in common, for example "all historical highly upvoted LW posts about decision theory". But that would be even more work. I think it would be nice to have something like this in the wiki, but "do it as an article first, wikify later" seems like a good strategy; it would also allow debate by focusing everyone's attention on the topic at the same time.
comment by ChristianKl · 2018-01-03T01:09:28.688Z · LW(p) · GW(p)
I'm remembering a post about unconscious problem solving in the LW diaspora. According to it, people often get good results if they look at a complex problem, do something else for a few hours, and then come back to the problem. Sometimes this was even supposed to outperform working on the problem consciously the whole time.
Unfortunately, I don't seem to find the right search terms to find the post. Can someone point me in the right direction?
↑ comment by jamii · 2018-01-03T10:28:44.456Z · LW(p) · GW(p)
I didn't see the post itself, but it sounds like Unconscious Thought Theory. The experimental evidence is pretty weak, and imo the theory as it stands is just too poorly specified to really test experimentally.
There is some evidence that offline processing matters for e.g. motor learning or statistical learning. I haven't looked in enough detail to know whether to trust it or not.
↑ comment by ChristianKl · 2018-01-03T12:01:10.972Z · LW(p) · GW(p)
Thanks, this seems to be the term I'm looking for.
As far as I can see from Wikipedia, the core criticism questions the claim that Unconscious Thought outperforms Conscious Thought.
Even if that isn't true, Unconscious Thought could still play a major role in our thinking.
↑ comment by Pattern · 2020-07-03T03:51:50.130Z · LW(p) · GW(p)
It could just be the effects of taking a break; i.e., the benefits of not consciously thinking about the problem, instead of a benefit stemming from unconscious thought.
↑ comment by ChristianKl · 2020-07-03T07:12:13.834Z · LW(p) · GW(p)
Could you expand on what you mean by benefits coming from not consciously thinking about a problem, and how you think those benefits would accrue in a way that presupposes they are not created by another process being able to work in the absence of conscious thinking?
↑ comment by Pattern · 2020-07-03T15:15:37.164Z · LW(p) · GW(p)
Benefits:
Generally: Resting/Taking a break. (There might be more information on this in neuroscience, I'm not deeply familiar with that.)
Specifically: Starting over.
Normally, sticking with a problem might have the benefits of:
1. If you're productive, you can continue to be.
2. Not having to spend time re-loading context, figuring out what you were thinking/doing before.
Sometimes losing 2 is an advantage (particularly if you stop doing something that isn't productive). If your approach isn't working, 'starting over' can be very beneficial even without thinking about the problem in the meantime. For an extreme example, if you stop doing something for not just days but weeks or months, and don't think about it, and then you do it again, you have to figure things out again (knowledge, approach, etc.) to a fair extent (and restart context).
This can help in moving from 'this isn't working' equilibria to 'things working great'.
(Having a different perspective can also be related to 'unconscious' thought or making (random) connections. Over a larger time frame there are more things that connections can be made between. Above I emphasized losing context and starting afresh, rather than connecting/borrowing contexts.)
↑ comment by [deleted] · 2018-01-06T17:54:09.829Z · LW(p) · GW(p)
Barbara Oakley alludes to this in the "focused vs diffuse" model of thinking she uses, where she claims that spending time letting your thoughts simmer is good for problem solving. See [A Mind for Numbers](https://barbaraoakley.com/books/a-mind-for-numbers/).
comment by whpearson · 2018-01-04T16:09:08.047Z · LW(p) · GW(p)
Basically, your CPU leaks information because it speculatively does things before it checks whether it should, and then tries to erase the tracks of what happened when it finds it shouldn't have done that thing. It doesn't erase them well enough, and data can leak. (This is about the recently disclosed Meltdown/Spectre vulnerabilities.)
Make sure your AGI designs aren't reliant on hiding information between processes, etc. Or change your server farms to ARM chips without out-of-order execution.
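To make the mechanism concrete, here is a toy Python model of the bounds-check-bypass pattern behind these attacks. It only illustrates the logic; the names and the "cache" set are invented for the sketch, and real attacks recover data through cache timing on actual hardware, not through a Python exception:

```python
# Toy model of a bounds-check-bypass leak (Spectre-v1-style), logic only.
SECRET = b"hunter2"
public_array = bytes(range(16))
cache = set()  # stands in for "which cache lines got touched"

def speculative_read(index):
    # The CPU "does the thing before it checks it should": the access
    # below happens even for out-of-range indices...
    value = (public_array + SECRET)[index]  # models an out-of-bounds read
    cache.add(value)                        # ...and its cache footprint survives
    if index >= len(public_array):
        # ...before the bounds check finally rejects the access.
        raise IndexError("bounds check failed")

def leak_byte(index):
    cache.clear()
    try:
        speculative_read(index)
    except IndexError:
        pass  # the architectural result was "erased", but not the footprint
    return cache.pop()  # attacker recovers the byte from the cache state

leaked = bytes(leak_byte(len(public_array) + i) for i in range(len(SECRET)))
print(leaked)  # b'hunter2'
```

The point is that the rollback (the exception) undoes the visible result but not the side effect, which is exactly the "doesn't erase the tracks well enough" problem.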
comment by Rafael Harth (sil-ver) · 2018-01-28T20:51:35.397Z · LW(p) · GW(p)
Um, I'm working on a message to someone who is smart but knows nothing about computer science. It includes a brief description of AI in regards to safety concerns, but I'm not an AI researcher, so I might not have done a good job. Can someone competent enough to judge this tell me whether this is misleading in an important way? (It might not be technically true to suggest that an AI doesn't just react to inputs, but I'm fine with technical inaccuracies, so long as they are minor.)
The real problem isn't easy to understand, but I'll try to summarize it as briefly as possible. Basically, an AI can be thought of as an agent that
– Learns and models things about its environment
– Has a utility function that specifies which goals are desirable or undesirable (anything else is neutral)
– Based on the above, runs a search for policy options (i.e. possible actions) that will maximize its utility function
The principal difference from a pocket calculator is that a pocket calculator has explicitly programmed responses to every input. This doesn't work for an AI because there are too many possible scenarios. We can't address each one individually, so instead we only tell it what the goals are and let it figure out what to do to meet them.
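For readers here, a minimal Python sketch of that loop (the toy 1-D world, the utility function, and the greedy one-step search are all invented for this example; real systems learn their models and search far less naively):

```python
# Minimal sketch of the agent framing above, in a toy 1-D world.
def utility(state):
    # The specified goal: being at position 10 is desirable;
    # everything else is merely closer to or further from it.
    return -abs(state - 10)

def model(state, action):
    # The agent's model of how its environment responds to actions.
    return state + action

def best_action(state, actions=(-1, 0, 1)):
    # Search over policy options for the action that maximizes utility.
    return max(actions, key=lambda a: utility(model(state, a)))

state = 0
for _ in range(12):
    state = model(state, best_action(state))
print(state)  # 10 -- reached without hand-coding a response to each input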
comment by Rafael Harth (sil-ver) · 2018-01-03T16:00:34.351Z · LW(p) · GW(p)
- How do you do footnotes with the LW software? I can't figure it out or find an explanation online.
- Is there a way to delete posts? I published something by accident a few weeks ago, which I feel pretty bad about.
↑ comment by Ben Pace (Benito) · 2018-01-03T20:08:48.224Z · LW(p) · GW(p)
- Not currently, only if I’ve imported the html for you (which is not a scalable system and we will need to build native support for this soon)
- ...no, and yes, this is also a thing that should be added. For now you can turn them into drafts.
comment by Chris_Leong · 2018-01-03T14:06:41.037Z · LW(p) · GW(p)
Does anyone have any thoughts on how we could grow the LW community?
↑ comment by ChristianKl · 2018-01-04T02:23:54.472Z · LW(p) · GW(p)
In addition to advice about recruiting, there's also the task of actually making things better for the people already inside the community. Running local events is great for improving the LW community and running events also brings new people into the community.
In general, communities that manage to get their internals right and provide a lot of value for their members don't have a problem with recruiting.
↑ comment by Rafael Harth (sil-ver) · 2018-01-03T20:00:40.383Z · LW(p) · GW(p)
I'm not sure if that's a positive goal. I think having a LW community is highly positive, but having the wrong people in it would take a lot of its value away.
I think you should invite people whom you know, who are intelligent and actually interested in improving the world, but other forms of advertising could be bad.
↑ comment by Ben Pace (Benito) · 2018-01-03T20:18:35.865Z · LW(p) · GW(p)
Yup, slow growth is better than fast growth for a community that wants to preserve and improve its core competencies and values. Fast growth breaks infrastructure and becomes an uncontrollable ship.
↑ comment by habryka (habryka4) · 2018-01-03T21:17:52.006Z · LW(p) · GW(p)
Also, if your community is good, it will naturally attract a lot of people. Successful intellectual communities need to invest a lot more money in selection than they need to invest in advertising. (E.g., see almost all top universities, top research institutes, and top companies.)
↑ comment by AndHisHorse · 2018-01-03T21:26:17.879Z · LW(p) · GW(p)
Not necessarily; the three sorts of excellent organizations you mention are organizations whose excellence is recognized by the rest of the world in some way, granting its members prestige, opportunities, and money. I suspect this is what attracts people to a large extent, not a general ability to detect organizational goodness. This sort of recognition may be very difficult to get without being very good at whatever it is the organization does, but that does not imply that all good organizations are attractive in this way.
↑ comment by habryka (habryka4) · 2018-01-03T22:53:40.415Z · LW(p) · GW(p)
Agree that it's more nuanced, and think your objection is valid. I have some more thoughts on why I think that still applies to us, but it's definitely a more complicated and less straightforward thing that I would want to have more time to explain.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2018-01-04T19:20:45.642Z · LW(p) · GW(p)
FYI, when I proposed the "greater karma gets you more upvotes" mechanism, one of the main motivations was to let a site preserve its culture in the face of a big influx of new members.
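For concreteness, a toy Python sketch of what such a mechanism could look like (the formula and numbers are hypothetical, not LW's actual implementation):

```python
import math

def vote_weight(karma):
    # Hypothetical rule: vote strength grows logarithmically with karma.
    return max(1, int(math.log10(max(karma, 1))) + 1)

def apply_vote(post_score, voter_karma, direction):
    # direction is +1 for an upvote, -1 for a downvote.
    return post_score + direction * vote_weight(voter_karma)

print(vote_weight(5))      # 1 -- a brand-new member has little pull
print(vote_weight(50000))  # 5 -- established members anchor the culture
```

Under any rule of this shape, a flood of new accounts moves scores much less than the existing membership does, which is what lets the culture survive an influx.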
↑ comment by AndHisHorse · 2018-01-04T12:58:57.608Z · LW(p) · GW(p)
Are we in any real danger of growing too quickly? If so, this is relevant advice; if not - if, for example, a doubling of our growth rate would bring no significant additional danger - I think this advice has negative value by making an improbable danger more salient.
↑ comment by Ben Pace (Benito) · 2018-01-04T13:46:08.843Z · LW(p) · GW(p)
I think the standard human incentive in groups and tribes is to go big, fast; this is the direction of entropy, and must be pushed against (at least as a first-order factor).
Even in the world where we're growing very slowly already and this advice is not needed, it's at least true that growth should never be in our top 5 metrics for success (to be contrasted with measures of how much intellectual progress we are making, e.g. how often we're having valuable insights, how efficient our communication is, how easy it is to find the best rebuttals to arguments, etc.).
↑ comment by AndHisHorse · 2018-01-04T17:25:59.384Z · LW(p) · GW(p)
I agree that growth shouldn't be a huge marker of success (at least at this point), but even if it's not a metric on which we place high terminal value, it can still be a very instrumentally valuable metric - for example, if our insight rate per person is very expensive to increase, and growth is our most effective way to increase total insight.
So while growth should be sacrificed for impact on other metrics - for example, if growth has a strong negative impact on insight rate per person - I would say it's still reasonable to assume it's valuable until proven otherwise.
↑ comment by ChristianKl · 2018-01-04T17:06:46.763Z · LW(p) · GW(p)
It's not a matter of growing too quickly but a matter of growing by getting the wrong kind of people.
↑ comment by Viliam · 2018-01-13T20:53:06.210Z · LW(p) · GW(p)
Remembering my first days on LW, part of the reason I joined was the great articles, and another part was the feeling that "something great starts here, and I can participate and perhaps even contribute". Generalizing from one example, I think we should provide the same thing to new people.
Speaking of articles, I think we should promote the book version of Rationality A-Z. I guess I am just repeating what I already said a hundred times here, but in general books are higher status than sequences of blog articles, and people are more likely to read a book from beginning to end than they are to read a sequence of articles without getting distracted by comments, following a hyperlink outside, or just switching to another browser tab. Books are also easy to download to a reader. (I am arguing from the perspective of trivial inconveniences here. When I start reading a book on a reader, no matter how often I am interrupted, if it is good I usually finish it; but long series of articles I usually just bookmark and forget.)
Speaking of "what are we doing here?", I think there are multiple answers. Some people actively participate in developing decision theory or rationality lessons. But most of us, I suppose, are here merely for self-improvement and the feeling of community. Could we somehow make this more explicit? Like, should we try providing some kind of "rationality coaching", and give it a prominent position on the website? We used to have the bragging threads etc. Or perhaps some personal stories about how rationality changed your life, to provide a near-mode example of what this all is supposed to be about, because it sometimes feels too abstract; also to remind people that this is more than just a nice place to procrastinate at.
On the other hand, as long as good articles appear regularly, and as long as we succeed in keeping our standards, things seem okay. (It's just that the question of why we sometimes have many good articles, and sometimes barely anything, remains a mystery to me. Contrary to my predictions, LW2 actually made a great change; now I again feel great reading the website. I still have no idea which changes were critical to the success, and which were just a coincidence. So, while I feel happy about the current situation, I wish I understood better what exactly is causing it... just in case some troubles appear again in the future.)
Growing the community is highly desirable, but we need to grow the right kind of community, not just blindly increase the head count at the cost of quality. Not sure how to achieve that best. Perhaps writing good articles is enough; sooner or later they will get shared on other parts of internet. (That will inevitably also attract the wrong kind of people, and then we need to have our downvotes and moderating tools ready.)
EDIT: Some actionable points: (1) If you believe that LW would benefit from having a specific kind of article, and you think there is a chance you can write it, go ahead and do it. Similarly, if you believe some kind of regular thread would improve stuff, go ahead and do the experiment. Maybe just not everyone at the same time. (2) If you believe some great content at LW deserves more visibility, maybe you could make a selection of your favorite articles, with some summaries and your thoughts. Later, when you want to introduce your friends to LW, you can send them a link to that selection.
↑ comment by AndHisHorse · 2018-01-03T21:20:49.176Z · LW(p) · GW(p)
Having recently read The Craft & The Community: A Post-Mortem & Resurrection, I think that its advice on recruiting makes a lot of sense: meet people in person, evaluate who you think would be a good fit - especially those who cover skill or viewpoint gaps that we have - and bring them to in-person events.