Comments
I don't think there is anything stopping you from trying to create a test LW2 account to see if you will be locked out.
Have you seen the notifications up the top right? Does that do what you want?
How have they still not caught up to 90s-era newsreaders?
What are the plans for the Wiki? If the plan is to keep it the same, why doesn't Lesser Wrong have a link to it yet?
I agree that people should not be able to upvote or downvote an article without having clicked through to it.
I also find the comments hard to parse because the separation is less explicit than on either Reddit or here.
It works now.
It does not seem to be working.
Are there many communities that do that apart from MetaFilter?
Firstly, well done on all your hard work! I'm very excited to see how this will work out.
Secondly, I know that this might be best after the vote, but don't forget to take advantage of community support.
I'm sure that if you set up a Kickstarter or similar, that people would donate to it, now that you've proven your ability to deliver.
I also believe that, given how many programmers we have here, many people will want to make contributions to the codebase. My understanding was that this wasn't really happening before because: a) the old code base was messy and extremely difficult to get up and running, and b) it wasn't clear who to talk to if you wanted to know whether your changes were likely to be approved.
It looks like a) has been solved, if you also improve b), then I expect a bunch of people will want to contribute.
It's just an example.
Yes, they don't appear in the map, but when you see a mountain you think, "Hmm... this really needs to go in the map"
I think it is important to note that there are probably some ways in which this is adaptive. We nerds probably spend far too much time thinking and trying to be consistent when it offers us very little benefit. It's also socially better to be more flexible - people don't like people who follow the rules too strictly, as they are more likely to dob them in. It also makes it much easier to appear sincere while still coming up with an excuse for avoiding your prior commitments.
Interesting post, I'll probably look more into some of these resources at some point. I suppose I'd be curious to know which concepts you really need to read the book for and which ones can be understood more quickly, because reading through all of these books would be a very big project.
"I'm assuming you mean "new to you" ideas, not actually novel concepts for humanity as a whole. Both are rare, the latter almost vanishingly so. A lot of things we consider "new ideas" for ourselves are actually "new salience of an existing idea" or "change in relative weighting of previous ideas"." - well that was kind of the point. That if we want to help people coming up with new ideas is somewhat overrated vs. recommending existing resources or adapting existing ideas.
Hopefully the new LW has an option to completely delete a thread.
I can't see any option to report it :-(
I guess what I was saying was that insofar as you require knowledge, what you tend to need is usually a recommendation to read an existing resource or an adaptation of ideas in an existing resource, as opposed to new ideas. The balance of knowledge vs. practice is somewhat outside the scope of this article.
In particular, I wrote: "I'm not saying that this will immediately solve your problem - you will still need to put in the hard yards of experiment and practice - just that lack of knowledge will no longer be the limiting factor."
I wrote a post on a similar idea recently - self-conscious ideologies (http://lesswrong.com/r/discussion/lw/p6s/selfconscious_ideology/) - but I think you did a much better job of explaining the concept. I'm really glad that you did this because I consider it to be very important!
Link doesn't seem to be working: http://reason.com/blog/2017/07/06/red-teaming-climate-chang1
What did you do re: Captain Awkward advice?
Yeah, I have a lot of difficulty understanding Lou's essays as well. Nonetheless, there appear to be enough interesting ideas there that I will probably reread them at some point. I suspect that attempting to write a summary of the points he is making as I go might help clarify things.
"'Rationality gives us a better understanding of the world, except when it does not"
I provided this as an exaggerated example of how aiming for absolute truth can mean that you produce an ideology that is hard to explain. More realistically, someone would write something along the lines of "rationality gives us a better understanding of the world, except in cases a), b), c)...", but if there are enough of these cases and these cases are complex enough, then in practice people round it off to "X is true, except when it is not", i.e. they don't really understand what is going on, as you've pointed out.
The point was that there are advantages of creating a self-conscious ideology that isn't literally true, but has known flaws, such as it becoming much easier to actually explain so that people don't end up being confused as above.
In other words, as far as I can tell, your comment isn't really responding to what I wrote.
Can you add any more detail on what precisely Continental Rationalism is? Or, even better, if you have time it's probably worth writing up a post on this.
Additionally, how come you posted here instead of on the Effective Altruism forum: http://effective-altruism.com/?
If you want casual feedback, probably the best location currently is: https://www.facebook.com/groups/eahangout/.
I definitely think it would be useful, the problem is that building such a platform would probably take significant effort.
There are a huge number of "ideas" startups out there. I would suggest taking a look at them for inspiration.
I think the reason why cousin_it's comment is upvoted so much is that a lot of people (including me) weren't really aware of S-risks or how bad they could be. It's one thing to just make a throwaway line that S-risks could be worse, but it's another thing entirely to put together a convincing argument.
Similar ideas have appeared in other articles, but they've framed it in terms of energy efficiency while defining weird terms such as computronium or invoking the two-envelopes problem, which makes it much less clear. I don't think I saw the links for either of those articles before, but if I had, I probably wouldn't have read them.
I think the title helps as well. S-risks is a catchy name, especially if you already know x-risks. I know that this term has been used before, but it wasn't used in the title. Further, while the earlier article was quite good, you can read its summary, introduction and conclusion without encountering the idea that the author believes that s-risks are much greater than x-risks, as opposed to being just yet another risk to worry about.
I think there's definitely an important lesson to be drawn here. I wonder how many other articles have gotten close to an important truth, but just failed to hit it out of the park for some reason or another.
Thanks for writing this post. Actually, one thing that I really liked about CFAR is that they gave a general introduction at the start of the workshop about how to approach personal development. This meant that everyone could approach the following lectures with an appropriate mindset of how they were supposed to be understood. I like how this post uses the same strategy.
Part of the problem at the moment is that the community doesn't have a clear direction like it did when Eliezer was in charge. There was talk about starting an organisation in charge of spreading rationality before, but this never actually seems to have happened. I am optimistic about the new site that is being worked on, though. Even though content is king and I don't know how much any of the new features will help us increase the amount of content, I think that the psychological effect of having a new site will be massive.
I probably don't have time to be involved in this, but just commenting to note my approval for this project and appreciation for anyone who chooses to contribute. One major advantage of this project is that any amount of effort here will provide value - it isn't like a spaceship that isn't useful half built.
The fact that an agent has chosen to offer the bet, as opposed to the universe, is important in this scenario. If they are trying to make money off you, then the way to do that is to offer an unbalanced bet on the expectation that you will take the wrong side - for example, when you think you have inside information, but they know that information is actually unreliable.
The problem is that you have to always play when they want, whilst the other person only has to sometimes play.
So I'm not sure if this works.
Partial analysis:
Suppose David is willing to stake 100:1 odds against Trump winning the presidency (before the election). Assume that David is considered to be a perfectly rational agent who can utilise his available information to calculate odds optimally, or at least as well as Cameron can, so this offer suggests David has some quite significant information.
Now, Cameron might have his own information that he suspects David does not, and Cameron knows that David has no way of knowing that he has this information. Taking this info into account, along with the fact that David offered to stake 100:1 odds, Cameron might calculate 80:1 when his information is incorporated. So this would suggest that Cameron should take the bet, as the odds are better than David thinks. Except, perhaps David suspected that Cameron had some inside info and actually thinks the true odds are 200:1 - he only offered 100:1 to fool Cameron into thinking the bet was better than it was - meaning that the bet is actually bad for Cameron despite his inside info.
Hmm... I still can't get my head around this problem.
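To make the asymmetry concrete, here is a minimal sketch in Python of the expected-value calculation implied by the numbers above (the function name and exact figures are my own illustration of the hypothetical example, not anything from the original discussion):

```python
# A rough sketch (hypothetical numbers from the example above): the expected
# value for Cameron of staking 1 unit on Trump at David's offered odds,
# under different estimates of the true probability that Trump wins.

def expected_value(p_win, odds_against):
    # Staking 1 unit at odds_against:1 wins odds_against units with
    # probability p_win and loses the 1 unit staked otherwise.
    return p_win * odds_against - (1 - p_win)

offered_odds = 100  # David offers 100:1 against Trump winning

# If Cameron's private information implies the true odds are 80:1 (p = 1/81),
# the offered bet looks profitable to him.
print(expected_value(1 / 81, offered_odds))   # roughly +0.25

# But if David actually believes 200:1 (p = 1/201) and only offered 100:1 as
# bait, the same bet loses in expectation.
print(expected_value(1 / 201, offered_odds))  # roughly -0.50
```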
Thanks for posting this. I've always been skeptical of the idea that you should offer two sided bets, but I never broke it down in detail. Honestly, that is such an obvious counter-example in retrospect.
That said, "must either accept the bet or update their beliefs so the bet becomes unprofitable" does not work. The offering agent has an incentive to only ever offer bets that benefit them since only one side of the bet is available for betting.
I'm not certain (without much more consideration), but Oscar_Cunningham's solution of always taking one half of a two-sided bet seems more plausible.
What is Esalen?
What's Goodhart's Demon?
The biggest challenge with getting projects done within the Less Wrong community will always be that people have incredibly different ideas of what should be done. Everyone has their own ideas, few people want to join in other people's ideas. Will definitely be interested to see how things turn out after 3 months.
I like the idea of spreading popularity around when justified, i.e. high-status people pointing out when someone has a particular set of knowledge that others may not realise they could benefit from, or giving them credit for interesting ideas. These seem important for a strong community and additionally provide benefits to the rest of the community by allowing members to take advantage of each other's skills.
"Seems fraught with philosophical gobbledygook and circular reasoning to specify what about "because the teacher said so" it is that isn't as "mathematical" as "because you're summing ones and tens separately"."
"Because you're summing ones and tens separately" isn't really a complete gears level explanation, but a pointer or hint to one. In particular, if you are trying to explain the phenomenon formally, you would begin by defining a "One's and ten's representation" of a number n as a tuple (a,b) such that 10a + b = n. We know that at least on such representation exists with a=0 and b=n.
Proof (warning, this is massive, you don't need to read the whole thing)
You can then define a "simple ones and tens representation" as such a representation with 0 <= b <= 9. We want to show that each number has at least one such representation. It is easy to see that (a, b) = 10a + b = 10a + 10 + b - 10 = 10(a+1) + (b-10) = (a+1, b-10). We can repeat this process x times to get (a+x, b-10x). We know that for a large enough x, b-10x will be negative (e.g. x = b+1 works, since b - 10(b+1) = -9b - 10 < 0). We can then look at the last value before the second element becomes negative. Let this representation be (m, n). By construction, n >= 0. We also know that n can't be >= 10, as otherwise (m+1, n-10) would still have its second element >= 0, contradicting that (m, n) was the last such representation. So any number can be written in simple tuple form.
Suppose that there are two simple representations of a number, (x, y) and (p, q). Then 10x + y = 10p + q, so 10(x-p) = q - y. Now, since y and q are both between 0 and 9 inclusive, q - y is between -9 and 9, and the only multiple of 10 in this range is 0. So 10(x-p) = 0, meaning x = p, and q - y = 0, meaning y = q, i.e. both members of the tuple are the same.
It is then trivial to prove that (a1, b1) + (a2, b2) = (a1+a2, b1+b2). It is similarly easy to show 0 <= b1+b2 <= 18, so either b1+b2 or b1+b2-10 is between 0 and 9 inclusive. It then directly follows that either (a1+a2, b1+b2) or (a1+a2+1, b1+b2-10) is a simple representation (here we haven't put any restriction on the value of the a's).
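As a minimal sketch of the construction above (the function names are mine, purely for illustration):

```python
# A small illustration of the "ones and tens" proof sketch above;
# the function names are my own and are not from the original comment.

def simple_representation(n):
    """Return the (tens, ones) pair with n = 10*tens + ones and 0 <= ones <= 9."""
    tens, ones = 0, n                  # start from the representation (0, n)
    while ones > 9:                    # repeatedly apply (a, b) -> (a+1, b-10)
        tens, ones = tens + 1, ones - 10
    return (tens, ones)

def add(x, y):
    """Add two numbers via their simple representations, carrying at most one."""
    a1, b1 = simple_representation(x)
    a2, b2 = simple_representation(y)
    ones_sum = b1 + b2                 # between 0 and 18, so the carry is 0 or 1
    if ones_sum <= 9:
        return (a1 + a2, ones_sum)
    return (a1 + a2 + 1, ones_sum - 10)

assert add(47, 38) == simple_representation(85)   # both give (8, 5)
```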
Observations
So a huge amount is actually going on in something so simple. We can make the following observations:
"Because you're summing ones and tens separately" will seem obvious to many people because they've been doing it for so long, but I suspect that the majority of the population would not be able to produce the above proof. In fact, I suspect that the majority of the population would not even realise that it was possible to break down the proof to this level of detail - I believe many of them would see the above sentence as unitary. And even when you tell them that there is an additional level of detail, they probably won't have any idea what it is supposed to look like.
Part of the reason why it feels more gears-like is that it provides you with the first step of the proof (defining the ones and tens tuples). When someone has a high enough level of maths, they are able to get from the "hint" quite quickly to the full proof. Additionally, even if someone does not have the full proof in their head, they can still see that a certain step will be useful towards producing a proof. The hint of "summing the ones and tens separately" allows you to quite quickly construct a formal representation of the problem, which is progress even if you are unable to construct a full proof. Discovering that the sum will be between 0 and 18 lets you know that if you carry, you will only ever have to carry the one. This limits the problem to a more specific case. Any person attempting to solve this will probably have examples from the past where limiting the case in such a way made the proof either easier or possible, so whatever heuristic pattern matching occurs within their brain will suggest that this is progress (though it may of course turn out later that the ability to restrict the situation does not actually make the proof any easier).
Another reason why it may feel more gears-like is that it is possible to construct sentences of a similar form and use them as hints for other proofs. So, "Because you're summing ones and tens separately" is linguistically close to "Because you're summing tens and hundreds separately", although I don't want to suggest that people only perform a linguistic comparison. If someone has developed an intuitive understanding of these phenomena, this will also play a role.
I believe that part of the reason why it is so hard to define what is or is not "gears-like" is that this isn't based on any particular statement or model just by itself, but on how it interacts with what a person already knows and can derive in order to produce statements. Further, it isn't just about producing one perfect gears explanation, but the extent to which a person can produce certain segments of the proof (i.e. a formal statement of the problem, or restricting the problem to the sub-case as above), or the extent to which it allows the production of various generalisations (i.e. we can generalise to (tens & hundreds, hundreds & thousands...), or to (ones & tens & hundreds), or to binary, or to abstract algebra). Further, what counts as a useful generalisation is not objective, but relative to the other maths someone knows or the situations in which they know they can apply this maths. For example, imaginary numbers may not seem like a useful generalisation until a person knows the fundamental theorem of algebra or how they can be used to model phases in physics.
I won't claim that I've completely or even almost completely mapped out the space of gears-ness, but I believe that this takes you pretty far towards an understanding of what it might be.
I'm still confused about what Gear-ness is. I know it is pointing to something, but it isn't clear whether it is pointing to a single thing, or a combination of things. (I've actually been to a CFAR workshop, but I didn't really get it there either).
Is gear-ness:
a) The extent to which a model allows you to predict a singular outcome given a particular situation? (Ideal situation - fully deterministic like Newtonian physics)
b) The extent to which your model includes each specific step in the causation? (I put my foot on the accelerator -> car goes faster. What are the missing steps? Maybe -> Engine allows more fuel in -> Compressions have greater explosive force -> Axels spin faster -> Wheels spin faster ->. This could be broken down even further)
c) The extent to which you understand how the model was abstracted out from reality? (ie. You may understand the causation chain and have a formula for describing the situation, but still be unable to produce the proof)
d) The extent to which your understanding of each sub-step has gears-ness?
Out of:
1) "Hey, sorry to interrupt but this sounds like a tangent, maybe we can come back to this later during the followup conversation?"
and:
2) "Hey, just wanted to make sure some others got a chance to share their thoughts."
I would suggest that number 1) is better as 2) suggests that they are selfishly dominating the conversation.
You used the word umbrella and if I was going with a slightly less catchy, but more accurate summary, I would write, "Akrasia is an umbrella term". I think the word is still useful, but only if you remember this. The first step in solving an Akrasia problem is to notice that a problem falls within the Akrasia umbrella, the second step is to then figure out where it falls within that umbrella.
Because the whole point of these funds is that they have the opportunity to invest in newer and riskier ventures. On the other hand, GiveWell tries to look for interventions with a strong evidence base.
They expect GiveWell to update its recommendations, but they don't necessarily expect GiveWell to evaluate just how wrong a past recommendation was. Not yet anyway, but maybe this post will change this.
A major proportion of the clients will be EAs
Because people expect this from funds.
To what extent is it expected that EAs will be the primary donors to these funds?
If you want to outsource your donation decisions, it makes sense to outsource to someone with similar values. That is, someone who at least has the same goals as you. For EAs, this is EAs.
No, because the fund managers will report on the success or failure of their investments. If the funds don't perform, then their donations will fall.
Wanting a board seat does not mean assuming that you know better than the current managers - only that you have distinct and worthwhile views that will add to the discussion that takes place in board meetings. This may be true even if your overall judgment is worse than the current managers'.
All I ever covered in university was taking the Schrödinger equation and then seeing that quantum physics did whatever that equation said.
Infinite sums/sequences are a particular interest of mine. I would love to know how these sums appear in string theory - what's the best introduction/way into this? You said these sums appear all over physics. Where do they appear?
"This may also be somewhat pedantic, but in something like quantum physics, because of this gap in knowledge, it'd be very obvious who the professor was to an audience that doesn't know quantum physics, even if it wasn't made explicitely clear beforehand." - I met one guy who was pretty convincing about confabulating quantum physics to some people, even though it was obvious to me he was just stringing random words together. Not that I know even the basics of quantum physics. He could actually speak really fluently and confidently - just everything was a bunch of non-sequitors/new age mysticism. I can imagine a professor not very good at public speaking who would seem less convincing.