Meetup : San Antonio Meetup 2016-07-11T01:48:54.619Z · score: 0 (1 votes)


Comment by thealtar on Effective altruism is self-recommending · 2017-04-24T15:56:16.894Z · score: 2 (2 votes) · LW · GW

This has a long list of sound arguments in it which exist in tandem with a narrative that may not actually be true. Most of the points are valid regardless, but whether they have high importance in aggregate, or whether any of the conclusions reached actually matter, depends heavily on what lens we're looking through and what has actually been going on at Open Phil and Open AI.

I can imagine a compelling and competing narrative where Open Phil has decided that AI safety is important and thinks that the most effective thing they can do with a ton of their money is to use it to make the world safer against that x-risk. They lack useful information on the topic (since it is a very hard topic), so they outsource the actual research and the spending of the money to an organization that seems better suited to doing just that: Open AI. (Open AI may not be a good choice for that, but that's a separate discussion.) However, since they're donating so much money and don't really know what Open AI might do with it in practice, they ensure that a person they trust business-wise gets a seat on the board of directors, to make sure the money ends up being spent in ways that are in line with their original desires. (A good backup plan when there are open questions of whether any group working on AI is doing more to help or harm it.)

Gwern makes a quick Fermi estimate here about how much Open AI actually costs to run per year, and reminds us that while $1 billion has been "committed" to Open AI, that's really just a press-release social statement about a pseudo-promise by people who are known to be flaky and aren't under any obligation to give them that money. If we estimate Open AI to be running on $9 million per year, then $30 million is a very hefty donation which gives the company over three more years of runway. That's a big deal to Open AI existing or not existing, and if they already have $9 million coming in per year from another source, then this could potentially double their yearly income and allow them to expand into lots of new areas as a result.
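The runway arithmetic is simple enough to check. A minimal sketch, using the comment's rough assumed figures (the $9M/year burn rate and $30M grant are the estimate's assumptions, not confirmed numbers):

```python
# Fermi estimate: how much runway does the grant buy?
# Both dollar figures are the comment's rough assumptions.
annual_burn = 9_000_000    # assumed yearly cost of running Open AI
donation = 30_000_000      # Open Phil's pledged grant

extra_runway_years = donation / annual_burn
print(round(extra_runway_years, 1))  # → 3.3
```

So on these assumptions the grant alone funds roughly three more years of operation, which is where the "three years more runway" figure comes from.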


There are a number of inductive leaps going on within the large model presented in the original post that I think are worth pointing out and examining. I'll also append what I think is the community affect/opinion to the end of each, because I've been up all night and think it's worth denoting.

  1. Open Phil is now taking AI Safety as a serious threat to the world and pledged $30 million of money donated to them on it. (Yay! Finally!)
  2. Open Phil is giving that money to Open AI. (Boo! Give it to MIRI!)
  3. Holden is now going to be a board member at Open AI as part of the deal. (Boo! We don't like him because he screwed up #2 and we don't respect his judgments about AI. Someone better should be on the board instead!) (Yay! He didn't write the people we don't like a blank check. That's a terrible idea in this climate!)

These are the parts that actually matter: whether the money is going somewhere actually useful for reducing x-risk, and whether Holden as board member is just there to ensure the money isn't being wasted on useless projects, or whether he'll be influencing the distribution of funds larger than $30 million in ways that are harmful (or helpful!) to AI Safety. He could end up spending them wisely in ways that make the world directly safer, directly less safe, safer because they were spent badly versus alternatives that would have been bad, or less safe because they weren't spent on better options.

Insofar as I think any of us should particularly care about all of this, it will have far more to do with these points than other things. They also sound far more tractable, since the other problems you mention about Open Phil sound pretty shitty and I don't expect a lot of those things to change much at this point.

Comment by thealtar on Wireheading Done Right: Stay Positive Without Going Insane · 2016-12-07T23:05:35.112Z · score: 0 (0 votes) · LW · GW

This is probably my favorite link post that's appeared on LW thus far. I'm kinda disappointed more people haven't checked it out and voted it upward.

Comment by thealtar on On the importance of Less Wrong, or another single conversational locus · 2016-11-28T23:25:44.529Z · score: 3 (5 votes) · LW · GW

Having the best posts be taken away from the area where people can easily see them is certainly a terrible idea architecture-wise.

The solution to this is what all normal subreddits do: sticky the posts and change the color of their titles so that they both stand out and remain in the same visual range as everything else.

Comment by thealtar on On calling inconceivable what you've already conceived. · 2016-11-27T20:48:13.842Z · score: 0 (0 votes) · LW · GW

"You can deduce that verbally. But I bet you can’t predict it from visualizing the scenario and asking what you’d be surprised or not to see."

I like this.

In my mind, this plugs into Eliezer's recent Facebook post about thinking of the world in mundane terms, in terms of what is merely-real, in terms of how you personally would go and fix a sink or buy groceries at the store, versus the way you think about everything else in the world. I think these methods of thought, in which you visualize actual objects and physics in the real world, think of them in terms of bets, and check your surprise at what you internally simulate, all point at a mindset that is extremely important to learn and possess as a skill.

Comment by thealtar on A Return to Discussion · 2016-11-27T20:43:26.865Z · score: 1 (1 votes) · LW · GW

I hadn't sufficiently considered the long term changes of LW to have occurred within the context of the overall changes in the internet before. Thank you very much for pointing it out. Reversing the harm of Moloch on this situation is extremely important.

I remember posting in the old vBulletin days, where a person would use a screenname but anonymity was much higher and the environment itself felt much better to exist in. Oddly enough, the places I posted at back then were not exactly non-hostile, and had a subpopulation who would go out of their way to deliberately insult people as harshly as possible. And yet... for some reason I felt substantially safer, more welcome, and accepted there than I have anywhere else online.

To at least some extent there was a sort of compartmentalization going on in those places, where serious conversation was in one area while pure-fluffy, friendly, jokey banter-talk went on in another. Attempting to use a single area for both sounds like a bad idea to me, and is the sort of thing that LessWrong was trying to avoid (for good reason) in order to maintain high standards and value of conversation, but places like Tumblr allow and possibly encourage. (I don't really know about Tumblr since I avoid it, but that's what it looks like from the outside.) There may also have been a factor that I had substantially more in common with the people who were around at that time, whereas the internet today is full of a far more diverse set of people who have far less interest in acculturating into strange new environments.

Short-term thinking, slight pain/fear avoidance, and trivial conveniences that shifted everyone from older styles like vbulletin or livejournal to places like reddit and tumblr ultimately pattern matches to Moloch in my mind if it leads to things like less common widescale discussion of rationality or decreased development of rationalist-beloved areas. Ending or slowing down open, long-term conversations on important topics is very bad and I hope that LW does get reignited to change the progression of that.

Comment by thealtar on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T08:06:52.364Z · score: 4 (4 votes) · LW · GW

A separate action that could be taken by bloggers who are interested in it (especially people just starting new blogs) is to continue posting where they do, but disable comments on their posts and link people to the corresponding LW link post to comment on. This is far less than ideal, but allows them to post elsewhere while having the comment content appear here on LW.

Comment by thealtar on July 2016 Media Thread · 2016-07-14T15:42:07.704Z · score: 0 (0 votes) · LW · GW

I have visual snow from trying out a medication. I can confirm that it sucks and is annoying. It's not debilitating though and is mostly just inconvenient.

Then again, it may be slightly harming my ability to focus while reading books. Still checking that out.

Comment by thealtar on Review and Thoughts on Current Version of CFAR Workshop · 2016-06-08T18:11:40.028Z · score: 3 (3 votes) · LW · GW

I went through similar thought processes before attending and decided that it was extremely unlikely that I would ask for my money back even if I didn't think the workshop had been worth the cost. That made me decide that the offer wasn't a legitimate one for me to consider as real and I ignored it when making my final considerations of whether to go or not.

I ultimately went and thought it was fully worth it for me. I know 3+ people who follow that pattern whom I spoke to shortly after the workshop, and 1 who thought it hadn't actually been worth it but did not ask for their money back.

Comment by thealtar on Open Thread May 16 - May 22, 2016 · 2016-05-18T14:49:56.221Z · score: 4 (4 votes) · LW · GW

Normally I say get plenty of sleep, but I think you asked a bit late to get that answer.

Comment by thealtar on Open Thread May 9 - May 15 2016 · 2016-05-13T22:59:13.084Z · score: 2 (2 votes) · LW · GW

This looks like it. Thank you!

Comment by thealtar on Open Thread May 9 - May 15 2016 · 2016-05-13T18:05:53.566Z · score: 2 (2 votes) · LW · GW

I saw a link in an open thread several months back about an organization in the past that was quite similar to the Rationality movement but eventually fell apart. It was science-based self-improvement, with people trying to make rational choices, back in the 1920s or earlier. I've tried searching for the link again but can't find it. Does anyone know which one I'm referring to?

Comment by thealtar on Open Thread May 9 - May 15 2016 · 2016-05-11T13:56:05.507Z · score: 2 (2 votes) · LW · GW

I was reading through a link on an Overcoming Bias post about the AK Model and came across the idea that "the social return on many types of investments far exceeds their private return". To rephrase this: there are investments you can make, such as getting a college education, which benefit others more than they benefit you. These seem like they could be some good skills to focus on which might be often ignored. Obvious examples I can think of would be the Heimlich maneuver, CPR, and various social skills.

Do you know of any good low hanging fruit in terms of skills or time investments a person can make which can provide a lot of benefit to the people around them (company, family, friends, etc.) but don't actually benefit themselves?

Comment by thealtar on May Outreach Thread · 2016-05-07T17:12:19.487Z · score: 1 (1 votes) · LW · GW

EY was attempting to spread his ideas since his first post on Overcoming Bias. This pattern was followed through the entire Sequences. Do you regard this as different from then?

Comment by thealtar on Open thread, Apr. 18 - Apr. 24, 2016 · 2016-04-20T20:17:56.938Z · score: 0 (0 votes) · LW · GW

I have a similar aesthetic. What areas of weirdness are present in the people you like the most?

Comment by thealtar on Open thread, Apr. 18 - Apr. 24, 2016 · 2016-04-20T20:00:56.170Z · score: 0 (0 votes) · LW · GW

I think this is closest to what I thought Hanson was trying to say, and it was close to what I was hoping people were interpreting his writing as saying. The way other people were interpreting his statements wasn't clear from some comments I've read, so I thought it was worth checking into.

Comment by thealtar on Open thread, Apr. 18 - Apr. 24, 2016 · 2016-04-20T19:15:48.297Z · score: 0 (0 votes) · LW · GW

This is an example of why I'm curious about everyone else's parsing. I bet Robin Hanson does talk about status in the pursuit of status; however, I bet he also enjoys going around examining social phenomena in terms of status, and that he is quite often on to something. These aren't mutually exclusive. People may have an original reason for doing something, but they may have multiple reasons that develop over time, and their most strongly motivating reason can change.

Comment by thealtar on Open thread, Apr. 18 - Apr. 24, 2016 · 2016-04-20T19:07:16.594Z · score: 1 (1 votes) · LW · GW

Could you expand on this? Is this just an idea you generally hold to be true or are there specific areas you think people should conform far less in (most especially the LW crowd)?

Comment by thealtar on Open thread, Apr. 18 - Apr. 24, 2016 · 2016-04-20T18:03:10.974Z · score: 0 (0 votes) · LW · GW

This makes me wonder whether lots of people who are socially awkward or learning about socialization (read: many LWers) need not only social training but conformity coaches.

Comment by thealtar on Open thread, Apr. 18 - Apr. 24, 2016 · 2016-04-20T13:56:00.138Z · score: 1 (1 votes) · LW · GW

I've been reading a lot of Robin Hanson lately and I'm curious at how other people parse his statements about status. Hanson often says something along the lines of: "X isn't about what you thought. X is about status."

I've been parsing this as: "You were incorrect in your prior understanding of what components make up X. Somewhere between 20% and 99% of X is actually made up of status. This has important consequences."

Does this match up to how you parse his statements?


To clarify: I don't usually think anything is just about one thing. I think there is a list of motivations for the first person who takes an action, and that one motivation is often stronger than the others. Additionally, new motivations are created or disappear as the original person continues the action over time. For people who come later, I suspect factors of copying successful patterns (also for a variety of reasons, including status matching) as well as the original possible reasons for the first person. This all makes a more complicated pattern and generational system than just pointing and yelling "Status!" (which I hope isn't the singular message people get from Hanson).

Comment by thealtar on What is the best way to read the sequences? · 2016-04-20T13:45:32.235Z · score: 0 (0 votes) · LW · GW

You're welcome to post in old threads since threads don't get bumped up to the top when replied to. However, you're likely to get more answers to a question like this one if you post in the current Open Thread.

Comment by thealtar on Open Thread April 11 - April 17, 2016 · 2016-04-20T01:17:00.710Z · score: 0 (0 votes) · LW · GW

there's also ya'all

Comment by thealtar on Open thread, Apr. 18 - Apr. 24, 2016 · 2016-04-19T15:40:25.184Z · score: 0 (0 votes) · LW · GW

This seems very useful. Thank you for posting it.

Out of all of the blogs, which ones do you prioritize in reading first? It seems like there are far too many to always read all of them.

Comment by thealtar on Open thread, Apr. 18 - Apr. 24, 2016 · 2016-04-19T15:16:38.377Z · score: 0 (0 votes) · LW · GW

What are the special rules involved that are mentioned in the thread? Are they the same as the Happiness Thread?

Comment by thealtar on Open Thread April 11 - April 17, 2016 · 2016-04-15T15:19:20.780Z · score: 1 (1 votes) · LW · GW

Counterfactual Diaspora Question:

If Eliezer had written on OvercomingBias and gotten enough activity to create LessWrong, but the population was filled with different personalities (no So8res, no AnnaSalamon, no Yvain, etc.) do you think the diaspora would have occurred in the same way and on the same general timeframe that it has?

I'm curious about what parts of LessWrong's development you think were inevitable and why.

Comment by thealtar on Open Thread April 11 - April 17, 2016 · 2016-04-15T13:23:40.107Z · score: 1 (1 votes) · LW · GW

V gubhtug gung gur zrgubq Uneel hfrq jnf fhssvpvragyl sne bhgfvqr gur obk gung ab bar jvgubhg n fhofgnagvny xabjyrqtr onfr bs obgu fpvrapr naq fpvrapr svpgvba jbhyq rire guvax bs vg be rkcrpg vg. Uneel unq hfrq cnegvny genafzhgngvba orsber, ohg arire hfvat gur zbyrphyrf sebz nve vgfrys (gung V erzrzore) be hfvat n zrgubq gung jnf jrncbavmrq va n jnl gung zhttyrf unira'g ernyyl jrncbavmrq vg orsber.

Comment by thealtar on Open Thread April 11 - April 17, 2016 · 2016-04-13T15:20:26.110Z · score: 0 (0 votes) · LW · GW

Fairly soon I imagine you'll get games that allow you to choose the pronouns used to address your character separate from their looks and a slider or more freeform body-sculpting ability rather than just two choices.

Comment by thealtar on Open Thread April 11 - April 17, 2016 · 2016-04-13T15:01:01.219Z · score: 2 (2 votes) · LW · GW

If I take a few dozen pictures of one person talking, I can find in them most any microexpression you want, including ridiculous ones. These expressions are not representative of anything.

Tabloid news is a great example of this. If you take thousands of pictures of the most gorgeous and breathtaking people in the world, you can find one where they look like deranged freaks.

Comment by thealtar on How to provide a simple example to the requirement of falsifiability in the scientific method to a novice audience? · 2016-04-13T12:39:53.444Z · score: 0 (0 votes) · LW · GW

I came here to mention raindances. You do a raindance and nothing happens. You raindance for 12 more days and suddenly it rains. That must mean if you dance for 13 days straight (or until some other requirement you invent Just-So on the spot) it will rain!

If you don't add the idea of falsifiability, so that negative results let you accept that raindances might not cause rain, then you will always reach the conclusion that some amount of raindancing will cause rain.

Ideally you would add a parameter of audience interaction, though, if you really want everyone to feel the impact of their failed predictions on a gut level. That's the value of the 2-4-6 game and of things like making predictions before learning about scope insensitivity.

Comment by thealtar on Open Thread April 11 - April 17, 2016 · 2016-04-12T14:01:41.081Z · score: 0 (0 votes) · LW · GW

It's also possible that people's perception of the landscape itself changed over time, as Clarity posts often and has been here a while now. That, and if any votes were from Eugene's downvote brigades, then their removal would have helped. (I'm at 85% karma and I think almost all of the negative votes were from Eugene's accounts.)

Comment by thealtar on Positivity Thread :) · 2016-04-11T17:27:50.144Z · score: 2 (4 votes) · LW · GW

I think weird sun twitter is really great and any of you that are weird suns are really great.

Comment by thealtar on Open Thread April 4 - April 10, 2016 · 2016-04-06T21:34:31.693Z · score: 1 (1 votes) · LW · GW

I made a comment related to this on the SSC post about the rationalists I met in person in the Bay Area. I think it's the continued and extended version of what you stated above with some people in the Bay Area calling themselves rationalists while being in the 20% LW-ish (or lower) crowd. I primarily focused on the overcoming biases and getting stronger parts.

"I witnessed some trends in rationalists during a visit in the Bay Area recently that make far more sense to me now when seen through the lens of your generation descriptions. The instrumental rationalists seemed to fit into 3 Generation type groups.

Generation 1 agreed with 50% or greater of The Sequences and attempt to use the ideas from it, CFAR, and other sources in their daily lives to improve themselves. They seemed to take all of it quite seriously.

Generation 2 possessed a mild respect for CFAR, less respect for The Sequences themselves (and likely read next to none of it), made sure to make a comment of disdain for EY almost as if it were a prerequisite to confirm tribe membership (maybe part of the “I’m not one of THOSE rationalists”?), and had a larger interest in books that their friends recommended for overall self-improvement.

Generation 3 hadn’t read any of The Sequences, had read only a few blog posts, loosely understood some of the terms being regularly thrown around (near/far mode, object level, inside/outside view, map/territory, etc.) but didn’t know the definitions well enough to actually use the mental actions of the techniques themselves, and considered themselves rationalists via group affiliation, showing up to events, and having friendships, rather than being rationalists due to becoming more rational themselves and attempting to optimize their own lives and brains.

I had limited exposure to the Bay Area and would be very interested if anyone else thinks these categories actually match the territory there. This also leaves out epistemic rationalists (some of whom I met) who don’t fit into the three generations presented above."

Comment by thealtar on Consider having sparse insides · 2016-04-04T14:48:30.082Z · score: 0 (0 votes) · LW · GW

"Generally I'd say: make a list of all things you do, and for each of them ask yourself a question: 'Is this something I do because I got used to thinking about myself as "the person who does this"? If I would right now magically reincarnate as someone else, who is "not the person who does this", would I want to start doing it again?'"

I like this technique. I like this a lot.

Happily, my friends do meet those criteria now. The Unattractive Person part is primarily delayed updating. I'm working on those various skills, but also haven't updated my internal impression of myself to reflect the improvements I've made. I expect to get a more realistic impression of myself after more time, getting better at reading people's attraction signals, and seeing social results.

Comment by thealtar on Happy Notice Your Surprise Day! · 2016-04-04T13:09:30.694Z · score: 0 (0 votes) · LW · GW

I was a bit confused about how it's a prank on people at all. Ideally a prank is localized to one person and is set up so that it doesn't run out of control.

What happened to you?

Comment by thealtar on Open Thread March 28 - April 3 , 2016 · 2016-04-01T18:11:31.196Z · score: 1 (1 votes) · LW · GW

I'm not able to see the post Ultimate List of Irrational Nonsense on my Discussion/New/ page even though I have enabled the options to show posts that have extremely negative vote counts (-100) while signed in. I made a request in the past about not displaying those types of posts for people who are not signed in. I'm not sure if that's related to this or not.

Comment by thealtar on Consider having sparse insides · 2016-04-01T14:50:31.177Z · score: 0 (0 votes) · LW · GW

How exactly would a person burn an identity away?

Are there any non-obvious identities that people have which might be useful to burn away?

I recently noticed that I have an internal identity of Unattractive Person which may have been valid in the past but isn't any longer considering repeated signals in a variety of social interactions over the past few months.

Comment by thealtar on What can we learn from Microsoft's Tay, its inflammatory tweets, and its shutdown? · 2016-03-28T12:52:38.088Z · score: 1 (1 votes) · LW · GW

They deleted the worst ones. Screenshots can be found on other websites.

Comment by thealtar on Lesswrong Potential Changes · 2016-03-21T14:41:35.800Z · score: 0 (0 votes) · LW · GW

Additional Suggestion 1: Regular reminders of places to send suggestions could be helpful. I occasionally come up with additional ones and usually just post them on whatever recent suggestion-related thread is new.

Additional Suggestion 2: The search function would be massively improved if it ignored and didn't search the text in the sidebar. This was referenced and I was reminded of this by gjm from his comment here in the latest Open Thread.

Comment by thealtar on Open Thread March 21 - March 27, 2016 · 2016-03-21T14:35:12.264Z · score: 1 (1 votes) · LW · GW

I've run into this problem several times before. It would be very helpful if the search feature ignored the text in the sidebar.

Comment by thealtar on Open thread, Mar. 14 - Mar. 20, 2016 · 2016-03-18T13:25:46.911Z · score: 0 (0 votes) · LW · GW

A trust app is going to end up with all the same issues credit ratings have.

Comment by thealtar on Posting to Main currently disabled · 2016-03-18T00:49:13.018Z · score: 1 (1 votes) · LW · GW

Is it possible for Main posts to also be listed on Discussion, but have an added highlight effect around their title or something? Then people can tell they're Main while not having to check a rarely used side subreddit.

Comment by thealtar on How I infiltrated the Raëlians (and was hugged by their leader) · 2016-03-17T16:06:56.883Z · score: 0 (0 votes) · LW · GW

Why did the hug feel 100% fake to you? Do you think the other Japanese people give less-fake hugs?

I generally know that Japan isn't too big on hugging as a culture, so I wonder whether very many Japanese people would be very skilled at this.

Comment by thealtar on Newsjacking for Rationality and Effective Altruism · 2016-03-17T14:45:52.080Z · score: 1 (1 votes) · LW · GW

How commonly do you think other groups do this, and what ways would you suggest for stopping it? Your article seems fairly innocuous as far as spotlight-stealing goes, but I'm sure other people's attempts might be far more harmful to the original news story's chances of getting appropriate attention.

Comment by thealtar on Open Thread March 7 - March 13, 2016 · 2016-03-11T15:04:31.521Z · score: 0 (0 votes) · LW · GW

A game like that could occur between humans and A.I. with online collectible card games. (I'm specifying online because the rules are streamlined and mass competition is far more available.)

Comment by thealtar on AlphaGo versus Lee Sedol · 2016-03-10T21:52:09.260Z · score: 2 (2 votes) · LW · GW

I was worried about something like this after the first game. I wasn't sure if expert Go players could discern the difference between AlphaGo playing slightly better than a 9-dan versus playing massively better than a 9-dan, due to how the AI was set up and how difficult it might be to evaluate players better than the ones already at the top.

Comment by thealtar on AlphaGo versus Lee Sedol · 2016-03-10T16:32:01.466Z · score: 0 (0 votes) · LW · GW

Does anyone know the current odds being given of Lee Sedol winning any of the three remaining games against AlphaGo? I'm curious whether, at this point, AlphaGo could likely be beaten by a human player better than Sedol (assuming there are any), or whether we're looking at an AI player that is better than a human can be.

Comment by thealtar on Open Thread March 7 - March 13, 2016 · 2016-03-07T19:44:06.265Z · score: 0 (0 votes) · LW · GW

Ah. Found it. Saw a different one that also matched but had 10 letters.

Comment by thealtar on Open Thread March 7 - March 13, 2016 · 2016-03-07T19:26:49.231Z · score: 0 (0 votes) · LW · GW

Should "pop slurper" be 10 letters?

Comment by thealtar on Open Thread March 7 - March 13, 2016 · 2016-03-07T16:47:03.137Z · score: 5 (5 votes) · LW · GW

Open Threads are already pretty crowded at around 200 posts per thread. Media threads also seem to have slightly different posting rules and are doing just fine as-is.

Comment by thealtar on Open Thread Feb 29 - March 6, 2016 · 2016-03-04T21:26:18.998Z · score: 0 (0 votes) · LW · GW

Aren't there wrist devices that can measure your heart rate over time? Not sure how well they work, but they might be cheaper than a gadget bed.

Comment by thealtar on Open Thread Feb 29 - March 6, 2016 · 2016-03-04T21:15:56.585Z · score: 1 (1 votes) · LW · GW

I worry that a lot of discussions about AI are being done via metaphor or based on past events. It's easy to make up a metaphor that matches any given future scenario, and it shouldn't be easily assumed that building an artificial brain is (or isn't!) anything like past events.