Posts

"The Bitter Lesson", an article about compute vs human knowledge in AI 2019-06-21T17:24:50.825Z · score: 51 (18 votes)
thought: the problem with less wrong's epistemic health is that stuff isn't short form 2018-09-05T08:09:01.147Z · score: 1 (4 votes)
Hypothesis about how social stuff works and arises 2018-09-04T22:47:38.805Z · score: 31 (14 votes)
Events section 2017-10-11T16:24:41.356Z · score: 12 (2 votes)
Avoiding Selection Bias 2017-10-04T19:10:17.935Z · score: 35 (18 votes)
Discussion: Linkposts vs Content Mirroring 2017-10-01T17:18:56.916Z · score: 20 (6 votes)
Test post 2017-09-25T05:43:46.089Z · score: 1 (0 votes)
The Social Substrate 2017-02-09T07:22:37.209Z · score: 15 (16 votes)

Comments

Comment by lahwran on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-26T19:11:21.614Z · score: 29 (9 votes) · LW · GW

I thought you were threatening extortion. As it is, given that people are being challenged to uphold morality, this response is still an offer to throw that away in exchange for money, under the claim that it's moral because of some distant effect. I'd encourage you to follow Jai's example and simply delete your launch codes.

Comment by lahwran on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-26T18:17:52.293Z · score: 21 (8 votes) · LW · GW

This seems extremely unprincipled of you :/

Comment by lahwran on Causal Reality vs Social Reality · 2019-06-25T08:42:01.300Z · score: 2 (2 votes) · LW · GW

Agreed @ the differences not being that great. I've heard this model around for a while, and I feel like while it does describe a distinction, that distinction is not clean in the territory.

Comment by lahwran on Causal Reality vs Social Reality · 2019-06-25T08:28:20.306Z · score: 16 (4 votes) · LW · GW

I think a lot of people in the world actually live much more in a mindset where concrete physical thinking is real than it might seem! The problem as I see it is that people's causal calibration varies, and people's impression of their own ability to hold beliefs about a topic without embarrassing themselves varies. The "social reality" case is what you get when someone focuses most or all of their attention on interacting with people and doesn't have anything hard in their life, so they simply don't need to be calibrated about physics and must rely on others' skill in such topics.

But I don't think nearly any neuroplastic human is going to be so unfamiliar with [edit: hit submit while trying to put my cursor back! continuing writing...]

... unfamiliar with causal reality that they can't comprehend the necessity of basic tasks. They might feel comfortable and safe and therefore simply not think about the details of the physics that implements their lives, but it's not a case of there being a social reality that's a separate layer of existence. It's more like the social behavior is what you get when people don't have the emotional safety and spare time and thinking to explore learning about the physics of their lives.

does that seem accurate to y'all? what do you think?

Comment by lahwran on Should rationality be a movement? · 2019-06-21T20:35:11.665Z · score: 9 (6 votes) · LW · GW

I agree with this in some ways! I think the rationality community as it is isn't what the world needs most, since putting effort into being friendly and caring for each other in ways that try to increase people's ability to discuss without social risk is IMO the core thing that's needed for humans to become more rational right now.

IMO, the techniques are relatively easy to share once you have the trust to talk about them, and merely require a lot of practice; but convincing large numbers of people that it's safe to think things through in public without weirding out their friends seems to me likely to require actually making it safe to think things through in public without weirding out their friends. I think that scaling a technical+crafted-culture solution to creating emotional safety to discuss what's true, one that results in many people putting regular effort into communicating friendliness toward strangers when disagreeing, would do a lot more for humanity's rationality than scaling discussion of specific techniques.

The problem as I see it right now is that this only works if it is seriously massively scaled. I feel like I see the reason CFAR got excited about circling now - seems like you probably need emotional safety to discuss usefully. But I think circling was an interesting thing to learn from, not a general solution. I think we need to design an internet that creates emotional safety for most of its users.

Thoughts on this balance, other folks?

Comment by lahwran on "The Bitter Lesson", an article about compute vs human knowledge in AI · 2019-06-21T17:48:11.357Z · score: 8 (5 votes) · LW · GW

My own thoughts on the topic of ai, as related to this:

I currently expect that the first strongly general AI will be trained very haphazardly, using lots of human knowledge akin to parenting, and will still have very significant "idiot-savant" behaviors. I expect we'll need an approach similar to deepmind's starcraft AI for the first version: that is, reaching past what current tools can do individually or automatically, and hacking them together in a complex training system built for the specific purpose. However, I think at this point we're getting pretty close in terms of the capabilities of individual components. If a transformer network were the only module in a system, but the training setup produced training data that required the transformer to become a general agent, I currently think it would be capable of the sort of abstracted variable-based consequential planning that MIRI folks talk about being dangerous.

Comment by lahwran on Discourse Norms: Moderators Must Not Bully · 2019-06-21T17:38:33.775Z · score: 9 (2 votes) · LW · GW

I strongly agree with this point. This is the core reason I have mostly stopped using less wrong. I just made a post, and being able to set my own moderation standards is kind of cool. That might make less wrong worth using as a blog, actually.

Comment by lahwran on Discourse Norms: Moderators Must Not Bully · 2019-06-15T05:09:17.185Z · score: 8 (8 votes) · LW · GW

eliezer's problem is what you have if your friend group is getting diluted. this problem is what you have if you're trying to dilute your friend group as much as you can.

Comment by lahwran on Steelmanning Divination · 2019-06-05T23:43:37.744Z · score: -7 (7 votes) · LW · GW

[deleted]

Comment by lahwran on Karma-Change Notifications · 2019-03-02T22:42:29.490Z · score: 17 (6 votes) · LW · GW

Hey cool. this is the sort of reward I need to enjoy a site enough to use it.

Comment by lahwran on Minimize Use of Standard Internet Food Delivery · 2019-02-12T03:54:46.661Z · score: 21 (14 votes) · LW · GW

I'm pretty uncomfortable with the tone of this article. The title is a command, the "epistemic status" label is simply "confident", and yet the comments contain many disagreements I feel are reasonable. Even though its main point is reasonable as far as I can see, I strong-downvoted for what I perceive to be bad discourse.

Comment by lahwran on Alcor vs. Cryonics Institute · 2019-02-12T03:49:16.252Z · score: 3 (2 votes) · LW · GW

Any news on this? (hey yall front page comment readers)

Comment by lahwran on An Ontology of Systemic Failures: Dragons, Bullshit Mountain, and the Cloud of Doom · 2018-09-09T19:44:16.152Z · score: 3 (2 votes) · LW · GW

I'm going to steal this, I'll probably try to use a continuous relaxation of it and try to break it into causal parts and such

Comment by lahwran on thought: the problem with less wrong's epistemic health is that stuff isn't short form · 2018-09-07T07:28:07.917Z · score: 4 (3 votes) · LW · GW

my metric of success: "get rationalists off of facebook". to do this you need to replace facebook. discord replaces part of it with a much healthier thing, but lesswrong like stuff is needed for the other part.

Comment by lahwran on thought: the problem with less wrong's epistemic health is that stuff isn't short form · 2018-09-07T07:26:52.159Z · score: 1 (1 votes) · LW · GW

it's literally the only thing I use. I basically never click on the post list because they're all collapsed and on a different page. give me a feeeeeeed

Comment by lahwran on thought: the problem with less wrong's epistemic health is that stuff isn't short form · 2018-09-06T20:59:56.351Z · score: 2 (2 votes) · LW · GW

because otherwise people don't read less wrong because the only things that happen there are people posting overthought crystallized ideas.

Comment by lahwran on Hypothesis about how social stuff works and arises · 2018-09-06T20:55:38.254Z · score: 1 (1 votes) · LW · GW
is it social if a human wants another human to be smiling because perception of smiles is good?

I wouldn't say so, no.

good point about lots of level 1 things being distorted or obscured by level 3. I think the model needs to be restructured to not have a privileged intrinsicness to level 1, but rather to initialize moment-to-moment preferences with one thing, then update that based on pressures from the other things

Comment by lahwran on Hypothesis about how social stuff works and arises · 2018-09-06T20:51:54.324Z · score: 1 (1 votes) · LW · GW

so I'm very interested in anything you feel you can say about how this doesn't work to describe your brain.

with respect to economics - I'm thinking about this mostly in terms of partially-model-based reinforcement learning/build-a-brain, and economics arises when you have enough of those in the same environment. the thing you're asking about there is more on the build-a-brain end and is like pretty open for discussion, the brain probably doesn't actually have a single scalar reward but rather a thing that can dispatch rewards with different masks or something

Comment by lahwran on Hypothesis about how social stuff works and arises · 2018-09-06T02:26:28.449Z · score: 1 (1 votes) · LW · GW

this would have to take the form of something like, first make the agent as a slightly-stateful pattern-response bot, maybe with a global "emotion" state thing that sets which pattern-response networks to use. then try to predict the world in parts, unsupervised. then have preferences, which can be about other agents' inferred mental states. then pull those preferences back through time, reinforcement learned. then add the retribution and deservingness things on top. power would be inferred from representations of other agents, something like trying to predict the other agents' unobserved attributes.

also this doesn't put level 4 as this super high level thing, it's just a natural result of running the world prediction for a while.

the better version of this model probably takes the form of a list of the most important built-in input-action mappings.
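
To make that build order concrete, here's a rough Python sketch of the layering described above. It's purely illustrative: every class, method, and parameter name (SketchAgent, response_nets, infer_other, and so on) is my own hypothetical naming, not an existing implementation.

```python
# Rough, hypothetical sketch of the build order above; not a real implementation.
import random


class SketchAgent:
    """Agent assembled in the layered order described above."""

    def __init__(self, emotions=("calm", "angry", "afraid")):
        # 1. slightly-stateful pattern-response bot, with a global "emotion"
        #    state that selects which pattern-response network is active
        self.emotion = emotions[0]
        self.response_nets = {e: {} for e in emotions}  # pattern -> action

        # 2. predict the world in parts, unsupervised (here: a toy lookup table)
        self.world_model = {}  # observation -> predicted next observation

        # 3. preferences, which can be about other agents' inferred mental states
        self.preferences = {}  # inferred state -> scalar value

        # 4. preferences pulled back through time, reinforcement-learned
        self.value_estimates = {}

    def act(self, observation):
        # pattern-response lookup under the current emotion gate
        net = self.response_nets[self.emotion]
        return net.get(observation, random.choice(["approach", "avoid", "wait"]))

    def infer_other(self, other_observations):
        # "power" read off as predictions of another agent's unobserved attributes
        return {k: self.world_model.get(k) for k in other_observations}

    def update(self, observation, next_observation, reward):
        # crude world-model update plus reward pulled back through time
        self.world_model[observation] = next_observation
        old = self.value_estimates.get(observation, 0.0)
        self.value_estimates[observation] = old + 0.1 * (reward - old)
```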

Comment by lahwran on Hypothesis about how social stuff works and arises · 2018-09-06T02:11:58.758Z · score: 1 (1 votes) · LW · GW

yeahhhhhh missing TAP type reasoning is a really critical failure here, I think a lot of important stuff happens around signaling whether you'll be an agent that is level 1 valuable to be around, and I've thought before about how keeping your hidden TAP depth short in ways that are recognizable to others makes you more comfortable to be around because you're more predictable. or something

Comment by lahwran on thought: the problem with less wrong's epistemic health is that stuff isn't short form · 2018-09-05T22:25:29.696Z · score: 9 (6 votes) · LW · GW
Moreover, why should there be discussion? If a post is authoritative, well researched and obviously correct, then the only thing to do is upvote it and move on. A lengthy discussion thread is a sign that the post is either unclear, incorrect, or has mindkilled its readers.

Uh.

A lengthy discussion thread is progress.

An authoritative post is one person doing all the work.

A discussion thread is getting help with your initial thoughts.

That is the reason I use Facebook as little as possible, and I would stop interacting with LessWrong entirely if it moved to this format.

I don't appreciate the participation-threat here, and I don't think it's reasonable to decide what's good based on what current users would respond to by abandoning the site - don't negotiate with terrorists, etc. I also think you're conflating things with the endless discussion thing - endless scrolling is super addictive, I agree, but I didn't mean that; I meant the way posts would be short-form, partially-finished thoughts by default. I think crap like what zvi and ben hoffman post makes the bar too high, and things need to be shorter and less of a Big Deal. I'd prefer if everything was on one page by default but you had to do explicit paging, to prevent severe addictiveness.

Facebook uses a number of nasty evil tricks, such as carefully timing when you get notifications, outright lying about the number of new posts there are to read (on mobile, it always says 9+ EVEN WHEN THERE ARE EXACTLY ZERO BECAUSE YOU UNSUBSCRIBED FROM EVERYTHING), infinite scroll, not showing you all the things your friends post at once so you can only see everything by going back repeatedly, not propagating notification counts between different clients, showing new notification *counts* without refresh but not showing the new notifications themselves without refresh, etc etc. it's not hard to be less addictive than facebook - it's the default.

I don't want high pageviews. I don't want upvotes. I want discussion. I want a place where people can exchange ideas. I want to take what already exists on rationalist's facebook walls and move it to lesswrong.

Comment by lahwran on Hypothesis about how social stuff works and arises · 2018-09-05T22:07:52.351Z · score: 10 (2 votes) · LW · GW

My current thinking about how to implement this without having to build full sized agents is to make little stateful reinforcement learner type things in a really simple agent-world, something like a typed-message-passing type thing. possibly with 2d or 3d locations and falloff of action effects by distance? then each agent can take actions, can learn to map agent to reward, etc.

  • make other agents' reward states observable, maybe with a gating where an agent can choose to make its reward state non-observable to other agents, in exchange for that action being visible somehow.
  • make some sort of game of available actions - something like, agents have resources they need to live, can take them from each other, value being close to each other, value stability, etc etc. some sort of thing to make there be different contexts an agent can be cooperatey or defecty in.
  • hardcode or preinitialize-from-code level 3 stuff. hardcode into the world identification of which agent took an action at you? irl there's ambiguity about cause and without that some patterns probably won't arise

could use really small neural networks I guess, or maybe just linear matrices of [agents, actions] and then mcmc sample from actions taken and stuff?

I'm confused precisely how to implement deservingness... seems like deservingness is something like a minimum control target for others' reward, retribution is a penalty that supersedes it? maybe?

if using neural networks, implementing the power thing on level 3 is a fairly easy prediction task; using bayesian mcmc or whatever, it's much harder. maybe that's an ok place to use NNs? trying to use NNs in a model like this feels like a bad idea unless the NNs are extremely regularized.... also the inference needed for level 4 is hard without NNs.
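
For what it's worth, here's a minimal sketch of the kind of agent-world I mean, using nothing beyond the Python standard library. All the names and the tiny share/take/wait action set are hypothetical placeholders, not a worked-out design:

```python
# Minimal hypothetical sketch of the agent-world described above.
import random


class WorldAgent:
    def __init__(self, name, hide_reward=False):
        self.name = name
        self.resources = 10.0                     # resources the agent needs to live
        self.hide_reward = hide_reward            # hiding is itself a visible choice
        self.action_values = {"share": 0.0, "take": 0.0, "wait": 0.0}

    def observable_state(self):
        # reward state is observable unless the agent opted to hide it
        return {"hidden": True} if self.hide_reward else {"reward": self.resources}

    def choose(self):
        # epsilon-greedy over the tiny action set
        if random.random() < 0.2:
            return random.choice(list(self.action_values))
        return max(self.action_values, key=self.action_values.get)

    def learn(self, action, reward):
        # simple running-average value update (the "map action to reward" part)
        old = self.action_values[action]
        self.action_values[action] = old + 0.1 * (reward - old)


def step(agents):
    for agent in agents:
        other = random.choice([a for a in agents if a is not agent])
        action = agent.choose()
        if action == "take":      # defecty: take resources from the other agent
            other.resources -= 1.0
            agent.resources += 1.0
        elif action == "share":   # cooperatey: small mutual gain
            other.resources += 0.5
            agent.resources += 0.25
        agent.learn(action, agent.resources)


agents = [WorldAgent("a"), WorldAgent("b", hide_reward=True), WorldAgent("c")]
for _ in range(100):
    step(agents)
print({a.name: round(a.resources, 2) for a in agents})
```

unambiguous attribution of which agent acted at which, plus the deservingness/retribution layer, would be the next things to hardcode on top of something like this.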

Comment by lahwran on Hypothesis about how social stuff works and arises · 2018-09-05T18:52:37.409Z · score: 3 (2 votes) · LW · GW

something that I realized bothers me about this model: I basically didn't include TAPs reasoning aka classical conditioning, I started from operant conditioning.

also, this explanation fails miserably at the "tell a story of how you got there in order to convey the subtleties" thing that eg ben hoffman was talking about recently.

Comment by lahwran on thought: the problem with less wrong's epistemic health is that stuff isn't short form · 2018-09-05T08:58:10.335Z · score: 7 (4 votes) · LW · GW

no, I was thinking of facebook. it needs to be a discussion platform, so it does need length, but basically what I want is "endless comment thread" type deal - a feed of discussion, as you'd get if the home page defaulted to opening to an endless open thread. as it is, open threads quarantine freeform discussion in a way that doesn't get eyes.

Comment by lahwran on thought: the problem with less wrong's epistemic health is that stuff isn't short form · 2018-09-05T08:47:39.358Z · score: 3 (4 votes) · LW · GW

man I'm kind of cranky tonight, sorry about that

Comment by lahwran on thought: the problem with less wrong's epistemic health is that stuff isn't short form · 2018-09-05T08:44:04.329Z · score: 3 (2 votes) · LW · GW

I posted it in meta in the first place

Comment by lahwran on Zetetic explanation · 2018-09-05T06:19:27.047Z · score: -5 (6 votes) · LW · GW
Get Less Wrong known as a site where ideas are taken seriously and bullshit is not tolerated

They should ban you for how you're interacting right now. I don't know why they're putting up with your dodging the issue, but you either don't have the ability to figure out when someone is correctly calling you out, or aren't playing nice. Your brand of bullshit is a major reason I've avoided less wrong, and I want it gone. I want people to critique my ideas ruthlessly and not critique me as a person with Deservingness at all. if you think being an asshole is normal, go away. you don't have to hold back on what you think the problems are, but I sure as hell expect you to say what you think the problems are without implying I said them wrong.

Comment by lahwran on Hypothesis about how social stuff works and arises · 2018-09-05T01:35:20.338Z · score: 7 (4 votes) · LW · GW

0. start with blank file

1. add preference function

2. add time

3. add the existence of another agent

4. add the existence of networks of other agents

Comment by lahwran on Unrolling social metacognition: Three levels of meta are not enough. · 2018-08-26T00:44:06.072Z · score: 1 (1 votes) · LW · GW

I also find the specifics of the method unclear. When he shared it in a lightning talk a few years ago, the point that humans model each other recursively like this was the useful part for me.

Comment by lahwran on Unrolling social metacognition: Three levels of meta are not enough. · 2018-08-26T00:39:32.161Z · score: 14 (5 votes) · LW · GW

Thank you so much for sharing this concept with me a few years ago, it's still in the top ten of largest coherent boosts another human gave me.

I want to mention, for citation reasons - I am pretty sure a large amount of the discussion about this topic as an explicit, definable piece of human common knowledge came into my friend group through you, even though I and others have written posts using the concept since you shared it, and even though it was perhaps just your retelling of an existing thing. I really appreciate it.

Comment by lahwran on Isolating Content can Create Affordances · 2018-08-24T19:13:56.495Z · score: 27 (9 votes) · LW · GW

For the record, I think some of the examples you have of this were instances where it was in fact actually desired that that content be welcome, and you just have values disagreements with the communities in question. but it does seem true to me that the pattern exists, which is why I'm not creating quarantine channels in a community like this that I run.

Comment by lahwran on Preliminary thoughts on moral weight · 2018-08-15T02:42:19.714Z · score: 2 (5 votes) · LW · GW

hot take: utilitarianism is broken, the only way to fix it is to invent economics - you can't convert utility between agents, and when you try to do anything resembling that, you get something that works sort of like (but not exactly the same as) money.

Comment by lahwran on Algo trading is a central example of AI risk · 2018-07-28T21:18:06.862Z · score: 13 (7 votes) · LW · GW

Agreed, this is a good point. Here are some thoughts my contrarian comment generator had in response to this:

It's also not a particularly lucrative place to apply the upper end of powerful agent intelligence. While ultimately everything boils down to algorithmic trading, the most lucrative trades are made by starting an actual company around a world-changing product. As a trader, the agent would commonly want to actually start a company to make more money, and that's not an action that is available until you go far enough down the diminishing returns curve of world modeling that it starts being able to plan through manipulating stock price to communicate.

also, high frequency trading is not likely to use heavy ML any time soon, due to strict latency constraints, and longer term trading is competing against human traders who, while imperfect, are still some of the least inadequate in the world at predicting the future.

Comment by lahwran on Reflections on Berkeley REACH · 2018-06-13T23:33:36.861Z · score: 10 (3 votes) · LW · GW

I found reach very helpful the other day when I was feeling very lonely - I just went there, joined a conversation, and after some hanging out, invited some of the folks there to go grab dinner with me. I'll probably do this fairly often now that I have the sense that it's possible.

(Also, I donate to reach. This is actually something of a questionable financial decision on my part but just based on that one experience, reach is already doing unexpectedly well on its goals. I might try hosting some of the events I've been thinking about as open-community things at REACH.)

Comment by lahwran on introducing: target stress · 2018-01-15T05:45:06.358Z · score: -9 (12 votes) · LW · GW

just downvoted every reply in this subthread

Comment by lahwran on Bay Solstice 2017: Thoughts · 2017-12-19T02:35:12.442Z · score: 7 (2 votes) · LW · GW
Mentioning this did and honestly still does feel unspeakably rude because there's basically no way to have that discussion without it being a direct social attack on something intensely personal for her and her parental figures.

Right, that's why I wanted to not just say "kids bothered me"; no point in hiding it in subtext when it's just as awkward. I edited out name, though.

Comment by lahwran on Bay Solstice 2017: Thoughts · 2017-12-19T01:49:06.402Z · score: 25 (9 votes) · LW · GW

I was going to write my thoughts here, but I am tickled to find that I would simply be copy and pasting the main post. Agreements:

  • Clapping kills the mood, and is by far the #1 problem I'd name. People being uncomfortable not clapping is not significantly different from people being uncomfortable participating in a ritual; it's what solstice is there for, so let's actually do it.
  • Quiet children being welcome during the ceremony seems reasonable, but [specific child who hasn't consented to be named] is an unusually loud kid. I noticed that, having other children there, it wasn't children in general, it was this small person specifically. I like the proposal of a kids room for loud small beings, I don't want to just kick small folks out.
  • Having a "solve this!" challenge in the middle made me suddenly go from trying to be solemn to trying to do a thing, and worried that I might fail. Which combined badly with
  • Being in a large crowd in a highly acoustically reflective space made it very hard to have conversations, on top of the feeling of rushed socializing: everyone is trying to find the best person to talk to, and so I feel like I'm taking my friend's time up if I try to talk to them when they could be meeting people; and this doubly so if I try to talk to someone new, because they don't even know me. This makes solstice completely fail as a community gathering for me. I was only able to interact with people I already knew. If we were going to make a community gathering, why not optimize for reducing friction in getting to know people better?

Comment by lahwran on Next narrow-AI challenge proposal · 2017-11-26T00:47:08.505Z · score: 4 (1 votes) · LW · GW

define "altering its own code" very precisely. is it allowed to have internal state on which it branches? what is its code? how much state can this program have? note that there's no clear distinction between code and data in modern computers; there's a weak one in the form of the distinction between the cpu code and the stack/heap, but frequently you're running an interpreter that has interesting code on the stack/heap, and it's pretty easy to blend between these. I would classify neural networks as being initially-random programs implemented in a continuous vm. are they programs that alter "their own" code? I would only say that they alter their own code if metalearning is used.

Comment by lahwran on Civility Is Never Neutral · 2017-11-26T00:24:21.831Z · score: 5 (3 votes) · LW · GW

I dislike this and want to ban open threads and require that these ideas be made into posts. I don't use lesserwrong much now because this feature is missing.

Comment by lahwran on Security Mindset and Ordinary Paranoia · 2017-11-26T00:18:45.155Z · score: 20 (12 votes) · LW · GW

debug note: I've been regularly finishing about a third of your articles. I think they're systematically too long for people with valuable time.

this is a pretty good article overall, though. no non-meta/non-editing comments.

Comment by lahwran on The Craft & The Community - A Post-Mortem & Resurrection · 2017-11-19T07:55:53.081Z · score: 11 (3 votes) · LW · GW
"yeah just come to work whenever you feel like it, don't wory about picking up the phone or respond to emails, just do what you want and we will have to work around it I guess"

This has been the policy at all startups I've worked at. "Be here at 9am (or at my current place, 10:30am), except if you don't want to. Come to this meeting, except if you'd rather not. We're paying you to be value aligned for this amount of time a week because you passed our capability tests on hiring, everything else is up to you. Just be agenty about it."

Comment by lahwran on Open thread, November 13 - November 20, 2017 · 2017-11-12T21:56:01.178Z · score: 3 (4 votes) · LW · GW

I would prefer to not have open threads. this feels like a hack to work around the site not having an ongoing open section, and it clutters up the ui.

Comment by lahwran on Mosquito killing begins · 2017-11-09T07:26:20.607Z · score: 4 (1 votes) · LW · GW

How does this solve the immunity problem, though?

Comment by lahwran on Inadequacy and Modesty · 2017-10-30T01:43:57.705Z · score: 11 (3 votes) · LW · GW

The last day it was $37.70 was 2014-03-14

Comment by lahwran on Leaders of Men · 2017-10-30T01:39:38.296Z · score: 13 (5 votes) · LW · GW
He would go on to a 97-65 record in 2006, come in second in manager-of-the-year voting, get a contract extension, and only get fired after wearing out our starting pitchers so much that we experienced one of the most epic late season collapses in baseball history in 2007, followed by a horrible 2008.

I take it ... that that's doing quite well? I don't know what 97 - 65 means, so I'm not actually clear if this is a very manipulative person who is well loved but usually fails anyway, or if he's actually successful.

Comment by lahwran on Frequently Asked Questions for Central Banks Undershooting Their Inflation Target · 2017-10-30T01:37:01.487Z · score: 20 (18 votes) · LW · GW

Downvoted this for being very long, having a sneering tone, and giving explanations that only work if you already know the topic. I worry that people will not vote sanely on this because they know who the author is.

Edit to clarify: I consider that worth downvoting in this instance because it reads as though it tried and failed to be more broadly accessible, in a way that turns me off as someone who is an interested MOP on the details of monetary policy.

Comment by lahwran on Distinctions in Types of Thought · 2017-10-25T03:24:57.461Z · score: 4 (1 votes) · LW · GW

That seems like weaseling out of the evidence to me. This is just another instance of neural networks being able to learn to do geometric computation to produce hard-edged answers, like alphago is; that they're being used to generate programs seems not super relevant to that. I certainly agree that it's not obvious exactly how to get them to learn the space of programs efficiently, but it seems surprising to expect it to be different in kind vs previous neural network stuff. This doesn't seem that different to me vs attention models in terms of what kind of problem learning the internal behavior presents.

Comment by lahwran on [deleted post] 2017-10-24T03:12:24.443Z

I'm not sure I conveyed what I meant, then; it feels badly wrong, like part of me is missing if I have a high magnitude of white and negligible magnitude in black or red. I feel very mixed, if we use these as the first axes.

Comment by lahwran on [deleted post] 2017-10-24T03:06:45.166Z

I feel like, compressing me in this vector space, I'd be best represented with codes

{white: 0.8, blue: 0.8, black: 0.2, red: 0.2, green: 0}

that energy is pretty spread between the axes, which makes me think this basis set isn't super great. I do like the concept of using spaces like this as compressors for people, though. (vaguely, compression is a more flexible view of the thing clustering is an instance of)

Comment by lahwran on Continuing the discussion thread from the MTG post · 2017-10-24T02:52:06.168Z · score: 14 (3 votes) · LW · GW

I don't understand how to navigate tumblr, is there more than one post there?