Meetup : London Social Meetup, 18/01/2015 2015-01-14T17:49:45.075Z
Meetup : London First 2015 Meetup, 04/01/2015 2014-12-31T18:01:57.256Z
Meetup : London Social Meetup, 21/12/2014 2014-12-15T23:54:42.764Z
Meetup : London Social Meetup, 07/12/2014 2014-12-01T15:58:30.302Z
Open thread, 9-15 June 2014 2014-06-09T13:07:20.908Z
Meetup : London Social Meetup (possibly) in the Sun 2014-06-06T11:37:16.899Z
Open Thread, May 5 - 11, 2014 2014-05-05T10:35:45.563Z
Open Thread April 16 - April 22, 2014 2014-04-16T07:05:36.020Z
Meetup : Socialising in the Sun, London 13/04 2014-04-09T16:22:12.973Z
Open Thread April 8 - April 14 2014 2014-04-08T11:11:50.069Z
Meetup : London Social Meetups - 23/03 and 30/03 2014-03-22T12:21:42.893Z
Meetup : London Games Meetup 09/03 [VENUE CHANGE: PENDEREL'S OAK!], + Social 16/02 2014-02-27T16:07:05.661Z
Identity and Death 2014-02-18T11:35:49.393Z
Meetup : London Practical Meetup - Calibration Training! 2013-12-01T12:46:18.061Z
Meetup : London Social Meetup, 01/12/2013 2013-11-27T16:45:28.355Z
Meetup : London Social Meetup, 24/11/2013 [Back to the Shakespeare's Head] 2013-11-18T10:38:17.739Z
Googling is the first step. Consider adding scholarly searches to your arsenal. 2013-05-07T13:30:38.019Z
Meetup : 18/11 London Meetup 2012-10-31T16:24:41.431Z


Comment by Tenoke on 2018 Review: Voting Results! · 2020-01-24T21:25:19.189Z · LW · GW

Why are so many resources being sunk into this specifically? I just don't understand how it makes sense, what the motivation is and how they arrived at the idea. Maybe there is a great explanation and thought process which I am missing.

From my point of view, there is little demand for it and the main motivation might plausibly have been "we want to say we've published a book" rather than something that people want or need.

Having said that, I'd rather get an answer to my initial comment - why it makes sense to you/them - rather than me having to give reasons why I don't see how it makes sense.

Comment by Tenoke on 2018 Review: Voting Results! · 2020-01-24T21:19:29.534Z · LW · GW

Thanks for the reply. That seems like a sensible position.

It sounds like maybe you were less involved in this than some of the 7 (is that right?) other employees/admins, so I'm very curious to hear their take, too.

Comment by Tenoke on 2018 Review: Voting Results! · 2020-01-24T20:51:58.208Z · LW · GW

Printing costs are hardly the only or even main issue, and I hadn't even mentioned them. You are right though, those costs make the insistence on publishing a book make even less sense.

Comment by Tenoke on 2018 Review: Voting Results! · 2020-01-24T17:48:19.744Z · LW · GW

I’m confused by this. Why would only voters be interested in the books?

Because I doubt there are that many more people interested in these than the number of voters. Even at 1000, it doesn't seem like a book makes all that much sense. In fact, I still don't get why turning them into a book is even considered.

Comment by Tenoke on 2018 Review: Voting Results! · 2020-01-24T14:57:03.246Z · LW · GW
  1. It seems like very few people voted overall if the average is "10-20" voters per post. I hope they are buying 50+ books each otherwise I don't see how the book part is remotely worth it.
  2. The voting was broken in multiple ways - you could spend more points than allowed, but instead of a cut-off, your vote was just cast out due to the organizers' mistake in allowing it.
  3. The voting was broken in the way described in the post, too.
  4. People didn't understand how the voting worked (Look at the few comments here) so they didn't really even manage to vote in the way that satisfies their preferences best. The system and the explanation seem at fault.
  5. I note that a lot of promotion went into this - including emails to non-active users, a lot of posts about it, long extended reviews.

So, my question is - do the organizers think it was worth it? And if yes, do you think it is worth it enough for publishing in a book? And if yes to both - what would failure have looked like?

Comment by Tenoke on Bay Solstice 2019 Retrospective · 2020-01-18T08:07:43.876Z · LW · GW

If you don't like baking don't go to a baking show. If you hate dancing, don't take a Salsa class. If you don't like thinking about the implications of a Solstice don't go to a Solstice event. They are, thankfully, entirely optional.

If you are listing some specific problem with the implementation of the event, I'd get it, but you are complaining about a fundamental part of it.

Comment by Tenoke on Bay Solstice 2019 Retrospective · 2020-01-17T20:20:27.078Z · LW · GW

He has been trying to do it for years and failed. The first time I read his attempts at doing that, years ago, I also assigned a high probability of success. Then 2 years passed and he hadn't done it, then another 2 years..

You have to adjust your estimates based on your observations.

Comment by Tenoke on Bay Solstice 2019 Retrospective · 2020-01-17T10:34:07.749Z · LW · GW

I have a bunch of comments on this:

  1. I really liked the bit. Possibly because I've been lowkey following his efforts.
  2. He looks quite good, and I like the beard on him.
  3. ..

I've always thought that his failed attempts at researching weight loss and applying what he learned were a counterexample to how applicable LW/EY rationality is. Glad to see he solved it when it became more important.

  4. Eliezer clearly gets too much flack in general, and especially in this case. It's not like I haven't criticised him, but come on.
  5. several people’s reaction was, “Why is this guy talking to me like I’m his friend, I don’t even know him”

Really? Fine, you don't know him, but if you don't know EY and are at a rationalist event, why would you be surprised by not knowing a speaker? From the public's reaction to his opening, it should've been clear most people did know him.

  6. I'm not against the concept of triggering - some stuff can be triggering, including eating disorders, but like this? Can a person not talk at all about weight gain/loss? Is the solstice at all LW-related if things can't be discussed even at their fairly basic (and socially accepted) level? Please, if you hated it, give a detailed response as to why. I'm genuinely curious.
Comment by Tenoke on CFAR Participant Handbook now available to all · 2020-01-08T09:12:03.800Z · LW · GW

I think of CFAR as having "forked the LW epistemological codebase", and then going on to do a bunch of development in a private branch. I think a lot of issues from the past few years have come from disconnects between people who have been using 'the private beta branch' and people using the classic 'LessWrong 1.0 epistemological framework.'

This rings true, and I like the metaphor. However, you seem to imply that the Open Source original branch is not as good as the private fork, pushed by a handful of people with a high turnover rate, which could be true but is harder to agree with.

Comment by Tenoke on [deleted post] 2020-01-07T16:43:48.259Z

not a real error, comment, post or karma.

Comment by Tenoke on CFAR Participant Handbook now available to all · 2020-01-07T13:53:37.403Z · LW · GW

I assume that means you print them? Because I find pdfs to be the worst medium, compared to mobi, epub or html - mainly because I usually read from my phone.

Comment by Tenoke on We run the Center for Applied Rationality, AMA · 2019-12-22T00:29:06.411Z · LW · GW

All you were saying was "That’s not the question that was asked, so … no." so I'm sorry if I had to guess and ask. Not sure what I've missed by 'not focusing'.

I see you've added both an edit after my comment and then this response as well, which is a bit odd.

Comment by Tenoke on We run the Center for Applied Rationality, AMA · 2019-12-21T10:45:53.943Z · LW · GW

while thinking about rationality / CFAR

for TEACHING rationality

You are saying those 2 aren't the same goal?? Even approximately? Isn't CFAR roughly a 'teaching rationality' organization?

Comment by Tenoke on We run the Center for Applied Rationality, AMA · 2019-12-21T10:42:06.475Z · LW · GW

Meditations on Moloch is top of the list by a factor of perhaps four

Is that post really that much more relevant than everything else for TEACHING rationality? How come?

Comment by Tenoke on Quadratic voting for the 2018 Review · 2019-12-21T10:04:55.921Z · LW · GW

"Current system < OP's system"

Comment by Tenoke on Quadratic voting for the 2018 Review · 2019-12-21T10:01:15.716Z · LW · GW
I think Tenoke thinks that we are talking about the usual post and comment vote system.

Isn't that what you were going to use initially or at least the most relevant system here to compare to?

Comment by Tenoke on Quadratic voting for the 2018 Review · 2019-12-21T06:56:08.931Z · LW · GW

Seems better than the current system, which as far as I can tell is just 10 if statements that someone chose without much reason to think they make sense.

Comment by Tenoke on We run the Center for Applied Rationality, AMA · 2019-12-19T19:20:48.800Z · LW · GW

I know you do follow-ups with most/all CFAR attendees. Do you have any aggregate data from the questionnaires? How much do they improve on the outcomes you measure and which ones?

Comment by Tenoke on Book Recommendations for social skill development? · 2019-12-14T08:21:09.997Z · LW · GW

Are there that many social skills mentors who take on students for that to be a more realistic course of action than finding books? Wouldn't you need solid social skills to convince one to mentor you in the first place?

Comment by Tenoke on ialdabaoth is banned · 2019-12-13T21:19:09.249Z · LW · GW

I mean, he uses the exact same phrase I do here but yes, I see your point.

Comment by Tenoke on ialdabaoth is banned · 2019-12-13T17:45:22.873Z · LW · GW

Vaniver is saying that the personal stuff didn't come into account when banning him and that epistemic concerns were enough. From OP:

We think that ialdabaoth poses a substantial risk to our epistemic environment due to manipulative epistemic tactics, based on our knowledge and experience of him. This is sufficient reason for the ban, and holds without investigating or making any sort of ruling on other allegations.

but then the epistemic concerns seem to be purely based on stuff from the "other allegations" part.

And honestly, the quality of that post is (subjectively) higher than the quality of > 99% of current LW posts, yet the claim is that content is what he is banned for, which is a bit ridiculous. What I am asking is, why pretend it is the content and that the "other allegations" have no part?

Comment by Tenoke on ialdabaoth is banned · 2019-12-13T16:54:43.312Z · LW · GW
it's not like there's anything I can do about it anyway.

It's sad it's gotten that bad with the current iteration of LW. Users here used to think they have a chance at influencing how things are done and plenty of things were heavily community-influenced despite having a benevolent dictator for life.

Comment by Tenoke on ialdabaoth is banned · 2019-12-13T16:37:20.203Z · LW · GW

He is using this comment to show the 'epistemic concerns' side specifically, and claiming the personal stuff was separate.

This is the specific claim.

We think that ialdabaoth poses a substantial risk to our epistemic environment due to manipulative epistemic tactics, based on our knowledge and experience of him. This is sufficient reason for the ban, and holds without investigating or making any sort of ruling on other allegations.
Comment by Tenoke on ialdabaoth is banned · 2019-12-13T10:31:22.677Z · LW · GW
jimrandomh's comment, linked in the OP, is the current best explanation of the epistemic concerns.

Excluding the personal stuff, this comment is just a somewhat standard LW critique of a LW post (which has less karma than the original post, fwiw). If this is the criterion for an 'epistemic concerns' ban, then you must've banned hundreds of people. If you haven't, you are clearly banning him for the other reasons; I don't know why you insist on being dishonest about it.

Comment by Tenoke on ialdabaoth is banned · 2019-12-13T08:54:35.201Z · LW · GW

I read this post where you keep claiming you are banning him for 'epistemic concerns' but then link to 0 examples and mostly talk about some unrelated real-life thing which you also give 0 real explanation for.

The comments here mention a sex crime, but OP doesn't. If that's what happened, why vaguebook, stay silent for a year, and claim the ban's for 'epistemic concerns'? Who else have you banned for 'epistemic concerns' - nobody?

Honestly, after reading everything here I do have major concerns about ialdabaoth's character, but the main epistemic concerns I have are about OP presenting this dishonestly after a year of silence.

Comment by Tenoke on Open & Welcome Thread - December 2019 · 2019-12-11T10:17:27.402Z · LW · GW

Is there any explanation of the current Karma System? The main thing I can find is this. (You need to scroll to 'The karma system'; for some reason you can click on subsections to go to them, but you can't link to them.)

Also why do I see a massive message that says 'habryka's commenting guidelines' when I am writing this comment, but there are no guidelines or link? Is this just a weird extra ad for your own name?

Comment by Tenoke on Is Rationalist Self-Improvement Real? · 2019-12-10T12:07:01.299Z · LW · GW
Even if they only work in modern society, one of the millions of modern people who wanted financial, social, and romantic success before you would have come up with them.

Nobody is claiming that everything around rationalist circles is completely new or invented by them. It's often looked to me more like separating the more and less useful stuff with various combinations of bottom-up and top-down approaches.

Additionally, I'd like to identify as someone who is definitely in a much, much better place now because they discovered LW almost a decade ago. Even though I also struggle with akrasia and do less to improve myself than I'd like, I'm very sure that just going to therapy wouldn't have improved my outcomes in so many areas, especially financially.

Comment by Tenoke on The LessWrong 2018 Review · 2019-11-28T14:43:46.760Z · LW · GW

There are definitely some decent posts, but calling a couple of good posts an official LessWrong Sequence still seems to cheapen what that used to mean.

Not to mention that I read this on facebook, so I barely associate it with here.

Note also that you can view this on GreaterWrong.

Thanks, GreaterWrong seems to still be an improvement over the redesign for me. I'm back to using it.

Comment by Tenoke on The LessWrong 2018 Review · 2019-11-27T10:23:23.334Z · LW · GW

I got an email about this, so I decided to check whether the quality of content here has really increased enough to claim to have material for a new Sequence (I stopped coming here after the, in my opinion, botched execution of LW2).

I checked the posts, and I don't see anywhere near enough quality content to publish something called a Sequence, without cheapening the previous essays and what 'The Sequences' means in a LessWrong context.

Comment by Tenoke on Open Thread August 2018 · 2018-08-16T18:06:19.130Z · LW · GW

Does the Quantum Physics Sequence hold up?

It's been the better part of a decade since I read it (and I knew a lot less back then), and recently I've been curious about getting a refresher. I am not going to pick up a textbook or spend too much time on this, but if it doesn't hold up what alternative/supplementary resources would you recommend (the less math-heavy the better, although obviously some of the math is inescapable)?

Comment by Tenoke on Leaving beta: Voting on moving to · 2018-03-12T16:13:20.032Z · LW · GW

I haven't gotten the voting link (I've now emailed to ask), but I am sadly already pretty negatively surprised at how the new site has turned out (props to the maker of greaterwrong, though) and very much hope that it doesn't completely replace the original. Even if the original is just killed and made read-only (since after all the efforts to migrate people here, it is even more unlikely that the original lesswrong will get any new use), that's a better outcome for me.

I wouldn't even post this, but I hear a lot more people sharing the same opinion (selection effects apply), though (selection effects again) few of them are here to actually say it.

Comment by Tenoke on HOWTO: Screw Up The LessWrong Survey and Bring Great Shame To Your Family · 2017-10-15T09:21:08.840Z · LW · GW

Yeah, this survey was pretty disappointing - I had to stop myself from making a negative comment after I took it (though someone else had). I am glad you realized it too, I guess. Even things like the survey starting with a bunch of questions about the new lesswrong-inspired site, and the spacing between words, were off - let alone the things you mention.

I am honestly a little sad that someone more competent in matters like these, like gwern, didn't take over (as I always assumed would happen if yvain gave up on doing it), because half-hearted attempts like this probably hurt a lot more than they help - e.g. someone coming back in 4 months and seeing how we've gone down to only 300 (!) responders in the annual survey is going to assume LW is even more dead than it really is. This reasoning goes beyond the survey.

Comment by Tenoke on LW2.0 now in public beta (you'll need to reset your password to log in) · 2017-09-24T11:04:36.636Z · LW · GW

So there's no way for us to login with our regular accounts before the launch? Is it scheduled for anytime soon?

I'd hate to keep seeing all the constant promotion for your site without being able to check it out (since I am not really up for using a temporary account).

Comment by Tenoke on Open thread, August 21 - August 27, 2017 · 2017-08-21T13:55:32.589Z · LW · GW

The fact that you engage with the article and share it, might suggest to the author that he did everything right.

True, but this is one of the less bad articles that have Terminator references (as it makes a bit more sense in this specific context) so I mind less that I am sharing it. It's mostly significant insofar as being one I saw today that prompted me to make a template email.

The idea that your email will discourage the author from writing similar articles might be mistaken.

I can see it having no influence on some journalist, but again

I am not sure how big the impact will be, but after the email is already drafted sending it to new people is pretty low effort and there's the potential that some journalists will think twice..


Secondly, calling autonomous weapons killer robots isn't far off the mark.

It's still fairly misleading, although a lot less than in AGI discussions.

The policy question of whether or not to allow autonomous weapons is distinct from AGI.

I am not explicitly talking about AGI either.

Comment by Tenoke on Open thread, August 21 - August 27, 2017 · 2017-08-21T12:03:14.484Z · LW · GW

After reading yet another article which mentions the phrase 'killer robots' 5 times and has a photo of terminator (and robo-cop for a bonus), I've drafted a short email asking the author to stop using this vivid but highly misleading metaphor.

I'm going to start sending this same email to other journalists that do the same from now on. I am not sure how big the impact will be, but after the email is already drafted sending it to new people is pretty low effort and there's the potential that some journalists will think twice before referencing Terminator in AI Safety discussions, potentially improving the quality of the discourse a little.

The effect of this might be slightly larger if more people do this.

Comment by Tenoke on Open thread, August 7 - August 13, 2017 · 2017-08-10T00:35:57.394Z · LW · GW

At the moment that seems to require a human machine learning expert, and recent Google experiments suggest that they are confident they can develop an API that can do this without machine learning experts being involved.

At a recent LW discussion someone told me that this kind of research doesn't even count as an attempt to develop AGI.

Not in itself, sure, but yeah, there was the bit about the progress made so you won't need an ML engineer to develop the right net to solve a problem. However, there was also the bit where they have nets doing novel research (e.g. new activation functions with better performance than SOTA, novel architectures, etc.). And for going further in that direction, they just want more compute, which they're going to be getting more and more of.

I mean, if we've reached the point where AI research is a problem tackleable by (narrow) AI, which can further benefit from that research and apply it to make further improvements faster/with more accuracy... then maybe there is something to potentially worry about.

Unless of course you think that AGI will be built in such a different way that no/very few DL findings are likely to be applicable. But even then I wouldn't be convinced that this completely separate AGI research won't also be the kind of problem that DL will be able to handle - as AGI research is, in the end, a "narrow" problem.

Comment by Tenoke on Open thread, August 7 - August 13, 2017 · 2017-08-09T17:19:37.684Z · LW · GW

Karpathy mentions offhand in this video that he thinks he has the correct approach to AGI but doesn't say what it is. Before that he lists a few common approaches, so I assume it's not one of those. What do you think he suggests?

P.S. If this worries you that AGI is closer than you expected, do not watch Jeff Dean's overview lecture of DL research at Google.

Comment by Tenoke on Open thread, Dec. 05 - Dec. 11, 2016 · 2016-12-09T19:37:32.911Z · LW · GW

More quality content (either in terms of discussions or actual posts).

P.S. I do see how that might not be especially helpful.

Comment by Tenoke on European Community Weekend 2016 · 2015-12-20T12:20:22.544Z · LW · GW

What is the latest time that I can sign up and realistically expect that there'll be spaces left? I am interested, but I can't really commit 10 months in advance.

Comment by Tenoke on Open Thread, May 18 - May 24, 2015 · 2015-05-21T14:36:23.067Z · LW · GW

Apparently the new episode of Morgan Freeman's Through the Wormhole is on the Simulation Hypothesis.

Comment by Tenoke on We Should Introduce Ourselves Differently · 2015-05-19T20:17:38.782Z · LW · GW

If someone is going to turn away at the first sight of an unknown term, then they have no chance of lasting here (I mean, imagine what'll happen when they see the Sequences).

Comment by Tenoke on Open thread, Mar. 16 - Mar. 22, 2015 · 2015-03-18T00:18:12.002Z · LW · GW

Relevant thread

Comment by Tenoke on Rationality: From AI to Zombies · 2015-03-13T11:25:23.595Z · LW · GW

Awesome! How large is it altogether (in words)?

Comment by Tenoke on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 117 · 2015-03-09T14:00:44.975Z · LW · GW

"Harry should've thought to preserve the Death Eaters' heads" pretty much all of those complaints were made after the fact.

Did you even go to r/hpmor after Chapter 114? A bunch of people were saying that he should at least cool them, or try to revive them after he uses his time-turner, or to incapacitate instead of kill, or anything. Given that it also occurred to me immediately and was discussed multiple times on ##HPMOR, I'm pretty sure there was no hindsight involved.

Comment by Tenoke on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 114 + chapter 115 · 2015-03-04T18:49:02.571Z · LW · GW

For many people it will look too easy, but only with the benefit of hindsight.

Only with the benefit of hindsight? I bet 3 people that the solution won't involve PT, 2 of them within hour(s) of chapter 113 coming out, as it was the most obvious (while implausible for some) solution for many people. Specifically, transfiguring the tip of the wand/leg/earth/air into nanowires was mentioned by so many people within minutes of posting the chapter. There was no hindsight involved.

Comment by Tenoke on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 114 + chapter 115 · 2015-03-04T18:01:42.862Z · LW · GW

I'd like that slightly more, but such minor changes barely address the issue. Also, I am already suspecting that the way in which Harry will unparalyze himself after his improbable PT rampage is just going to involve some other unlikely feat.

Comment by Tenoke on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 114 + chapter 115 · 2015-03-04T13:51:13.725Z · LW · GW

Yes, HPMOR has been generally more believable, except for the one scene that matters in the whole book. At any rate, I am not sure if defeating Voldemort by use of an artefact - the Elder Wand - is any less believable than using transfigured nanowires in secret against a much smarter version of Voldemort, who forgets to use shields/wards/attention in order to catch Harry this one time, and lets him have his wand when he doesn't need it.

Comment by Tenoke on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 114 + chapter 115 · 2015-03-04T13:37:54.906Z · LW · GW

I didn't really check the LessWrong thread earlier, but I am happy to see that people here are a lot less willing to accept the unsatisfying solution than at r/HPMOR.

I used to really enjoy HPMOR, but it is now basically ruined for me - Voldemort holding the idiot ball, the one time where things really matter, and this is also when Harry's untested strategies work like a charm on the first try without him being noticed? I guess I was too quick to praise Eliezer on being able to write more believable scenes than Rowling.

What disappoints me almost as much is that the original answer was (from all that I can gather) to mainly just use the swerving hex. Hahahaha.

Comment by Tenoke on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 109 · 2015-02-24T15:46:14.307Z · LW · GW

As with the original horcrux spell, I would only be able to enter a victim who contacted the physical horcrux... and I had hidden my unnumbered horcruxes in places where nobody would ever find them.

"My remaining hope was the horcruxes I had hidden in the hopeless idiocy of my youth. Imbuing them into ancient lockets, instead of anonymous pebbles; guarding them beneath wells of poison in the center of a lake of Inferi, instead of portkeying them into the sea. If someone found one of those, and penetrated their ridiculous protections... but that seemed like a distant hope.

The text suggests that Riddle was stupider at a younger age, which is when he made the v1 horcruxes and used story-like hiding places like those mentioned above. Then later on, when he was probably at least 'twice [Harry's] age', he grew wiser, made the horcrux v2, and started hiding them well. Then he dies, and finds out that his only hope is the horcruxes from his youth, which weren't hidden well, and it is suggested that Quirrell found one of those, so likely a v1 horcrux.

At any rate, even if we just focus on the 'one of my earliest horcruxes' part, that still heavily implies a v1 horcrux.

Comment by Tenoke on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 109 · 2015-02-24T13:53:42.191Z · LW · GW

The thought of making a better horcrux, of not being content with the spell I had already learned... this thought did not come to me until I had grasped the stupidity of ordinary people, and realised which follies of theirs I had imitated.

This apparently happened significantly later in his life. However

"Nine years and four months after that night, a wandering adventurer named Quirinus Quirrell won past the protections guarding one of my earliest horcruxes.

Doesn't this suggest that Quirrell stumbled upon a horcrux v1, given that it was one of the earliest horcruxes, and that it was 'hidden' by the less wise Riddle?

I suspect Eliezer just didn't notice this, or the explanation is that after inventing v2, Riddle went back and upgraded all his old horcruxes or something. The alternative explanation is that all v1 horcruxes upgraded automatically when he made the v2; however, we know that Harry is also a v1 horcrux, so that wouldn't make sense.