Posts

Meetup : London Social Meetup, 18/01/2015 2015-01-14T17:49:45.075Z
Meetup : London First 2015 Meetup, 04/01/2015 2014-12-31T18:01:57.256Z
Meetup : London Social Meetup, 21/12/2014 2014-12-15T23:54:42.764Z
Meetup : London Social Meetup, 07/12/2014 2014-12-01T15:58:30.302Z
Open thread, 9-15 June 2014 2014-06-09T13:07:20.908Z
Meetup : London Social Meetup (possibly) in the Sun 2014-06-06T11:37:16.899Z
Open Thread, May 5 - 11, 2014 2014-05-05T10:35:45.563Z
Open Thread April 16 - April 22, 2014 2014-04-16T07:05:36.020Z
Meetup : Socialising in the Sun, London 13/04 2014-04-09T16:22:12.973Z
Open Thread April 8 - April 14 2014 2014-04-08T11:11:50.069Z
Meetup : London Social Meetups - 23/03 and 30/03 2014-03-22T12:21:42.893Z
Meetup : London Games Meetup 09/03 [VENUE CHANGE: PENDEREL'S OAK!], + Social 16/02 2014-02-27T16:07:05.661Z
Identity and Death 2014-02-18T11:35:49.393Z
Meetup : London Practical Meetup - Calibration Training! 2013-12-01T12:46:18.061Z
Meetup : London Social Meetup, 01/12/2013 2013-11-27T16:45:28.355Z
Meetup : London Social Meetup, 24/11/2013 [Back to the Shakespeare's Head] 2013-11-18T10:38:17.739Z
Googling is the first step. Consider adding scholarly searches to your arsenal. 2013-05-07T13:30:38.019Z
Meetup : 18/11 London Meetup 2012-10-31T16:24:41.431Z

Comments

Comment by Tenoke on Daniel Kahneman has died · 2024-03-28T22:31:27.526Z · LW · GW

I own only ~5 physical books now (prefer digital), and 2 of them are Thinking, Fast and Slow. Despite him never being on the site, I've always thought of him as something of a founding grandfather of LessWrong.

Comment by Tenoke on My Interview With Cade Metz on His Reporting About Slate Star Codex · 2024-03-26T17:51:27.274Z · LW · GW

He comes out pretty unsympathetic and stubborn.

Did any of your views of him change?

Comment by Tenoke on My Objections to "We’re All Gonna Die with Eliezer Yudkowsky" · 2023-03-21T10:14:53.816Z · LW · GW

I'm sympathetic to some of your arguments, but even if we accept that the current paradigm will lead us to an AI that is pretty similar to a human mind, I'm still not very optimistic, even in the best case, that a scaled-up, randomly chosen, almost-human mind is a great outcome. I simply disagree where you say this:

> For example, humans are not perfectly robust. I claim that for any human, no matter how moral, there exist adversarial sensory inputs that would cause them to act badly. Such inputs might involve extreme pain, starvation, exhaustion, etc. I don't think the mere existence of such inputs means that all humans are unaligned.

Humans aren't that aligned at the extremes, and the extremes matter when we're talking about the smartest entity making every important decision about everything.

 

Also, your general arguments that the current paradigm isn't that bad are reasonable, but again, I think our situation is a lot closer to all-or-nothing: if we get pretty far with RLHF or whatever, and scale the model up until it's extremely smart and eventually making every decision of consequence, then unless the alignment is near perfect, the chance that the remaining problematic parts screw us over seems uncomfortably high to me.

Comment by Tenoke on Bankless Podcast: 159 - We’re All Gonna Die with Eliezer Yudkowsky · 2023-02-21T11:46:23.748Z · LW · GW

I can't even get a good answer to "What's the GiveWell of AI Safety?" so that I can quickly donate to a reputable, widely agreed-upon option without much thinking; at best I get old lists of a ton of random small orgs and give up. I'm not very optimistic that ordinary, less convinced people who want to help are having an easier time.

Comment by Tenoke on Is the AI timeline too short to have children? · 2022-12-14T21:56:09.299Z · LW · GW

It seems quite different. The main argument in that article is that climate change wouldn't make the lives of readers' children much worse or shorter, and that's not the case for AI.

Comment by Tenoke on [deleted post] 2022-11-27T12:42:19.238Z

Do you have any evidence for this?

My prior is that other things like this are less effective, and you need evidence to show they are more effective, not vice versa.

> Not all EAs are longtermists.

Of course. I'm saying it doesn't even get to make that argument, which can sometimes muddy the waters enough to make some odd-seeming causes look at least plausibly effective.

Comment by Tenoke on [deleted post] 2022-11-27T11:10:15.740Z

I'm impressed by how modern EAs manage to spin any cause into being supposedly EA.

There's just no way that things like this are remotely as effective as, say, GiveWell causes (it wouldn't even meet a much lower bar), and it barely even has any longtermist points in its favour that would let me see why there's at least a chance it could be worth it.

EA's whole brand is massively diluted by all these causes, and I don't think they are remotely as effective as other places your money can go, nor that they help the general message.

It's like people get into EA and realize it's a good idea, but then they want to participate in the community and not just donate, so everyone tries to come up with new, clearly ineffective (compared to the alternatives) causes and spin them as EA.

Comment by Tenoke on Semi-conductor/AI Stock Discussion. · 2022-11-26T08:36:37.301Z · LW · GW

While NVDA is naively the most obvious play (the vast majority of GPU-based AI systems use their hardware), I fail to see why you'd expect it to outperform the market, at least in the medium term. Even if you don't believe in the EMH, I assume you acknowledge things can be more or less priced in? Well, NVDA is such an obvious choice that all the main arguments for it do seem to be priced in, which has helped get it to a P/E ratio of 55.

 

I also don't see OpenAI making a huge dent in MSFT's numbers anytime soon. Almost all of MSFT's price is going to be determined by the rest of their business. Quick googling suggests revenue of ~$3m for OpenAI versus ~$168b total for MSFT in 2021. Even if OpenAI were already 100 times larger, that would still only be ~$300m, well under 1% of MSFT's revenue, so I still wouldn't see how a bet on MSFT just because of it is justified. It seems like this pick was chosen just because OpenAI is popular, and not out of any real analysis beyond that. Can you explain what I'm missing?

 

I do like your first three choices of TSM, Google and Samsung (is that really much of an AI play, though?).

Comment by Tenoke on philh's Shortform · 2022-11-21T12:02:40.493Z · LW · GW

No, it's the blockchain Terra (with Luna being its main token).

 

https://en.wikipedia.org/wiki/Terra_(blockchain)

Comment by Tenoke on Will we run out of ML data? Evidence from projecting dataset size trends · 2022-11-15T23:04:37.468Z · LW · GW

There is little reason to think that's a big issue. A lot of data is semi-tagged, and some of the ML-generated data can be removed either that way or by being detected by newer models. And in general, as long as the 'good' kind of data keeps increasing too, model quality will keep increasing even if you have some extra noise.

Comment by Tenoke on Open & Welcome Thread - Oct 2022 · 2022-10-24T21:02:38.902Z · LW · GW

What's the GiveWell/AMF of AI Safety? I'd like to occasionally donate. In the past I've only done so for MIRI, a few times. A quick googling fails to return anything useful in the top results, which is odd given how much seems to be written on LW/EA and other forums on the subject every week.

Comment by Tenoke on Writing Russian and Ukrainian words in Latin script · 2022-10-23T21:23:58.253Z · LW · GW

In Bulgaria (where Cyrillic was invented), writing in Latin script is common (especially from back before Cyrillic support was good) but frowned upon, as it is considered uneducated and ugly. The way we do it is to just replace each letter with the equivalent Latin letter, one to one, and do whatever with the few that don't fit (e.g. just use y for ъ, though some might use a; ч is just ch, etc.). So молоко is just moloko, водка is vodka, стол is stol, and so on. This is also exactly how it works on my keyboard with the phonetic layout.

Everyone else who uses Cyrillic online seems to get it when you write like that, in my experience, though nowadays it's rarer.
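
To make the scheme concrete, here is a minimal sketch in Python of that one-to-one replacement. The mapping table is my own informal one - the choices for letters like ъ and ч are the casual ones described above, not any official transliteration standard:

```python
# Informal Bulgarian Cyrillic -> Latin mapping (illustrative, not an official standard).
TRANSLIT = {
    "а": "a", "б": "b", "в": "v", "г": "g", "д": "d", "е": "e",
    "ж": "zh", "з": "z", "и": "i", "й": "y", "к": "k", "л": "l",
    "м": "m", "н": "n", "о": "o", "п": "p", "р": "r", "с": "s",
    "т": "t", "у": "u", "ф": "f", "х": "h", "ц": "ts", "ч": "ch",
    "ш": "sh", "щ": "sht", "ъ": "y", "ь": "y", "ю": "yu", "я": "ya",
}

def transliterate(text: str) -> str:
    """Lowercase the input, then swap each Cyrillic letter for its Latin
    equivalent; anything not in the table is left as-is."""
    return "".join(TRANSLIT.get(ch, ch) for ch in text.lower())

print(transliterate("молоко"))  # -> moloko
print(transliterate("водка"))   # -> vodka
print(transliterate("стол"))    # -> stol
```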

Comment by Tenoke on Resurrecting all humans ever lived as a technical problem · 2022-10-22T13:47:34.736Z · LW · GW

I've been considering for years that I should write more, and save more of my messages and activities, purely so I can constrain the mind-space for a future AI recreating a version of me - one as close to my current self as the me of years ago is. As far as I can tell, this is fairly low effort, and the more information you have, the closer you can get.

I just don't see an obvious refutation: why would a person created by an advanced AI - one optimizing to create someone who would, with the highest probability it can manage, write/do/etc. all the things I have - be that different from me?

Comment by Tenoke on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-21T12:57:08.536Z · LW · GW

A lot of people take a lot of drugs at big events like Burning Man with little issue. In my observation, it's typically the overly frequent and/or targeted psychedelic use that causes such big changes, at least in those who start off fairly stable.

Comment by Tenoke on 2018 Review: Voting Results! · 2020-01-24T21:25:19.189Z · LW · GW

Why are so many resources being sunk into this specifically? I just don't understand how it makes sense, what the motivation is and how they arrived at the idea. Maybe there is a great explanation and thought process which I am missing.

From my point of view, there is little demand for it and the main motivation might plausibly have been "we want to say we've published a book" rather than something that people want or need.

Having said that, I'd rather get an answer to my initial comment - why it makes sense to you/them - than have to give reasons why I don't see how it makes sense.

Comment by Tenoke on 2018 Review: Voting Results! · 2020-01-24T21:19:29.534Z · LW · GW

Thanks for the reply. That seems like a sensible position.

It sounds like maybe you were less involved in this than some of the 7 (is that right?) other employees/admins, so I'm very curious to hear their take, too.

Comment by Tenoke on 2018 Review: Voting Results! · 2020-01-24T20:51:58.208Z · LW · GW

Printing costs are hardly the only or even the main issue, and I hadn't even mentioned them. You're right, though: those costs make the insistence on publishing a book make even less sense.

Comment by Tenoke on 2018 Review: Voting Results! · 2020-01-24T17:48:19.744Z · LW · GW

> I'm confused by this. Why would only voters be interested in the books?

Because I doubt there are that many more people interested in these than the number of voters. Even at 1,000, it doesn't seem like a book makes all that much sense. In fact, I still don't get why turning them into a book is even being considered.

Comment by Tenoke on 2018 Review: Voting Results! · 2020-01-24T14:57:03.246Z · LW · GW
  1. It seems like very few people voted overall if the average is "10-20" voters per post. I hope they are each buying 50+ books, because otherwise I don't see how the book part is remotely worth it.
  2. The voting was broken in multiple ways - you could spend more points than allowed, and instead of being cut off, your vote was just thrown out, due to the organizers' mistake in allowing it in the first place.
  3. The voting was also broken in the way described in the post.
  4. People didn't understand how the voting worked (look at the few comments here), so they didn't really manage to vote in the way that best satisfies their preferences. The system and its explanation seem at fault.
  5. I note that a lot of promotion went into this - including emails to non-active users, a lot of posts about it, and long extended reviews.

So, my question is - do the organizers think it was worth it? And if yes, do you think it is worth it enough to publish as a book? And if yes to both - what would failure have looked like?

Comment by Tenoke on Bay Solstice 2019 Retrospective · 2020-01-18T08:07:43.876Z · LW · GW
Comment by Tenoke on Bay Solstice 2019 Retrospective · 2020-01-17T20:20:27.078Z · LW · GW

He has been trying to do it for years and failed. The first time I read about his attempts, years ago, I also assigned a high probability of success. Then 2 years passed and he hadn't done it, then another 2 years...

You have to adjust your estimates based on your observations.

Comment by Tenoke on Bay Solstice 2019 Retrospective · 2020-01-17T10:34:07.749Z · LW · GW

I have a bunch of comments on this:

  1. I really liked the bit. Possibly because I've been lowkey following his efforts.
  2. He looks quite good, and I like the beard on him.
  3. ..

I've always thought that his failed attempts at researching weight loss and applying what he learned were a counterexample to how applicable LW/EY rationality is. Glad to see he solved it when it became more important.

  4. Eliezer clearly gets too much flak in general, and especially in this case. It's not like I haven't criticised him, but come on.
  5. > several people’s reaction was, “Why is this guy talking to me like I’m his friend, I don’t even know him”

Really? Fine, you don't know him, but if you don't know EY and are at a rationalist event, why would you be surprised by not knowing a speaker? From the audience's reaction to his opening it should've been clear most people did know him.

  6. I'm not against the concept of triggering - some things can be triggering, including eating disorders, but like this? Can a person not talk at all about weight gain/loss? Is the solstice at all LW-related if things can't be discussed even at their fairly basic (and socially accepted) level? Please, if you hated it, give a detailed response as to why. I'm genuinely curious.

Comment by Tenoke on CFAR Participant Handbook now available to all · 2020-01-08T09:12:03.800Z · LW · GW

> I think of CFAR as having "forked the LW epistemological codebase", and then going on to do a bunch of development in a private branch. I think a lot of issues from the past few years have come from disconnects between people who have been using 'the private beta branch' and people using the classic 'LessWrong 1.0 epistemological framework.'

This rings true, and I like the metaphor. However, you seem to imply that the open-source original branch is not as good as the private fork, pushed by a handful of people with a high turnover rate, which could be true but is harder to agree with.

Comment by Tenoke on [deleted post] 2020-01-07T16:43:48.259Z

not a real error, comment, post or karma.

Comment by Tenoke on CFAR Participant Handbook now available to all · 2020-01-07T13:53:37.403Z · LW · GW

I assume that means you print them? Because I find PDFs to be the worst medium compared to MOBI, EPUB or HTML - mainly because I usually read from my phone.

Comment by Tenoke on We run the Center for Applied Rationality, AMA · 2019-12-22T00:29:06.411Z · LW · GW

All you were saying was "That's not the question that was asked, so … no.", so I'm sorry if I had to guess and ask. Not sure what I've missed by 'not focusing'.

I see you've added both an edit after my comment and then this response as well, which is a bit odd.

Comment by Tenoke on We run the Center for Applied Rationality, AMA · 2019-12-21T10:45:53.943Z · LW · GW

> while thinking about rationality / CFAR

> for TEACHING rationality

You are saying those two aren't the same goal?? Even approximately? Isn't CFAR roughly a 'teaching rationality' organization?

Comment by Tenoke on We run the Center for Applied Rationality, AMA · 2019-12-21T10:42:06.475Z · LW · GW

> Meditations on Moloch is top of the list by a factor of perhaps four

Is that post really that much more relevant than everything else for TEACHING rationality? How come?

Comment by Tenoke on Quadratic voting for the 2018 Review · 2019-12-21T10:04:55.921Z · LW · GW

"Current system < OP's system"

Comment by Tenoke on Quadratic voting for the 2018 Review · 2019-12-21T10:01:15.716Z · LW · GW

> I think Tenoke thinks that we are talking about the usual post and comment vote system.

Isn't that what you were going to use initially, or at least the most relevant system here to compare to?

Comment by Tenoke on Quadratic voting for the 2018 Review · 2019-12-21T06:56:08.931Z · LW · GW

Seems better than the current system, which as far as I can tell is just 10 if-statements that someone chose without much reason to think they make sense.

Comment by Tenoke on We run the Center for Applied Rationality, AMA · 2019-12-19T19:20:48.800Z · LW · GW

I know you do follow-ups with most/all CFAR attendees. Do you have any aggregate data from the questionnaires? How much do attendees improve on the outcomes you measure, and on which ones?

Comment by Tenoke on Book Recommendations for social skill development? · 2019-12-14T08:21:09.997Z · LW · GW

Are there that many social skills mentors who take on students for that to be a more realistic course of action than finding books? Wouldn't you need solid social skills to convince one to mentor you in the first place?

Comment by Tenoke on ialdabaoth is banned · 2019-12-13T21:19:09.249Z · LW · GW

I mean, he uses the exact same phrase I do here, but yes, I see your point.

Comment by Tenoke on ialdabaoth is banned · 2019-12-13T17:45:22.873Z · LW · GW

Vaniver is saying that the personal stuff wasn't taken into account when banning him and that the epistemic concerns were enough. From the OP:

> We think that ialdabaoth poses a substantial risk to our epistemic environment due to manipulative epistemic tactics, based on our knowledge and experience of him. This is sufficient reason for the ban, and holds without investigating or making any sort of ruling on other allegations.

but then the epistemic concerns seem to be based purely on material from the "other allegations" part.

And honestly, the quality of that post is (subjectively) higher than the quality of >99% of current LW posts, yet the claim is that content is what he is banned for, which is a bit ridiculous. What I am asking is: why pretend it is the content, and that the "other allegations" played no part?

Comment by Tenoke on ialdabaoth is banned · 2019-12-13T16:54:43.312Z · LW · GW

> it's not like there's anything I can do about it anyway.

It's sad that it's gotten that bad with the current iteration of LW. Users here used to think they had a chance at influencing how things are done, and plenty of things were heavily community-influenced, despite there being a benevolent dictator for life.

Comment by Tenoke on ialdabaoth is banned · 2019-12-13T16:37:20.203Z · LW · GW

He is using this comment to show the 'epistemic concerns' side specifically, and claiming the personal stuff was separate.

This is the specific claim.

> We think that ialdabaoth poses a substantial risk to our epistemic environment due to manipulative epistemic tactics, based on our knowledge and experience of him. This is sufficient reason for the ban, and holds without investigating or making any sort of ruling on other allegations.

Comment by Tenoke on ialdabaoth is banned · 2019-12-13T10:31:22.677Z · LW · GW

> jimrandomh's comment, linked in the OP, is the current best explanation of the epistemic concerns.

Excluding the personal stuff, this comment is just a somewhat standard LW critique of a LW post (which has less karma than the original post, fwiw). If this is the criterion for an 'epistemic concerns' ban, then you must have banned hundreds of people. If you haven't, you are clearly banning him for the other reasons; I don't know why you insist on being dishonest about it.

Comment by Tenoke on ialdabaoth is banned · 2019-12-13T08:54:35.201Z · LW · GW

I read this post where you keep claiming you are banning him for 'epistemic concerns', but then you link to 0 examples and mostly talk about some unrelated real-life thing, which you also give no real explanation for.

The comments here mention a sex crime, but the OP doesn't. If that's what happened, why vaguebook, stay silent for a year, and claim the ban is for 'epistemic concerns'? Who else have you banned for 'epistemic concerns' - nobody?

Honestly, after reading everything here I do have major concerns about ialdabaoth's character, but the main epistemic concerns I have are about the OP presenting this dishonestly after a year of silence.

Comment by Tenoke on Open & Welcome Thread - December 2019 · 2019-12-11T10:17:27.402Z · LW · GW

Is there any explanation of the current karma system? The main thing I can find is this (you need to scroll to 'The karma system'; for some reason you can click on subsections to go to them, but you can't link to them).

Also, why do I see a massive message that says 'habryka's commenting guidelines' when I am writing this comment, but there are no guidelines or link? Is this just a weird extra ad for your own name?

Comment by Tenoke on Is Rationalist Self-Improvement Real? · 2019-12-10T12:07:01.299Z · LW · GW

> Even if they only work in modern society, one of the millions of modern people who wanted financial, social, and romantic success before you would have come up with them.

Nobody is claiming that everything around rationalist circles is completely new or invented by them. To me it's often looked more like separating the more useful stuff from the less useful, with various combinations of bottom-up and top-down approaches.

Additionally, I'd also like to identify as someone who is definitely in a much, much better place now because they discovered LW almost a decade ago. Even though I still struggle with akrasia and do less to improve myself than I'd like, I'm very sure that just going to therapy wouldn't have improved my outcomes in so many areas, especially financially.

Comment by Tenoke on The LessWrong 2018 Review · 2019-11-28T14:43:46.760Z · LW · GW

There are definitely some decent posts, but calling a couple of good posts an official LessWrong Sequence still seems to cheapen what that used to mean.

Not to mention that I read this on Facebook, so I barely associate it with here.

> Note also that you can view this on GreaterWrong.

Thanks, GreaterWrong seems to still be an improvement over the redesign for me. I'm back to using it.

Comment by Tenoke on The LessWrong 2018 Review · 2019-11-27T10:23:23.334Z · LW · GW

I got an email about this, so I decided to check whether the quality of content here has really increased enough to claim there is material for a new Sequence (I stopped coming here after the, in my opinion, botched execution of LW 2.0).

I checked the posts, and I don't see anywhere near enough quality content to publish something called a Sequence without cheapening the previous essays and what 'The Sequences' means in a LessWrong context.

Comment by Tenoke on Open Thread August 2018 · 2018-08-16T18:06:19.130Z · LW · GW

Does the Quantum Physics Sequence hold up?

It's been the better part of a decade since I read it (and I knew a lot less back then), and recently I've been curious about getting a refresher. I am not going to pick up a textbook or spend too much time on this, but if it doesn't hold up, what alternative/supplementary resources would you recommend (the less math-heavy the better, although obviously some of the math is inescapable)?

Comment by Tenoke on Leaving beta: Voting on moving to LessWrong.com · 2018-03-12T16:13:20.032Z · LW · GW

I haven't gotten the voting link (I've now emailed to ask), but I am sadly already pretty negatively surprised at how lesserwrong.com has turned out (props to the maker of greaterwrong, though) and very much hope that it doesn't completely replace LessWrong.com. Even if LessWrong.com is just killed and made read-only (since after all the efforts to migrate people here, it is even more unlikely that the original lesswrong will get any new use), that's a better outcome for me.

I wouldn't even post this, but I hear a lot of other people sharing the same opinion (selection effects apply), though (selection effects again) few of them are here to actually say it.

Comment by Tenoke on HOWTO: Screw Up The LessWrong Survey and Bring Great Shame To Your Family · 2017-10-15T09:21:08.840Z · LW · GW

Yeah, this survey was pretty disappointing - I had to stop myself from making a negative comment after I took it (though someone else had). I am glad you realized it too, I guess. Even things like starting with a bunch of questions about the new LessWrong-inspired site, and the spacing between words, were off, let alone the things you mention.

I am honestly a little sad that someone more competent in matters like these, like gwern, didn't take over (as I always assumed would happen if Yvain gave up on doing it), because half-hearted attempts like this probably hurt a lot more than they help - e.g. someone coming back in 4 months and seeing how we've gone down to only 300 (!) respondents in the annual survey is going to assume LW is even more dead than it really is. This reasoning goes beyond the survey.

Comment by Tenoke on LW2.0 now in public beta (you'll need to reset your password to log in) · 2017-09-24T11:04:36.636Z · LW · GW

So there's no way for us to log in with our regular accounts before the launch? Is it scheduled for anytime soon?

I'd hate to keep seeing all the constant promotion for your site without being able to check it out (since I am not really up for using a temporary account).

Comment by Tenoke on Open thread, August 21 - August 27, 2017 · 2017-08-21T13:55:32.589Z · LW · GW

> The fact that you engage with the article and share it might suggest to the author that he did everything right.

True, but this is one of the less bad articles that have Terminator references (it makes a bit more sense in this specific context), so I mind sharing it less. It's mostly significant insofar as it's the one I saw today that prompted me to make a template email.

> The idea that your email will discourage the author from writing similar articles might be mistaken.

I can see it having no influence on some journalists, but again:

> I am not sure how big the impact will be, but after the email is already drafted sending it to new people is pretty low effort and there's the potential that some journalists will think twice..

..

> Secondly, calling autonomous weapons killer robots isn't far off the mark.

It's still fairly misleading, although a lot less than in AGI discussions.

> The policy question of whether or not to allow autonomous weapons is distinct from AGI.

I am not explicitly talking about AGI either.

Comment by Tenoke on Open thread, August 21 - August 27, 2017 · 2017-08-21T12:03:14.484Z · LW · GW

After reading yet another article which mentions the phrase 'killer robots' 5 times and has a photo of the Terminator (and RoboCop as a bonus), I've drafted a short email asking the author to stop using this vivid but highly misleading metaphor.

I'm going to start sending this same email to other journalists who do the same from now on. I am not sure how big the impact will be, but once the email is already drafted, sending it to new people is pretty low effort, and there's the potential that some journalists will think twice before referencing the Terminator in AI Safety discussions, potentially improving the quality of the discourse a little.

The effect of this might be slightly larger if more people do this.

Comment by Tenoke on Open thread, August 7 - August 13, 2017 · 2017-08-10T00:35:57.394Z · LW · GW

> At the moment that seems to require a human machine learning expert, and recent Google experiments suggest that they are confident they can develop an API that can do this without machine learning experts being involved.

> At a recent LW discussion someone told me that this kind of research doesn't even count as an attempt to develop AGI.

Not in itself, sure, but yeah, there was the bit about the progress made so that you won't need an ML engineer to develop the right net to solve a problem. However, there was also the bit where they have nets doing novel research (e.g. new activation functions with better performance than SOTA, novel architectures, etc.). And for going further in that direction, they just want more compute, which they're going to be getting more and more of.

I mean, if we've reached the point where AI research is a problem tackleable by (narrow) AI, which can further benefit from that research and apply it to make further improvements faster/with more accuracy... then maybe there is something to potentially worry about.

Unless, of course, you think that AGI will be built in such a different way that no or very few DL findings are likely to be applicable. But even then I wouldn't be convinced that this completely separate AGI research won't also be the kind of problem that DL will be able to handle - as AGI research is, in the end, a "narrow" problem.