Posts

How could AIs 'see' each other's source code? 2023-06-02T22:41:20.107Z
[Link] "Improper Nouns" by siderea 2022-09-29T13:28:42.268Z
Wolfram Research v Cook 2022-07-31T13:35:21.925Z
[Link] On the paradox of tolerance in relation to fascism and online content moderation – Unstable Ontology 2022-07-01T16:43:50.739Z
What's the information value of government hearings? 2022-06-18T17:13:48.820Z
The best 'free solo' (rock climbing) video 2022-06-18T15:29:37.915Z
[Link] "The madness of reduced medical diagnostics" by Dynomight 2022-06-16T19:20:36.556Z
Security analysis of 'cloud chemistry labs'? 2022-06-16T16:06:25.989Z
Stephen Wolfram's ideas are under-appreciated 2022-06-07T20:09:37.636Z
[Link] Evidence of Fabricated Data in a Vitamin C trial by Paul E Marik et al in CHEST 2022-04-27T06:48:06.597Z
'Good enough' way to clean an O2 Curve respirator? 2021-09-16T14:51:44.014Z
Is anyone else frustrated with 'un-informative' post titles? 2021-05-28T20:11:20.063Z
What's a good way to test basic machine learning code? 2021-03-11T21:27:15.535Z
Are UFOs just drones? 2021-01-08T20:51:26.068Z
[Link] Faster than Light in Our Model of Physics: Some Preliminary Thoughts—Stephen Wolfram Writings 2020-10-04T20:26:51.611Z
[Link] Where did you get that idea in the first place? | Meaningness 2020-09-25T15:38:00.092Z
Link: Vitamin D Can Likely End the COVID-19 Pandemic - Rootclaim Blog 2020-09-18T17:07:22.953Z
The Peter Attia Drive podcast episode #102: Michael Osterholm, Ph.D.: COVID-19—Lessons learned, challenges ahead, and reasons for optimism and concern 2020-04-04T05:19:38.304Z
"Preparing for a Pandemic: Stage 3: Grow Food if You Can [COVID-19, hort, US, Patreon]" 2020-04-03T17:57:58.826Z
How much do we know about how brains learn? 2020-01-24T14:46:47.185Z
[Link] "Doing being rational: polymerase chain reaction" by David Chapman 2019-12-13T23:54:45.189Z
Link: An exercise: meta-rational phenomena | Meaningness 2019-10-21T16:56:24.443Z
Paper on qualitative types or degrees of knowledge, with examples from medicine? 2019-06-15T00:31:56.912Z
Flagging/reporting spam *posts*? 2018-05-23T16:14:11.515Z

Comments

Comment by Kenny on Linear White · 2024-03-15T00:53:11.571Z · LW · GW

Was this not originally tagged “personal blog”?

I’m not sure what the consensus is on how to vote on these posts, but I’m sad that this post’s poor reception might be why its author deactivated their account.

Comment by Kenny on Feedly Breaks MathML · 2023-09-28T22:02:44.149Z · LW · GW

I just reported this to Feedly.

Comment by Kenny on Lies Told To Children · 2023-06-26T13:29:34.614Z · LW · GW

Thanks for the info! And no worries about the (very) late response – I like that people on this site fairly often reply long after the fact (beyond same-day or within a few days); it makes the discussions feel more 'timeless' to me.

The second "question" wasn't a question, but it was due to not knowing that Conservative Judaism is distinct from Orthodox Judaism. (Sadly, capitalization is only relatively weak evidence of 'proper-nounitude'.)

Comment by Kenny on How could AIs 'see' each other's source code? · 2023-06-05T20:15:28.026Z · LW · GW

Some of my own intuitions about this:

  1. Yes, this would be 'probabilistic', and thus this is an issue of what evidence AIs would share with each other.
  2. Why or how would one system trust another that the state (code+data) shared is honest?
  3. Sandboxing is (currently) imperfect, tho perhaps sufficiently advanced AIs could actually achieve it? (On the other hand, there are security vulnerabilities that exploit the 'computational substrate', e.g. Spectre, so I would guess that would remain as a potential vulnerability even for AIs that designed and built their own substrates.) This also seems like it would only help if the sandboxed version could be 'sped up' and if the AI running the sandboxed AI can 'convince' the sandboxed AI that it's not sandboxed.
  4. The 'prototypical' AI I'm imagining seems like it would be too 'big' and too 'diffuse' (e.g. distributed) for it to be able to share (all of) itself with another AI. Another commenter mentioned an AI 'folding itself up' for sharing, but I can't understand concretely how that would help (or how it would work either).

Comment by Kenny on How could AIs 'see' each other's source code? · 2023-06-05T17:23:11.306Z · LW · GW

I think my question is different, tho that does seem like a promising avenue to investigate – thanks!

Comment by Kenny on How could AIs 'see' each other's source code? · 2023-06-05T15:59:52.526Z · LW · GW

That's an interesting idea!

Comment by Kenny on How could AIs 'see' each other's source code? · 2023-06-05T15:55:49.912Z · LW · GW

An oscilloscope

I guessed that's what you meant but was curious whether I was right!

If the AI isn't willing or able to fold itself up into something that can be run entirely on single, human-inspectable CPU in an airgapped box, running code that is amenable to easily proving things about its behavior, you can just not cooperate with it, or not do whatever else you were planning to do by proving something about it, and just shut it off instead.

Any idea how a 'folded-up' AI would imply anything in particular about the 'expanded' AI?

If an AI 'folded itself up' and provably/probably 'deleted' its 'expanded' form (and all instances of that), as well as any other AIs or not-AI-agents under its control, that does seem like it would be nearly "alignment-complete" (especially relative to our current AIs), even if, e.g. the AI expected to be able to escape that 'confinement'.

But that doesn't seem like it would work as a general procedure for AIs cooperating or even negotiating with each other.

Comment by Kenny on How could AIs 'see' each other's source code? · 2023-06-04T02:18:12.043Z · LW · GW

What source code and what machine code is actually being executed on some particular substrate is an empirical fact about the world, so in general, an AI (or a human) might learn it the way we learn any other fact - by making inferences from observations of the world.

This is a good point.

But I'm trying to develop some detailed intuitions about how this would or could work, in particular what practical difficulties there are and how they could be overcome.

For example, maybe you hook up a debugger or a waveform reader to the AI's CPU to get a memory dump, reverse engineer the running code from the memory dump, and then prove some properties you care about follow inevitably from running the code you reverse engineered.

In general though, this is a pretty hard, unsolved problem - you probably run into a bunch of issues related to embedded agency pretty quickly.

(What do you mean by "waveform reader"?)

Some practical difficulties with your first paragraph:

  1. How can AIs credibly claim that any particular CPU is running their code, or that a debugger connected to it isn't being subverted via, e.g. MITM?
  2. How can AIs credibly claim that, whatever the contents of a 'CPU's' memory are at some point, they won't be replaced at some later point?
  3. How could one AI safely execute code given to it by another (e.g. via "memory dump")?
  4. How could one AI feasibly run another's code 'fast enough' to be able to determine that it could (probably) trust it now (even assuming [1], [2], and [3] are solved)?

[1] points to what I see as a big difficulty, i.e. AIs will probably (or could) be very distributed computing systems, and there might not be any practical way for them to 'fit into a single box' for, e.g. careful inspection by others.
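
To make difficulties [1] and [2] concrete, here's a minimal, purely illustrative sketch in Python (nothing here is from the discussion above; every name is hypothetical) of naive hash-based 'attestation', with comments on why it falls short:

    import hashlib

    def attest_snapshot(snapshot: bytes, claimed_hash: str) -> bool:
        # Naive 'attestation': does a memory snapshot match a claimed code hash?
        # A toy only -- real attestation would need a hardware root of trust
        # (or some AI-world equivalent), not a bare hash comparison.
        return hashlib.sha256(snapshot).hexdigest() == claimed_hash

    # Re difficulty [1]: the snapshot arrives over a channel the other AI
    # controls, so a MITM (or the AI itself) can present a snapshot of 'nice'
    # code while actually running something else.
    # Re difficulty [2]: even an honest snapshot only attests to one instant;
    # nothing stops the memory from being rewritten a moment later (a classic
    # time-of-check/time-of-use problem).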

Comment by Kenny on "Corrigibility at some small length" by dath ilan · 2023-05-05T20:42:04.664Z · LW · GW

This is a nice summary!

fictional role-playing server

As opposed to all of the non-fictional role-playing servers (e.g. this one)?

I don't think most/many (or maybe any) of the stories/posts/threads on the Glowfic site are 'RPG stories', let alone some kind of 'play by forum post' histories; there are just a few that use the same settings as RPGs.

Comment by Kenny on AI #5: Level One Bard · 2023-04-05T16:17:09.874Z · LW · GW

I suspect a lot of people, like myself, learn "content-based writing" by trying to communicate, e.g. in their 'personal life' or at work. I don't think I learned anything significant by writing in my own "higher forms of ['official'] education".

Comment by Kenny on More information about the dangerous capability evaluations we did with GPT-4 and Claude. · 2023-03-29T15:23:33.162Z · LW · GW

I would still like to see political pressure for truly open independent audits, though.

I think that would be a big improvement. I also think ARC is, at least effectively, working on that or towards it.

Comment by Kenny on Abuse in LessWrong and rationalist communities in Bloomberg News · 2023-03-28T04:05:05.898Z · LW · GW

Damning allegations; but I expect this forum to respond with minimization and denial.

This is so spectacularly bad faith that it makes me think the reason you posted this is pretty purely malicious.

Out of all of the LessWrong and 'rationalist' "communities" that have existed, how many are ones for which any of the alleged bad acts occurred? One? Two?

Out of all of the LessWrong users and 'rationalists', how many have been accused of these alleged bad acts? Mostly one or two?

Having observed extremely similar dynamics around, e.g. sexual harassment, in several different online and in-person 'communities', I've found the 'communities' of or affiliated with 'rationality', LessWrong, and EA to be, far and away, the most diligent about actually effectively mitigating, preventing, and (reasonably) punishing bad behavior.

It is really unclear what standards the 'communities' are failing to meet, and that makes me very suspicious that those standards are unreasonable.

Comment by Kenny on Abuse in LessWrong and rationalist communities in Bloomberg News · 2023-03-28T03:34:58.675Z · LW · GW

Please don't pin the actions of others on me!

Comment by Kenny on Abuse in LessWrong and rationalist communities in Bloomberg News · 2023-03-28T03:33:18.114Z · LW · GW

No, it's not, especially given that 'whataboutism' is a label used to dismiss comparisons that don't advance particular arguments.

Writing the words "what about" does not invalidate any and all comparisons.

Comment by Kenny on Abuse in LessWrong and rationalist communities in Bloomberg News · 2023-03-28T03:28:47.217Z · LW · GW

I think some empathy and sympathy is warranted to the users of the site that had nothing to do with any of the alleged harms!

It is pretty tiresome to be accused-by-association. I'm not aware of any significant problems with abuse "in LessWrong". And, from what I can tell, almost all of the alleged abuse happened in one particular 'rationalist community', not all, most, or even many of them.

I'm extremely skeptical that the article or this post were inspired by compassion towards anyone.

Comment by Kenny on Abuse in LessWrong and rationalist communities in Bloomberg News · 2023-03-28T02:16:14.381Z · LW · GW

I think the quoted text is inflammatory and "this forum" (this site) isn't the same as wherever the alleged bad behavior took place.

Is contradicting something you believe to be, essentially, false equivalent to "denial"?

Comment by Kenny on Abuse in LessWrong and rationalist communities in Bloomberg News · 2023-03-28T02:11:19.127Z · LW · GW

It is anomalous that people are quite uninterested in optimizing this as it seems clearly important.

I have the opposite sense. Many people seem very interested in this.

"This community" is a nebulous thing and this site is very different than any of the 'in-person communities'.

But I don't think there's strong evidence that the 'communities' don't already "have much lower than average levels of abuse". I have an impression that, among the very-interested-in-this people, any abuse is too much.

Comment by Kenny on Abuse in LessWrong and rationalist communities in Bloomberg News · 2023-03-28T01:58:07.478Z · LW · GW

What kind of more severe punishment should "the rationalist community" mete out to X and how exactly would/should that work?

Comment by Kenny on More information about the dangerous capability evaluations we did with GPT-4 and Claude. · 2023-03-28T01:46:13.696Z · LW · GW

You seem to be describing something that's so implausible it might as well be impossible.

Given the existing constraints, I think ARC made the right choice.

Comment by Kenny on More information about the dangerous capability evaluations we did with GPT-4 and Claude. · 2023-03-27T19:15:57.645Z · LW · GW

Do you think ARC should have traded publicizing the labs' demands for non-disclosure instead of performing the exercise they did?

I think that would have been a bad trade.

I also don't think there's much value to them whistleblowing about any kind of non-disclosure that the labs might have demanded. I don't get the sense there's any additional bad (or awful) behavior – beyond what's (implicitly) apparent from the detailed info ARC has already publicly released.

I think it's very useful to maintain sufficient incentives for the labs to want to allow things like what ARC did.

... it wouldn't necessarily be unreasonable to shut the "labs" down or expropriate them at right about this point in the story.

Sure, tho I'd be much more terrified were they expropriated!

Comment by Kenny on More information about the dangerous capability evaluations we did with GPT-4 and Claude. · 2023-03-24T18:05:37.529Z · LW · GW

Wouldn't it be better to accept contractual bindings and then at least have the opportunity to whistleblow (even if that means accepting the legal consequences)?

Or do you think that they have some kind of leverage by which the labs would agree to NOT contractually bind them? I'd expect the labs to just not allow them to evaluate the model at all were ARC to insist on or demand this.

Comment by Kenny on Bing Chat is blatantly, aggressively misaligned · 2023-02-21T21:38:39.131Z · LW · GW

I'm definitely not against reading your (and anyone else's) blog posts, but it would be friendlier to at least outline or excerpt some of the post here too.

Comment by Kenny on Bing Chat is blatantly, aggressively misaligned · 2023-02-21T21:35:19.208Z · LW · GW

It looks like you didn't (and maybe can't) enter the ASCII art in the form Bing needs to "decode" it? For one, I'd expect line breaks, both after and before the code block tags and also between each 'line' of the art.

If you can, try entering new lines with <kbd>Shift</kbd>+<kbd>Enter</kbd>. That should allow new lines without being interpreted as 'send message'.

Comment by Kenny on [linkpost] Better Without AI · 2023-02-21T18:31:34.821Z · LW · GW

I really like David's writing generally but this 'book' is particularly strong (and pertinent to us here on this site).

The second section, What is the Scary kind of AI?, is a very interesting and (I think) useful alternative perspective on the risks that 'AI safety' does and (arguably) should focus on, e.g. "diverse forms of agency".

The first chapter of the third ('scenarios') section, At war with the machines, provides a (more) compelling version of a somewhat common argument, i.e. 'AI is (already) out to get us'.

The second detailed scenario, in the third chapter, triggered my 'absurdity heuristic' hard. The following chapter points out that absurdity was deliberate – bravo David!

The rest of the book is a surprisingly comprehensive synthesis of a lot of insights from LW, the greater 'Modern Rationalist' sphere, and David's own works (much of which is very much related to and pertinent to the other two sets of insights). I am not 'fully sold' on 'Mooglebook AI doom', but I have definitely updated fairly strongly towards it.

Comment by Kenny on Movie Review: Megan · 2023-02-08T22:44:50.380Z · LW · GW

This seems like the right trope:

Comment by Kenny on Quantum Suicide, Decision Theory, and The Multiverse · 2023-02-08T20:32:20.332Z · LW · GW

That's why I used a fatal scenario, because it very obviously cuts all future utility to zero

I don't understand why you think a decision resulting in some person's or agent's death "cuts all future utility to zero". Why do you think choosing one's death is always a mistake?

Comment by Kenny on Amazon closing AmazonSmile to focus its philanthropic giving to programs with greater impact · 2023-02-07T18:56:36.637Z · LW · GW

I think I'd opt to quote the original title in a post here to indicate that it's not a 'claim' being made (by me).

Comment by Kenny on Iron deficiencies are very bad and you should treat them · 2023-01-28T18:23:56.395Z · LW · GW

IIRC, RDIs (and I would guess EARs) vary quite significantly among the various organizations that calculate/estimate/publish them. That might be related to the point ChristianKl seemed to be trying to make. (Tho I don't know whether 'iron' is one of the nutrients for which this is, or was, the case.)

Comment by Kenny on ChatGPT: First Impressions · 2022-12-05T02:08:35.007Z · LW · GW

I can't tell which parts are ChatGPT's output and which are your prompts or commentary.

Comment by Kenny on [Link] "Improper Nouns" by siderea · 2022-11-14T00:08:24.264Z · LW · GW

I don't think 'chronic fatigue syndrome' is a great example of what the post discusses because 'syndrome' is already a clearly technical (e.g. medical) word. Similarly, 'myalgic encephalitis' is (for most listeners or readers) not a phrase made up of common English words. Both examples read much more clearly as medical or technical terms. 'chronic fatigue' would be a better example (if it were widely used) as it would conflate the unexplained medical condition with anything else that might have the same effects (like 'chronic overexertion').

Comment by Kenny on [deleted post] 2022-11-12T19:57:40.768Z

The only benefit of public schools anymore, from what I can tell, is that very wise and patient parents can use it to support their children in mastering Defense Against the Dark Arts.

Well, that and getting to play with other kids. Which is still pretty cool.

This may be, perhaps, an under-appreciated function of (public) schooling!

Comment by Kenny on What it's like to dissect a cadaver · 2022-11-10T21:04:55.498Z · LW · GW

I would think the title is itself a content warning.

I guess someone might think this post is or could be far more abstract and less detailed about the visceral realities than it is (or maybe even just using the topic as a metaphor at most).

What kind of specific content warning do you think would be appropriate? Maybe "Describes the dissection of human bodies in vivid concrete terms."?

Comment by Kenny on Dath Ilan's Views on Stopgap Corrigibility · 2022-10-02T14:54:44.345Z · LW · GW

I was going to share it with you if you didn't have it, but thanks!

Comment by Kenny on Dath Ilan's Views on Stopgap Corrigibility · 2022-10-01T04:09:11.975Z · LW · GW

Has anyone shared the link with you yet?

Comment by Kenny on [deleted post] 2022-09-24T18:17:11.256Z

After a long day of work, you can kick back with projectlawful for a few hours, and then go to sleep. You can read projectlawful on the weekend. You can read projectlawful on vacation. It's rest and rejuvenation and recharging ...

I did NOT find this to be the case – I found it way TOO engaging, such that it, e.g. actively disrupted my ability to go to sleep. I also found the story to be extremely upsetting, i.e. NOT restful or rejuvenating. As of now, it's extremely bleak.

I very much DO like it and I am perfectly happy that it's a glowfic. (There are some mildly confusing parts, probably because of it being a glowfic, but nothing too bad.)

This is also the first story for which I viscerally felt the utility of providing a 'trigger warning'.

I don't know what in particular makes you think the story is useful as "policy experience". I'm skeptical that much 'real world' policy in any way resembles the story.

Comment by Kenny on Appendix: Jargon Dictionary · 2022-09-13T12:51:01.752Z · LW · GW

I thought it might be a reference to this:

Comment by Kenny on Againstness · 2022-09-01T21:37:29.028Z · LW · GW

I think 'againstness' is nearly perfect :)

I didn't think anything was confusing!

'Againstness' felt like a nearly self-defining word to me.

Your course had a rough/sketched/outlined model based on other models at various levels, and there are a few example techniques based on it (in the course).

"againstness control" is totally sensible – just like, e.g. 'againstness management' and 'againstness practice', are too.

I think there's an implied (and intriguing) element of using SNS arousal/dominance for, e.g. motivation. I think there are some times or circumstances in which 'SNS flow' is effective, e.g. competitive sports.

I don't think your example techniques need a special term. I understood this post as 'gesturing' at something more than presenting anything comprehensive.

Comment by Kenny on Wolfram Research v Cook · 2022-08-01T21:54:08.856Z · LW · GW

I think I'm missing a LOT of context you have about this. I very well could be – probably am – missing some point, but I also feel like you're discouraging me from voicing anything that doesn't assume whatever your point is. Is it just that "Stephen Wolfram is bad and everyone should ignore him."? I honestly tried to investigate this, however poorly I might have done that, but this comment comes across as pretty hostile. Is it your intention to dissuade me from writing about this at all?

They bring it up because it is a shocking violation of norms, even commercial ones because zero money was at stake, for a rich entrepreneur like Wolfram to so punitively use the court system to go after an employee like that for something so utterly academic & a truly microscopic detail of CS theory where he knows full well that it will cost them a ton of money to fight it even for just a settlement and where the lawsuit could accomplish no good (in the tiny world of CAs, surely most people interested heard from the conference or were there). This was not "a literary convention"; this was cruel and vindictive.

I don't know what norms you're referring to!

But, AFAIK, some commercial norms do in fact condone or allow for aggressive enforcement of NDAs, even where there is (obviously) "zero money at stake". I also guessed that the point of the lawsuit was maybe (partially) about 'enforcing NDAs generally' too, which is also one way it could 'accomplish good' even tho it didn't prevent dissemination of the info in the paper itself.

If a member of any other research team, in defiance of their team leader ('lead/principal investigator'), published a paper describing work done on a team project, is there no recourse or punishment expected?

Do researchers have a blanket right to publish at any time, on their own, about work they've done as an employee? That's surprising if true! I wouldn't think that's true even in purely academic organizations.

How do you know that Wolfram's 'original plan' wasn't to allow Cook to publish the paper of his proof after the release of NKS? Is there some reason why making Cook wait was unconscionable? I'm sincerely curious! What's the big deal? Why was Cook waiting so bad that of course he should have been able to publish his paper whenever he wanted? Or was making him wait a year fine, but three (or more) not?

Why didn't Cook just retract his paper and settle with Wolfram immediately? I couldn't find any relevant details about the suit and it appeared to me that the records were sealed, so I'd guess only the people involved could answer.

I don't understand your theory of Cook. Why did he publish the paper when apparently Wolfram, his employer, didn't want him to do that? Why didn't he just retract or withdraw his paper? Why was he in the right in all of this? (Was he in the right about this?) Should Wolfram have just ignored Cook publishing the paper? Why shouldn't he have asserted his legal right to control when the paper was published? What's different about this specific paper than others, in both academia and commercial research? What am I missing?

I am very sympathetic to the lawsuit being "cruel and vindictive"! But I'm also sympathetic to being "cruel and vindictive". If a (hypothetical) employee of mine openly defied my orders, e.g. released info about work I'd paid them to do and had told them not to publish – I'd be pissed! If they had signed a legal agreement to NOT do that kind of thing too, I suspect I'd think of suing them myself. That the project was a non-fiction book doesn't seem like it'd make this that much different. If nothing else, Cook deprived Wolfram of being able to market NKS as announcing that particular result.

As for the citation/credit generally, NKS is a large book and, AFAICT, it was NOT intended to be an 'academic' work. Had it been written in that 'style', I'd guess it would have been WAY too big to have been published at all.

Is there in fact a reasonable way that Wolfram could have cited others' works, and described the contributions to the book itself, that would work not only for the work done by Matthew Cook, but for everyone else involved?

I still don't know very many details of all the contributions to just the Rule 110 proof paper!

(I don't even know what academic paper authorship means. AFAIK, it is somewhat specific to the field.)

I am happy to admit that it's certainly very likely that Wolfram failed to properly cite others' work, or fairly credit their contributions. I am very sure that he does in fact have a reputation for that.

He nevertheless wrote NKS as it is: you can almost hear the teeth being pulled as he grudgingly mentions Cook at all instead of what was obviously the original plan (done with all the others) of just a brief mention of unspecified work, sort of like when a PI mentions a grad student in the acknowledgment section for 'coding' or 'experiments', using omission to take all the credit. NDA or no, that is not kosher citation.

I can't "almost hear the teeth being pulled" but I'm even more confused – is Wolfram's own citations 'merely' as bad as that done by many other PIs? I can't tell what standard you're judging him by, or how well I should expect him to fare by it, especially given his reputation for doing poorly in this area.

What does the distribution of how well others do in this regard look like? Are, e.g. 95% of researchers, or authors of 'pop science' books, basically 'perfect'? Is Wolfram uniquely terrible?

I also can't see what you mean by Wolfram "jamming himself in" in any of the quotes you provided.

I also don't know that Cook didn't do other "technical content" besides the Rule 110 proof.

I thought Wolfram was clear about NKS being a 'team effort', so I still think it's (somewhat) reasonable that a much longer and more detailed description of more precise contributions was omitted.

Comment by Kenny on Stephen Wolfram's ideas are under-appreciated · 2022-07-31T13:39:31.280Z · LW · GW

I now think it is plausible that Wolfram sued "over literary conventions":

I suspect that Wolfram just wanted to reveal the relevant proof himself, first, in his book NKS (A New Kind of Science), and that Matthew Cook probably was contractually obligated to allow Wolfram to do that.

That the two parties settled, and that Cook published his paper about his proof in Wolfram's own journal (Complex Systems) two years after NKS was published, seems to mostly confirm my suspicions.

Comment by Kenny on It’s Probably Not Lithium · 2022-07-26T19:53:12.909Z · LW · GW

The 'components' of our diet, e.g. meat, potatoes, etc., are very different now than they were earlier, and they changed more over the last 100 years than over prior periods too.

I suspect, tho, that people who are eating diets like this, e.g. the Amish, are much less obese.

Comment by Kenny on Dath Ilani Rule of Law · 2022-07-11T05:13:33.150Z · LW · GW

I've weirdly been less and less bothered since my previous comment! :)

I think "planecrash" is a better overall title still, so thanks for renaming all of the links.

Comment by Kenny on It’s Probably Not Lithium · 2022-07-02T15:48:21.088Z · LW · GW

Huh – I wonder if this has helped me since I made a concerted effort to eat leafy greens regularly (basically every day).

I always liked the 'fact' that celery has net-negative calories :)

I do also lean towards eating fruit raw versus, e.g. blended in a smoothie. Make-work for my gastrointestinal system!

Comment by Kenny on It’s Probably Not Lithium · 2022-07-02T15:46:34.227Z · LW · GW

I think you're making an unsupported inferential leap in concluding "they seem oddly uninterested in ...".

I would not expect to know why they haven't responded to my comments, even if I did bring up a good point – as you definitely have.

I don't know, e.g. what their plans are, whether they even are the kind of blogger that edits posts versus writing new follow-up posts instead, how much free time they have, whether they interpreted a comment as being hostile and thus haven't replied, etc.

You make good points. But I would be scared if you 'came after me' as you seem to be doing to the SMTM authors!

Comment by Kenny on It’s Probably Not Lithium · 2022-07-02T15:43:38.821Z · LW · GW

It just seems to me that the SMTM authors are doing a very bad job at actually pursuing the truth

I think – personally – you're holding them to an unrealistically high standard!

When I compare SMTM to the/a modal person or even a modal 'rationalist', I think they're doing a fantastic job.

Please consider being at least a little more charitable and, e.g. 'leaving people a line of retreat'.

We want to encourage each other to be better, NOT discourage people from trying at all! :)

Comment by Kenny on It’s Probably Not Lithium · 2022-07-02T15:40:48.618Z · LW · GW

I was, and still am (tho much less), excited about the contamination theory – it'd be much easier to fix!

But I think I'm back to thinking basically along the lines you outlined.

I'm currently losing weight and my model of why is:

  • I'm less stressed, and less depressed, than I was recently, and I've been better able to stop eating when I'm satiated.
  • I'm exercising regularly and intensely; mainly rock climbing and walking (with lots of decent elevation changes). It being sunnier and warmer with spring/summer has made this much more appealing.
  • I'm maybe (hypo)manic (or 'in that direction', i.e. 'hypohypomanic'; or maybe even 'euthymic'). I'm guessing recent sunlight-in-my-home changes triggered this (as well as the big recent drop in stress/depression).

I would love to see a study of weight gain in modern hunter-gatherer people that provides an experimental group with 'very palatable' food. I think I would be willing to bet that they would gain some weight.

I do also suspect that hunter-gatherers engage in a LOT of fairly strenuous physical activity. Walking – and living in a dense urban walkable city (in my case NYC) – does seem like maybe one of the most feasible ways to try to match that much higher level of overall physical activity. (Rock climbing is also pretty strenuous!)

Comment by Kenny on It’s Probably Not Lithium · 2022-07-02T15:31:49.893Z · LW · GW

I also thought it was (plausibly) a 'friendly challenge' – we should be willing to bet on our beliefs!

And we should be willing to bet and also trust each other to not defect from our common good.

Comment by Kenny on It’s Probably Not Lithium · 2022-07-02T15:30:10.334Z · LW · GW

The challenge did specify [emphasis mine]:

up to $1000

Comment by Kenny on It’s Probably Not Lithium · 2022-07-02T15:29:04.284Z · LW · GW

I think they're a proponent of the 'too palatable food' theory.

Comment by Kenny on It’s Probably Not Lithium · 2022-07-02T15:28:27.585Z · LW · GW

Thanks!

I've definitely downgraded the (lithium) contamination theory. I'll still take a (very modest) 100:1 bet on it tho :)

In regard to your (implied) criticism that SMTM's blog post(s) haven't been edited, it occurred to me that they may not be an 'edit blog posts' person. That seems related to their offered reasons for refusing the bet challenge, i.e. 'we're in hypothesis exploration mode'. They might similarly be intending to write a follow-up blog post instead of editing the existing one.

(I actually prefer 'new post' versus 'edit existing post' as a blogging/writing policy – if there isn't a very nice (e.g. 'GitHub-like') diff visualization of the edit history available.)

Comment by Kenny on Finance - accessibility (asking for textbook recommendations) · 2022-06-23T15:42:11.944Z · LW · GW

I've been asking various people basically this same question and am still looking for (more) concrete recommendations. (Several people basically answered 'work for a finance/investment/trading company', which is ... not ideal!)

I'm tentatively planning on doing a more intense search for this soonish and I'll comment here, or supply an answer, if I find anything that seems promising.