plex's Shortform

post by plex (ete) · 2020-11-22T19:42:38.852Z · LW · GW · 18 comments



comment by plex (ete) · 2023-10-29T23:25:31.743Z · LW(p) · GW(p)

Life is Nanomachines

In every leaf of every tree
If you could look, if you could see
You would observe machinery
Unparalleled intricacy
 
In every bird and flower and bee
Twisting, churning, biochemistry
Sustains all life, including we
Who watch this dance, and know this key

Illustration: A magnified view of a vibrant green leaf, where molecular structures and biological nanomachines are visible. Hovering nearby, a bird's feathers reveal intricate molecular patterns. A bee is seen up close, its body showcasing complex biochemistry processes in the form of molecular chains and atomic structures. Nearby, a flower's petals and stem reveal the dance of biological nanomachines at work. Human silhouettes in the background observe with fascination, holding models of molecules and atoms.

Replies from: niplav
comment by niplav · 2023-10-30T11:57:29.313Z · LW(p) · GW(p)

Related: Video of the death of a single-celled Blepharisma.

comment by plex (ete) · 2024-07-09T13:20:18.644Z · LW(p) · GW(p)

Rationalists try to be well calibrated and have good world models, so we should be great at prediction markets, right?

Alas, it looks bad at first glance:

I've got a hopeful guess at why people referred from core rationalist sources seem to be losing so many bets, based on my own scores. My Manifold score looks pretty bad (-M192 overall profit), but there's a fun reason for it. 100% of my resolved bets are either positive or neutral, while all but one of my unresolved bets are negative or neutral.

Here's my full prediction record:

The vast majority of my losses are on things that don't resolve soon and are widely thought to be unlikely (plus a few tiny, not particularly well thought out bets, like dropping M15 on LK-99), and I'm for sure losing points there. But my actual track record, cashed out in resolutions, tells a very different story.

I wonder if there are some clever stats that @James Grugett [LW · GW] @Austin Chen [LW · GW] or others on the team could do to disentangle these effects, and see what the quality-adjusted bets on critical questions like the AI doom ones would look like absent this kind of effect. I'd be excited to see the UI show an extra column on the referrers table with cashed-out predictions only, rather than raw profit. Or, more generally, emphasise cashed-out predictions in the UI more heavily, to mitigate the Keynesian-beauty-contest-style effects of trying to predict distant events.
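To illustrate the split I have in mind, here's a minimal sketch in Python. The field names are invented for illustration and are not Manifold's actual data model or API; it just separates realized profit on resolved markets from mark-to-market profit on open ones.

```python
# Hypothetical sketch, not Manifold's real data model: field names are invented.
from dataclasses import dataclass

@dataclass
class Bet:
    amount: float         # mana spent on the position
    current_value: float  # payout if resolved, otherwise mark-to-market value
    resolved: bool        # has the market resolved?

def profit_split(bets: list[Bet]) -> tuple[float, float]:
    """Return (realized profit on resolved markets, unrealized profit on open ones)."""
    realized = sum(b.current_value - b.amount for b in bets if b.resolved)
    unrealized = sum(b.current_value - b.amount for b in bets if not b.resolved)
    return realized, unrealized

# Toy example: two resolved wins, one open long-horizon bet currently under water.
bets = [
    Bet(amount=50, current_value=80, resolved=True),
    Bet(amount=20, current_value=35, resolved=True),
    Bet(amount=100, current_value=40, resolved=False),
]
print(profit_split(bets))  # (45, -60): positive resolved track record, negative raw profit
```

On these toy numbers the raw profit column would show -15, even though every resolved bet was a win; that's the effect a cashed-out-only column would surface.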

Replies from: habryka4, D0TheMath
comment by habryka (habryka4) · 2024-07-09T18:06:39.739Z · LW(p) · GW(p)

These datapoints just feel like the result of random fluctuations. Both Writer and Eliezer mostly drove people to participate in the LK-99 stuff, where lots of people were confidently wrong. In general, you can see that basically all the top referrers have negative income:

Among the top 10, Eliezer and Writer are somewhat better than the average (and yaboi is a huge outlier, which I'd guess is explained by them doing something quite different from the other people). 

Replies from: ete
comment by plex (ete) · 2024-07-10T17:31:00.047Z · LW(p) · GW(p)

Agree, expanding to the top 9[1] makes it clear they're not unusual in having large negative referral totals. I'd still expect Ratia to be doing better than this, and would guess a bunch of that comes from betting against common positions on doom markets, simulation markets, and other things which won't resolve anytime soon (and betting at times when the prices are not too good, because of correlations in when that group is paying attention).

  1. ^

    Though the rest of the leaderboard seems to be doing much better

comment by Garrett Baker (D0TheMath) · 2024-07-09T17:39:41.657Z · LW(p) · GW(p)

The vast majority of my losses are on things that don't resolve soon

The interest rate on Manifold makes such investments not worth it anyway, even if everyone else's positions look unreasonable to you.
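As a rough illustration (toy arithmetic only, not Manifold's actual loan or interest mechanics), the same edge is worth far less per year when resolution is a decade away:

```python
# Toy arithmetic only; this is not Manifold's actual loan/interest mechanism.
def annualized_return(price: float, true_prob: float, years: float) -> float:
    """Expected annualized return from buying YES at `price` when your probability is `true_prob`."""
    expected_multiple = true_prob / price   # expected payout per unit of mana spent
    return expected_multiple ** (1 / years) - 1

# A 10-point edge on a market resolving this year vs. one resolving in a decade:
print(annualized_return(price=0.50, true_prob=0.60, years=1))   # 0.20  -> 20% per year
print(annualized_return(price=0.50, true_prob=0.60, years=10))  # ~0.018 -> ~1.8% per year
```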

comment by plex (ete) · 2021-01-07T23:46:47.100Z · LW(p) · GW(p)

A couple of months ago I did some research into the impact of quantum computing on cryptocurrencies; it seems potentially significant, and a decent number of LWers hold cryptocurrency. I'm not sure if this is the kind of content that's wanted, but I could write up a post on it.

Replies from: ChristianKl
comment by ChristianKl · 2021-01-08T12:05:50.172Z · LW(p) · GW(p)

Write-ups of good research are generally welcome on LessWrong.

comment by plex (ete) · 2024-10-31T12:41:35.224Z · LW(p) · GW(p)

Titles of posts I'm considering writing in comments, upvote ones you'd like to see written.

Replies from: ete, ete, ete, ete, gwern, ete
comment by plex (ete) · 2024-10-31T12:43:17.880Z · LW(p) · GW(p)

Why SWE automation means you should probably sell most of your cryptocurrency

comment by plex (ete) · 2024-10-31T12:54:16.968Z · LW(p) · GW(p)

An opinionated tour of the AI safety funding landscape

comment by plex (ete) · 2024-10-31T12:56:10.524Z · LW(p) · GW(p)

Grantmaking models and bottlenecks

comment by plex (ete) · 2024-10-31T13:03:33.601Z · LW(p) · GW(p)

Ayahuasca: Informed consent on brain rewriting

(based on anecdotes and general models, not personal experience)

comment by gwern · 2024-10-31T15:18:55.382Z · LW(p) · GW(p)

Can't you do this as polls in a single comment?

Replies from: ete
comment by plex (ete) · 2024-10-31T16:37:28.635Z · LW(p) · GW(p)

LW supports polls? I'm not seeing it in https://www.lesswrong.com/tag/guide-to-the-lesswrong-editor, [? · GW] unless you mean embedding a Manifold market, which would work but adds an extra step between people and voting unless they're already registered on Manifold.

comment by plex (ete) · 2024-10-31T12:44:26.283Z · LW(p) · GW(p)

Mesa-Hires - Making AI Safety funding stretch much further

comment by plex (ete) · 2020-11-22T19:42:39.283Z · LW(p) · GW(p)

Thinking about some things I may write. If any of them sound interesting to you, let me know and I'll probably be much more motivated to create it. If you're up for reviewing drafts and/or having a video call to test ideas, that would be even better.

  • Memetics mini-sequence (https://www.lesswrong.com/tag/memetics [? · GW] has a few good things, but no introduction to what seems like a very useful set of concepts for world-modelling)
    • Book Review: The Meme Machine (focused on general principles and memetic pressures toward altruism)
    • Meme-Gene and Meme-Meme co-evolution (focused on memeplexes and memetic defences/filters, could be just a part of the first post if both end up shortish)
    • The Memetic Tower of Generate and Test (a set of ideas about the specific memetic processes not present in genetic evolution, inspired by Dennett's tower of generate and test)
  • (?) Moloch in the Memes (even if we have an omnibenevolent AI god looking out for sentient well-being, things may get weird in the long term as ideas become more competent replicators/persisters, if the overseer is focusing on the good and freedom of individuals rather than of memes. I probably won't actually write this, because I don't have much more than some handwavey ideas about a situation that is really hard to think about.)
  • Unusual Artefacts of Communication (some rare and possibly good ways of sharing ideas, e.g. Conversation Menu, CYOA, Arbital Paths, call for ideas. Maybe best as a question with a few pre-written answers?)

Replies from: cSkeleton
comment by cSkeleton · 2023-04-15T18:41:38.841Z · LW(p) · GW(p)

Hi, did you ever go anywhere with Conversation Menu? I'm thinking of doing something like this related to AI risk, to try to quickly get people to the arguments around their initial reaction. If helping with something like this is the kind of thing you had in mind with Conversation Menu, I'm interested to hear any more thoughts you have around it. (Note: I'm thinking of fading in buttons rather than a typical menu.) Thanks!