Posts

papetoast's Shortforms 2023-01-20T01:56:32.921Z

Comments

Comment by papetoast on LessWrong's (first) album: I Have Been A Good Bing · 2024-04-21T13:23:00.939Z · LW · GW

GitHub is known to occasionally be unreachable in China. In general, I think people on LW should be able to figure out VPNs.

Comment by papetoast on Open Thread Spring 2024 · 2024-04-21T08:08:41.697Z · LW · GW

I just stumbled on this website: https://notes.andymatuschak.org/About_these_notes It has a similar UI, but for Obsidian-like linked notes. The UI seems pretty good.

Comment by papetoast on Absence of Evidence Is Evidence of Absence · 2024-04-08T04:19:24.115Z · LW · GW

Sorry, I don't feel like completely understanding your POV is worth the time, but I did read your reply 2-3 times. My responses below are in roughly the same order as your writing.

Yes, so if you observe no sabotage, then you do update about the existence of a fifth column that would have, with some probability, sabotaged (an infinite possibility). But you don't update about the existence of the fifth column that doesn't sabotage, or wouldn't have sabotaged YET, which are also infinite possibilities. 

I'm not sure why infinity matters here; many things have infinitely many possibilities (like any continuous random variable), and you can still put a rough estimate on the probability distribution.
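
(As a minimal sketch of the kind of rough estimate I mean, with made-up numbers rather than anything from the actual fifth-column case:)

```python
# Toy numbers, not from the original discussion: a rough estimate of
# P(fifth column exists | no sabotage observed) via Bayes' rule.
prior = 0.3                 # hypothetical P(fifth column exists)
p_sabotage_given_fc = 0.6   # hypothetical P(sabotage by now | it exists)

p_no_sabotage = prior * (1 - p_sabotage_given_fc) + (1 - prior) * 1.0
posterior = prior * (1 - p_sabotage_given_fc) / p_no_sabotage
print(round(posterior, 3))  # 0.146 < 0.3: no sabotage is (weak) evidence of absence
```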

I guess it's a general failure of Bayesian reasoning. You can't update 1 confidence beliefs, you can't update 0 confidence beliefs, and you can't update undefined beliefs.

I think this is similar to the infinite-recursion argument of "where do priors come from?" But Bayesian updates usually produce a better estimate than your prior (and always better if you could do perfect updates, though that's impossible), and you can use many methods to guesstimate a prior distribution.

You have a pretty good model about what might cause the sun to rise tomorrow, but no idea, complete uncertainty (not 0 with certainty nor 1 with certainty, nor 50/50 uncertainty, just completely undefined certainty) about what would make the sun NOT rise tomorrow, so you can't (rationally) Bayesian reason about it. You can bet on it, but you can't rationally believe about it.

Unknown unknowns are indeed a thing. You can't reason about them in a completely rational Bayesian way, but that doesn't mean you can't try. Eliezer didn't say you can become a perfect Bayesian reasoner either; he always said you can attempt to reason better and strive to approach Bayesian reasoning.

Comment by papetoast on Neil Warren's Shortform · 2024-04-06T02:32:25.429Z · LW · GW

Relatedly: in-line private feedback. I saw a really good design for flagging typos here.

Comment by papetoast on 2023 Survey Results · 2024-03-29T04:47:14.182Z · LW · GW

To the four people who picked 37 and thought there was a 5% chance other people would also choose it, well played. 

Wow, that really is a replicable phenomenon.

Comment by papetoast on adamzerner's Shortform · 2024-03-21T03:41:42.526Z · LW · GW

Threads are pretty good; most help channels should probably be a forum (or one forum plus one channel). Discord threads do have the significant drawback of lowering visibility a lot, and people don't like to write things that nobody ever sees.

[Image: Discord forum]

Comment by papetoast on Open Thread – Winter 2023/2024 · 2024-01-01T16:03:29.177Z · LW · GW

I didn't read either link, but you can write whatever you want on LessWrong! Most posts you see are very high quality, but that's because there is a distinction between frontpage posts (promoted by mods) and personal blogposts (the default). See Site Guide: Personal Blogposts vs Frontpage Posts.

And yes some people do publish blogposts on LessWrong, jefftk being one that I follow.

FAQ: What can I post on LessWrong?

Posts on practically any topic are welcomed on LessWrong. I (and others on the team) feel it is important that members are able to “bring their entire selves” to LessWrong and are able to share all their thoughts, ideas, and experiences without fearing whether they are “on topic” for LessWrong. Rationality is not restricted to only specific domains of one’s life and neither should LessWrong be. [...]

Comment by papetoast on Forcing yourself to keep your identity small is self-harm · 2023-12-12T04:56:15.167Z · LW · GW

I tend to think of "keep my identity small" as "keep my attachments to identity dimensions weak". 

Very much agree.

Comment by papetoast on Lech Mazur's Shortform · 2023-12-12T04:53:37.329Z · LW · GW

Two suggestions:

  1. Duplicate this to the open thread to increase visibility
  2. I don't know your exact implementation for forming the ranked list, but I worry that if you (for example) simply sort from low likelihood to high likelihood, it encourages people to only submit very low probability predictions.

Comment by papetoast on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-10T03:18:34.179Z · LW · GW

Some possible solutions:

1. This is my problem and I should find a way to stop feeling ugh

2. Have some ways to easily read a summary of long comments (AI or author generated)

3. People should write shorter comments on average

Comment by papetoast on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-10T03:15:49.375Z · LW · GW

I often have an ugh feeling towards reading long comments.

Posts are usually well written, but long comments are usually rambly, even the highest karma ones. It takes a lot of effort to read the comments on top of reading the post, and the payoff is often small.

But for multiple reasons, I still feel an obligation to read at least some comments, and ugh.

Comment by papetoast on adamzerner's Shortform · 2023-12-09T09:06:38.120Z · LW · GW

You'd need perhaps 100, maybe even 1,000 times more arguments to get a perfectly open-minded and Bayesian agent to start from the point where the other person started and end up agreeing with you.

Modelling humans as Bayesian agents seems wrong.

For humans, I think the problem usually isn't the number of arguments or the number of angles from which you attack the problem, but whether you have hit on that person's few significant cruxes. This is especially true because humans are quite far from perfect Bayesians. For relatively small disagreements (i.e. not at the scale of convincing a Christian that God doesn't exist), people usually just have a few wrong assumptions or cached thoughts. If you can accurately hit those cruxes, then you can convince them. It is very, very hard to know which arguments will hit those cruxes, though, which is why one of the viable strategies is to keep throwing arguments until one of them works.

(Also, unlike convincing Bayesian agents, where you can argue for W->X, X->Y, and Y->Z in any order, with humans you sometimes need to argue things in the correct order.)

Comment by papetoast on adamzerner's Shortform · 2023-12-07T14:04:10.916Z · LW · GW

I found LW's font size to be a little bit small, but I have managed to get used to it. After reading your message I think I will try going to 110%, thanks. (170% is too large; I feel like I'm reading on my phone in landscape.)

Comment by papetoast on View and bet in Manifold prediction markets on Lesswrong · 2023-12-05T03:38:17.205Z · LW · GW

I cannot bet in the previewed multi-binary markets (also, clicking "See 2 more answers" doesn't work).

And then the popup closes itself and I am not logged in.

https://www.lesswrong.com/posts/2qQyKpXzbPov2Fmdr/using-prediction-platforms-to-select-quantified-self#Self_Blinded_Experiments 

Comment by papetoast on Open Thread – Winter 2023/2024 · 2023-12-05T03:30:55.736Z · LW · GW

Why isn't this kinda cool and worth participating in?

I wrote two comments about why people don't read your post, but as I was betting I realized two more problems with the markets:

  1. (Not your fault) The Manifold betting integration kind of sucks. Clicking "See 2 more answers" does nothing, and the options are ordered by percentage.
  2. There isn't enough liquidity in your markets. It makes betting difficult because even M5 increments move the probability too much. idk, maybe buy some mana to subsidize your markets? It would also make people who see your market from Manifold more interested in betting, as they will have more to gain from the prediction.

Comment by papetoast on Open Thread – Winter 2023/2024 · 2023-12-05T03:19:06.772Z · LW · GW

The title doesn't set a good expectation of the contents. If I am a person interested in "Please Bet On My Quantified Self Decision Markets", I want to bet. I won't expect to (and shouldn't be expected to) read all your lengthy experimental details. It took a while for me to find the markets.

Comment by papetoast on Open Thread – Winter 2023/2024 · 2023-12-05T01:19:35.399Z · LW · GW

  • The actual quote was also so long that I would have stopped reading if I hadn't been trying to analyse your post.
  • The quote is also out of context, in that I was very confused about what the author was trying to say from the first paragraph alone. Because I was skimming, I didn't really understand the quote until the market section.

Fortunately, there’s a good (and well-known) alternative [alternative to what?], which is to randomize decisions sometimes, at random [yeah, that makes sense, but how does randomization relate to prediction markets?]. You tell people: "I will roll a 20-sided die. If it comes up 1-19, everyone gets their money back and I do what I want [what is "I do what I want"?]. If it comes up 20, the bets activate and I decide what to do using a coinflip." [ok, so this is about a bet, but then why a coin flip??]

Comment by papetoast on Open Thread – Autumn 2023 · 2023-12-04T04:07:39.752Z · LW · GW

@Raemon Is this intentionally unpinned?

Comment by papetoast on 2023 Unofficial LessWrong Census/Survey · 2023-12-03T02:02:09.225Z · LW · GW

For Google Forms, if the question is not required, you can click the same radio button twice to cancel the selection.

Comment by papetoast on The 101 Space You Will Always Have With You · 2023-11-29T06:58:17.086Z · LW · GW

I love this post. I needed the reminder that sometimes I should spend more time explaining things to people.

Comment by papetoast on papetoast's Shortforms · 2023-11-27T07:50:56.342Z · LW · GW

4 reasons to talk about your problem with friends

This is advice I would give myself 5 years ago; I'm just storing it somewhere public and forcing myself to write. Writing seems like an important skill, but I always feel like I have nothing to say.

  1. It forces you to think. Sometimes you aren't actually thinking about solutions to a problem even though it has been bothering you for a long time.
  2. For certain problems: the psychological feeling of being understood. For some people, getting the sense that "what I'm feeling is normal" is also important. It can be a false sense of comfort, but sometimes you need it.
  3. A hidden "contract" that raises the stakes of not fixing your problem: you now risk looking unimpressive to more people if you fail, since your friend also knows you're struggling. This increases the motivation to solve the problem. Related: akrasia?
  4. There is also the slim possibility that, even though you think your friend can't help you, they actually can.

Comment by papetoast on Wikipedia is not so great, and what can be done about it. · 2023-11-27T03:27:30.879Z · LW · GW

I read the whole thing; there are no solutions in the middle. I also feel like the "what can be done about it" part is very underdeveloped.

Comment by papetoast on Open Thread – Autumn 2023 · 2023-11-22T10:14:08.904Z · LW · GW

It would be nice if the side-comments setting were remembered. Right now it defaults to Show Upvoted without regard to what I previously selected.

Comment by papetoast on Reinforcement Via Giving People Cookies · 2023-11-20T03:20:42.097Z · LW · GW

Did you use reinforcement to make yourself write a post every weekday?

Comment by papetoast on What’s your best utilitarian model for risking your best kidneys? · 2023-11-13T09:43:20.679Z · LW · GW

(I only thought for 1 minute.)

This argument seems valid for a large number of soldiers (~100,000). But when numbers are small, a different strategy should dominate. Perhaps forcing a uniform distribution of kidney donations (by randomly forcing a soldier to donate their kidney) could work better.

Comment by papetoast on papetoast's Shortforms · 2023-11-07T02:37:59.397Z · LW · GW

Since we have meta search engines that aggregate results from many search engines, is it time for us to get a meta language model* that queries chatGPT, Bing, Bard, and Claude all at the same time, then automatically ranks the results, perhaps even merging all of the replies into a single reply?

*meta language model is an extremely bad name because of the company Meta and the fact that the thing I am thinking of isn't really a language model, but ¯\_(ツ)_/¯
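
(A rough sketch of the shape of the thing, with stubbed-out model calls rather than any real API; a real version would swap in whatever services you have access to and a smarter ranking step:)

```python
# Fan one prompt out to several chatbots, then rank the replies.
# The model callables here are stubs, not real API clients.
from typing import Callable

def make_stub(name: str) -> Callable[[str], str]:
    # Stand-in for a real call to the named service.
    return lambda prompt: f"[{name}] reply to: {prompt}"

MODELS = {name: make_stub(name) for name in ("chatGPT", "Bing", "Bard", "Claude")}

def score(reply: str) -> float:
    # Placeholder ranking heuristic; a real aggregator might use another
    # model as a judge, or vote/merge across the replies.
    return float(len(reply))

def meta_query(prompt: str) -> list[tuple[str, str]]:
    replies = {name: model(prompt) for name, model in MODELS.items()}
    return sorted(replies.items(), key=lambda kv: score(kv[1]), reverse=True)

print(meta_query("best meta search engine?"))
```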

Comment by papetoast on papetoast's Shortforms · 2023-11-07T02:31:24.407Z · LW · GW

I admit that 30-50% is arbitrary and shouldn't have been stated like a fact; I have removed it. (I didn't mean to have such a strong tone there, but I did.) What I really want to say is that the default location for the target text should be somewhere closer to the middle, or wherever most people usually rest their eyes. (Perhaps exactly the height where you clicked the in-page redirect?)

I still stand by the claim that it should not be exactly at the top, for ease of reading (I hope this doesn't sound too motte-and-bailey). The reason it redirects to the top is probably that the top is a very objective location and isn't affected by device size. But it is very much not where the line of text you are currently reading will normally be. I am willing to bet that <3% of people read articles by scrolling the line they are currently reading up into the top three visible lines.

Comment by papetoast on papetoast's Shortforms · 2023-11-06T13:36:44.067Z · LW · GW

I always thought the in-page redirects are fucking stupid; they should bring the text I want to see closer to eye level, not exactly to the top, where even browser bars can block the text (this happens when you go back from footnotes to the article on LW).

Comment by papetoast on [deleted post] 2023-11-06T13:30:42.148Z

I think it never worked; the old LW wiki page for Cox's theorem has the same broken link.

Comment by papetoast on On (Not) Reading Papers · 2023-10-22T08:14:51.486Z · LW · GW

"high impact journals" often reward extreme brevity

I read the linked article and I don't think it supports your claim. The author references a few examples of extremely short abstracts and papers written with the intention of setting records for brevity, then recounts a conversation with a friend about how shorter papers have been proliferating. The article does not provide a strong argument that high-impact journals reward extreme brevity in general.

Comment by papetoast on Why it's necessary to shoot yourself in the foot · 2023-10-22T06:40:35.236Z · LW · GW

When teaching programming, we should let people make these mistakes, and then show them the tools to correct them.

We can also just teach them what mistakes are likely to happen and why, then show them the tools. Letting them actually make the mistakes will make them understand more deeply, but I think it is usually not worth the extra time.

Comment by papetoast on Open Thread – Autumn 2023 · 2023-10-17T01:47:53.793Z · LW · GW

It is kind of unfortunate that the top search suggestion for lesswrong is still "lesswrong cult". I tested it on multiple new devices and it is very consistent.

Comment by papetoast on Optimal Exercise · 2023-10-08T09:01:22.437Z · LW · GW

What does it mean for exercise to be optimal?

(Quoting for sidebar visibility, not as a response.)

RomeoStevens has made a followup, Updates and Reflections on Optimal Exercise after Nearly a Decade on his new account.

Comment by papetoast on Central, Hong Kong – ACX Meetups Everywhere Fall 2023 · 2023-10-07T06:03:16.605Z · LW · GW

I just got on A11 from the Bridge station, so everything should be ok.

Comment by papetoast on Central, Hong Kong – ACX Meetups Everywhere Fall 2023 · 2023-10-07T03:18:13.778Z · LW · GW

How much earlier can I come to the venue? I'm estimating when I should leave home.

Comment by papetoast on Central, Hong Kong – ACX Meetups Everywhere Fall 2023 · 2023-10-05T13:46:11.739Z · LW · GW

Depends on the typhoon :/. I have no idea whether I can come from Macau on Saturday.

(Does anyone here understand the typhoon data well enough to tell me whether it may actually be a problem, or whether I am worrying too much?)

Comment by papetoast on Has anyone actually tried to convince Terry Tao or other top mathematicians to work on alignment? · 2023-09-29T08:14:43.814Z · LW · GW

Any updates? I skimmed through the comments and answers, but it seems all we know is that someone planned to contact Terence Tao, and no results have been reported back.

Comment by papetoast on How have you become more hard-working? · 2023-09-25T14:21:39.173Z · LW · GW

I used Boss as a Service combined with Beeminder to give myself a lot of accountability.

Comment by papetoast on LessWrong FAQ · 2023-09-25T01:26:42.091Z · LW · GW

I think most people actually know this. When you try to cancel an upvote on mobile, you have to cycle from upvote -> strong upvote -> no vote.

Comment by papetoast on Asking for help as an O(1) lookup · 2023-09-17T08:35:54.885Z · LW · GW

Wouldn't it be cool if you could pay $20 or so to hop on a quick call and talk it through with a real chef?

I agree. While I personally don't cook, I would definitely love to be able to call a professional when needed.

Calling is especially nice, too. I sometimes find asking for help inconvenient because it will take my friend hours to respond, and calling them seems unnecessary / costs too much social credit.

Comment by papetoast on What wiki-editing features would make you use the LessWrong wiki more? · 2023-09-16T08:51:29.657Z · LW · GW

Apparently you need to have experimental features on.

https://www.lesswrong.com/posts/q6d4vtGwt5FE84wk7/beta-feature-google-docs-like-editing-for-lesswrong-posts

Comment by papetoast on Survey advice · 2023-09-16T06:35:47.372Z · LW · GW

Guidedtrack.com

is now https://www.guidedtrack.com/

Comment by papetoast on What wiki-editing features would make you use the LessWrong wiki more? · 2023-09-14T06:58:09.806Z · LW · GW

Unless you are specifically talking about writing a wiki article (with which I have zero experience), it is possible to do this with a draft without co-authorship.

Comment by papetoast on How do I read things on the internet · 2023-08-21T12:46:39.216Z · LW · GW

Meta: the text on your website is cluttered enough that I found reading on LW easier; something about the indentation and the small paragraph spacing.

Comment by papetoast on Announcing the EA Archive · 2023-07-06T15:28:00.287Z · LW · GW

Any reason why Syncthing is not used? It can provide continuous file synchronization on Windows/Mac/Linux/Android, though I have never tried large-scale syncing across multiple people, so I don't know how well it would work in this use case.

Edit: I should elaborate a little. The benefit is that Syncthing can keep every copy of the EA Archive at the newest version as long as the Internet and Syncthing relay servers are available.

Comment by papetoast on Open Thread: June 2023 (Inline Reacts!) · 2023-07-05T05:34:26.387Z · LW · GW

Four small remarks on reactions, not too related to your update:

  1. I think the scroll bar, with its relatively dark colour, still makes scroll-for-more-reacts pretty obvious. Not sure how to improve on this; maybe just make the colour a lighter gray?
  2. The scroll bar has different thickness on Firefox vs Chromium (The fact that I am using Firefox may be why I found the scroll bar too obvious)

  3. Initially, I thought the top bar of reacts was either the most commonly used reacts or the reacts I had most recently used. Either way, I expected I could find them in the list of reacts. I got a little bit confused when I read through the whole list once and couldn't find them.
  4. On really long comments, I can see from the underlines that a part has a react, but I cannot tell what the react is, because reacts are at the bottom of a comment and the comment is way too long to see both its content and the bottom at once. It would be great to see the reacts when I hover over the underlined text.

Comment by papetoast on lsusr's Shortform · 2023-06-21T04:25:55.367Z · LW · GW

EDIT: Never mind, I just saw that you wrote Contrarian Writing Advice in response to Daniel Kokotajlo. I haven't read that.

Disagree with 2, 6. Not sure about 5. Agree with others.

2. Write the minimum necessary to prove a point. Do not preempt counterarguments.

https://slatestarcodex.com/2016/02/20/writing-advice/
Scott suggests anticipating and defusing counterarguments (#8 on his list). I rarely write anything, but it seems about right to preemptively address the most likely ways that people will misunderstand you. I also like Duncan's Ruling Out Everything Else, which suggests setting up some boundaries so that others cannot misinterpret you too much.

6. Do not repeat yourself.

Using examples helps readers understand, and using a lot of examples will probably make you repeat some points a few times. (Perhaps you don't count that as repeating yourself?) It is probably best to use more examples but mark them as non-compulsory reading in one way or another.

Comment by papetoast on Open Thread: June 2023 (Inline Reacts!) · 2023-06-21T04:12:55.361Z · LW · GW

Is there a way to find all the posts that you have strong upvoted? I bet there is a hidden query that I can use on the All Posts page but I can't find it easily.

If it actually doesn't exist then this comment acts as my vote on getting it implemented.

Comment by papetoast on How to Price a Futures Contract · 2023-06-20T11:30:34.362Z · LW · GW

How do you extract risk-free profit from an underpriced future with strike price ?

This should read overpriced, not underpriced, right?
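
(For what it's worth, the textbook cash-and-carry argument is why "overpriced" reads as the right word; a rough sketch with made-up numbers, not the post's own figures:)

```python
import math

# If the quoted futures price is above spot grown at the risk-free rate,
# the future is overpriced and the difference can be locked in risk-free.
spot = 100.0              # price of the underlying today (hypothetical)
r = 0.05                  # risk-free rate, continuous compounding (hypothetical)
T = 1.0                   # years until delivery
fair = spot * math.exp(r * T)   # ~105.13, the no-arbitrage futures price

quoted = 110.0            # an overpriced future
# Today: borrow `spot`, buy the underlying, short the future at `quoted`.
# At delivery: hand over the underlying for `quoted`, repay `fair` on the loan.
profit = quoted - fair    # ~4.87, locked in regardless of where the price goes
print(round(profit, 2))
```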

Comment by papetoast on Updates and Reflections on Optimal Exercise after Nearly a Decade · 2023-06-11T13:01:57.072Z · LW · GW

You should link this post at the top of your old one.