Posts

papetoast's Shortforms 2023-01-20T01:56:32.921Z

Comments

Comment by papetoast on Green goo is plausible · 2024-12-12T14:35:07.460Z · LW · GW

You can still nominate posts until Dec 14th?

Comment by papetoast on adamzerner's Shortform · 2024-12-12T13:43:11.509Z · LW · GW

I have thought about community summaries a little bit too. With the current LW UI, I envision that the most likely way to achieve this is to:

  1. Write a distillation comment instead of post
  2. Quote the first sentence of the sequences post so that it could show up on the side at the top
  3. Wait for the LW team to make this setting persistent so people can choose Show All

Comment by papetoast on Picking favourites is hard · 2024-12-05T13:13:14.087Z · LW · GW

There is also the issue of things only being partially orderable.

When I was recently celebrating something, I was asked to share my favorite memory. I realized I didn't have one. Then (since I have been studying Naive Set Theory a LOT), I got tetris-effected and as soon as I heard the words "I don't have a favorite" come out of my mouth, I realized that favorite memories (and in fact favorite lots of other things) are partially ordered sets. Some elements are strictly better than others but not all elements are comparable (in other words, the set of all memories ordered by favorite does not have a single maximal element). This gives me a nice framing to think about favorites in the future and shows that I'm generalizing what I'm learning by studying math which is also nice!

- Jacob G-W in his shortform

Comment by papetoast on papetoast's Shortforms · 2024-12-03T22:35:31.482Z · LW · GW

It is hard to see, changed to n.

Comment by papetoast on papetoast's Shortforms · 2024-12-03T13:06:59.833Z · LW · GW

In my life I have never seen a good one-paragraph explanation of backpropagation so I wrote one.

The most natural algorithms for calculating derivatives work by traversing the expression syntax tree[1]. The tree has two ends, and starting the traversal from each end gives one of the two good derivative algorithms: forward propagation (starting from the input variables) and backward propagation (starting from the output variables). In both algorithms, calculating the derivative of one output variable y with respect to one input variable x actually creates a lot of intermediate artifacts. In the case of forward propagation, these artifacts mean you get dy_j/dx for every output y_j for ~free, and in backward propagation you get dy/dx_i for every input x_i for ~free. Backpropagation is used in machine learning because usually there is only one output variable (the loss, a number representing the difference between model prediction and reality) but a lot of input variables (the parameters; on the scale of millions to billions).

This blogpost has the clearest explanation. Credits for the image too.

https://colah.github.io/posts/2015-08-Backprop/
  1. ^

    or maybe a directed acyclic graph for multivariable vector-valued functions like f(x,y)=(2x+y, y-x)
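A minimal toy sketch of the idea (hypothetical code, not from the linked blogpost): one backward pass over a tiny expression graph yields the derivative of the output with respect to *every* input at once.

```python
# Reverse-mode differentiation on the tiny graph f(x1, x2) = (x1 * x2) + x2.
# The "intermediate artifacts" are the per-node derivatives accumulated on
# the way back from the output.

def f_and_grads(x1, x2):
    # Forward pass: evaluate the graph and keep intermediates.
    a = x1 * x2          # internal node a
    y = a + x2           # output node

    # Backward pass: walk from the output, accumulating dy/d(node).
    dy_dy = 1.0
    dy_da = dy_dy * 1.0      # y = a + x2  ->  dy/da = 1
    dy_dx2 = dy_dy * 1.0     # direct path through "+ x2"
    dy_dx1 = dy_da * x2      # a = x1 * x2 ->  da/dx1 = x2
    dy_dx2 += dy_da * x1     # second path into x2: da/dx2 = x1
    return y, dy_dx1, dy_dx2

# f(3, 4) = 16; one backward pass gives dy/dx1 = 4 and dy/dx2 = 3 + 1 = 4.
```

Forward mode would instead need one pass *per input*, which is why reverse mode wins when there are millions of parameters and a single scalar loss.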

Comment by papetoast on (The) Lightcone is nothing without its people: LW + Lighthaven's big fundraiser · 2024-12-01T13:30:25.924Z · LW · GW

Donated $25 for all the things I have learned here.

Comment by papetoast on Facets and Social Networks · 2024-12-01T08:34:59.891Z · LW · GW

Strongly agreed. Content creators seem to get around this by creating multiple accounts for different purposes, but this is difficult to maintain for most people.

Comment by papetoast on Perils of Generalizing from One's Social Group · 2024-11-25T05:53:31.103Z · LW · GW

I rarely see them show awareness of the possibility that selection bias has created the effect they're describing.

In my experience with people I encounter, this is not true ;)

Comment by papetoast on papetoast's Shortforms · 2024-11-23T11:42:46.295Z · LW · GW

Joe Rogero: Buying something more valuable with something less valuable should never feel like a terrible deal. If it does, something is wrong.

clone of saturn: It's completely normal to feel terrible about being forced to choose only one of two things you value very highly.

https://www.lesswrong.com/posts/dRTj2q4n8nmv46Xok/cost-not-sacrifice?commentId=zQPw7tnLzDysRcdQv

Comment by papetoast on papetoast's Shortforms · 2024-11-12T11:46:59.775Z · LW · GW

Yes!

Comment by papetoast on adamzerner's Shortform · 2024-11-12T02:17:27.814Z · LW · GW
  1. Butterfly ideas?
  2. By default I expect the author to have a pretty strong stance on the main idea of a post; also, the content is usually already refined and complete, so the barrier to entry for writing a valuable comment is higher.
Comment by papetoast on papetoast's Shortforms · 2024-11-12T02:10:32.800Z · LW · GW

Bob can choose whether to hide this waste (at a cost of the utility loss by having $300 and worse listening experience, but a "benefit" of misleading Tim about his misplaced altruism)

True in my example. I acknowledge that my example was flawed and I should have been more explicit about Bob having an alternative. Quoting myself from my comment to Vladimir_Nesov:

Anyways, the unwritten thing is that Bob cares about having a quality headphone and a good pair of shoes equally. So given that he already has an alright headphone, he would get more utility by buying a good pair of shoes instead. It is essentially a choice between (a) getting a $300 headphone and (b) getting a $100 headphone and a $300 pair of shoes.

If the bad translation is good enough that the incremental value of a good translation doesn't justify doing it, then that is your answer.

I do accept this as the rational answer, but that doesn't mean it is not irritating. Suppose A (a skillful translator) cares about having a good translation of X slightly more than of Y, while B (a poor translator) cares about Y much more than X. If B can act first, he can work on X and "force" A (via expected utility) to work on Y. It was a failure of mine not to talk about differences in preference in my examples and to expect people to extrapolate and infer it.

Comment by papetoast on papetoast's Shortforms · 2024-11-12T01:36:33.861Z · LW · GW

Again, it seems like we are in agreement lol. I agree with what you said and I meant that, but I tried to compress it into one sentence and failed to communicate it.

Comment by papetoast on papetoast's Shortforms · 2024-11-11T15:09:11.126Z · LW · GW

It sure can! I think we are in agreement on the sunk cost fallacy. I just don't think it applies to example 1, because there exist alternatives that can keep the sunk resources. Btw, this is why my example is on the order of $100; at this price point you probably have a couple of alternative things you could spend the money on.

Comment by papetoast on papetoast's Shortforms · 2024-11-11T14:38:45.268Z · LW · GW

(I need to defend the sad and the annoying in two separate parts)

  1. Yes, but sometimes that is already annoying on its own (Bob is not perfectly rational and sometimes he just really wants the quality headphone, but now math tells Bob that Tim gifting him that headphone means he would have to wait e.g. ~2 years before it is worth buying a new one). Of course Bob can improve his life in other ways with the saved money, but still, it would have been nice to just ask Tim to buy something else if he had known.
  2. Sometimes increasing sum(projects) does not translate directly into increasing utility. This is more obvious in real-life scenarios where actors are less rational and time is a real concept. The sad thing happens when someone with good intentions but poor skill (and you don't know they are that bad) signs up for a time-critical project and fails or does a sub-par job.
Comment by papetoast on papetoast's Shortforms · 2024-11-11T14:26:03.803Z · LW · GW

This is a tangent, but the sunk cost fallacy is not really a fallacy most of the time, because spending more resources beforehand really does increase the chance of "success" most of the time. For more: https://gwern.net/sunk-cost

I am trying to pinpoint the concept of "A doing a mediocre job of X will force B to rationally do Y instead of X, making the progress of X worse than if A had not done anything". The examples are just examples that hopefully help you locate the thing I am handwaving at. I did not try to make them logically perfect because that would take too much time.

Anyways, the unwritten thing is that Bob cares about having a quality headphone and a good pair of shoes equally. So given that he already has an alright headphone, he would get more utility by buying a good pair of shoes instead. It is essentially a choice between (a) getting a $300 headphone and (b) getting a $100 headphone and a $300 pair of shoes. Of course there are some arguments about preference, utility != dollar amount, or something along those lines, but (b) is the better option in the example I constructed to show the point.

Let me know if I still need to explain example 2.

Comment by papetoast on papetoast's Shortforms · 2024-11-11T13:15:15.949Z · LW · GW

It is sad and annoying that if you do a mediocre job (according to the receiver), doing things even for free (volunteer work/gifting) can sabotage the receiver along the very dimension you're supposedly helping with.

This is super vague the way I wrote it, so here are some examples.

Example 1. Bob wants to upgrade and buy a new quality headphone. He has a $300 budget. His friend Tim, not knowing his budget, bought a $100 headphone for Bob. (Suppose second-hand headphones are worthless.) Now Bob cannot just spend $300 to get a quality headphone: he would also be wasting Tim's $100, which counterfactually could have been used to buy something else for Bob. So Bob is stuck using the $100 headphone and spending the $300 somewhere else instead.

Example 2. Andy, Bob, and Chris are the only three people who translate Chinese books to English for free as a hobby. Because there are so many books out there, it is often not worth it to re-translate a book even if the previous translation is bad; spending that time translating a different book is just more helpful to others. Andy and Bob are pretty good, but Chris absolutely sucks. His translations are not unreadable, but they are just barely better than machine translation. Now Chris has taken over translating book X, which happens to be a pretty good book. The world is stuck with Chris' poor translation of book X, with Andy and Bob never touching it again because they have other books to work on.

Comment by papetoast on Open Thread Fall 2024 · 2024-11-09T04:22:06.044Z · LW · GW

I want to use this chance to say that I really want to be able to bookmark a sequence

Comment by papetoast on Abstractions are not Natural · 2024-11-07T06:14:38.512Z · LW · GW

Agreed on the examples of natural abstractions. I held a couple of abstraction examples in my mind (e.g. atom, food, agent) while reading the post and found that it never really managed to attack these truly very general (dare I say natural) abstractions.

Comment by papetoast on Open Thread Fall 2024 · 2024-10-29T09:43:57.694Z · LW · GW

I overlaid my phone's display (using scrcpy) on top of the website rendered on Windows (Firefox). Image 1 shows that they indeed scaled to align. Image 2 (Windows left, Android right) shows how the font is bolder on Windows and somewhat blurred.

The monitor is 2560x1440 (website at 140%) and the phone is 1440x3200 (100%) mapped onto 585x1300.

Comment by papetoast on Open Thread Fall 2024 · 2024-10-29T06:30:21.611Z · LW · GW

I am on Windows. This reply is on Android and yeah definitely some issue with Windows / my PC

Comment by papetoast on Open Thread Fall 2024 · 2024-10-29T03:07:58.401Z · LW · GW

I hallucinated

Comment by papetoast on Open Thread Fall 2024 · 2024-10-29T02:22:30.937Z · LW · GW

Re: the new style (archive for comparison)

Not a fan of

1. the font weight; everything seems semi-bolded now and a little bit more blurred than before. I do not see myself getting used to this.

2. the unboxed karma/agreement vote. It is fine per se, but the old one was also perfectly fine.

 

Edit: I have to say that the font on Windows is actively slightly painful and I need to reduce the time spent reading comments or quick takes.

Comment by papetoast on Word Spaghetti · 2024-10-25T01:03:42.460Z · LW · GW

One funny thing I have noticed about myself is that I am bad enough at communicating certain ideas in speech that sometimes it is easier to handwave at a couple of things that I don't mean and let the listener figure out the largest semantic cluster in the remaining "meaning space".

Comment by papetoast on Laziness death spirals · 2024-10-17T08:50:47.708Z · LW · GW

Even as I’m caught up in lazy activity, I’m making specific plans to be productive tomorrow.

How? I personally can't really make detailed or good plans while in lazy mode.

Comment by papetoast on How I got 4.2M YouTube views without making a single video · 2024-10-14T09:33:44.315Z · LW · GW

The link is not clickable

Comment by papetoast on Overview of strong human intelligence amplification methods · 2024-10-10T07:43:35.640Z · LW · GW

Manifold is pretty weak evidence for anything >=1 year away, because there are strong incentives to bet on short-term markets.

Comment by papetoast on Information dark matter · 2024-10-04T12:25:19.940Z · LW · GW

The list of once “secret” documents is very cool, thanks for that. (But I skimmed the other parts too)

Comment by papetoast on Dalcy's Shortform · 2024-10-03T02:41:41.362Z · LW · GW

I think the interchangeability is just hard to understand. Even though I know they are the same thing, it is still really hard to intuitively see them as equal. I personally try (but not very hard) to stick with X -> Y in mathy discussions and "if/only if" in normal discussions.

Comment by papetoast on Should we abstain from voting? (In nondeterministic elections) · 2024-10-02T12:33:47.818Z · LW · GW

For nondeterministic voting, surely you can just estimate the expected utility of your vote and decide whether voting is worth the effort. It is probably even easier than for deterministic elections.

Btw, I feel like the post is too incomplete on its own for the title Should we abstain from voting?. It feels more like Why being uninformed isn't a reason not to vote.
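The "just estimate the expected utility" point can be sketched in a few lines. This is a hypothetical illustration (the function name and all numbers are made up); in a nondeterministic scheme the pivotal probability falls straight out of the mechanism instead of having to be estimated from polls.

```python
# Expected value of casting a ballot in a nondeterministic (e.g.
# lottery-weighted) election, under a simple decision-theoretic model.

def expected_value_of_voting(p_pivotal: float, utility_gap: float,
                             cost_of_voting: float) -> float:
    # p_pivotal: probability your ballot changes the drawn outcome
    # utility_gap: how much better your preferred outcome is, in utility
    # cost_of_voting: effort/time cost of actually voting
    return p_pivotal * utility_gap - cost_of_voting

# e.g. a 1-in-10,000 chance of swinging an outcome you value at 1,000,000
# utility, against an hour of effort valued at 20: comfortably positive.
ev = expected_value_of_voting(1e-4, 1_000_000, 20)
```

If the result is negative, abstaining is the answer; no appeal to being "informed enough" is needed.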

Comment by papetoast on Dalcy's Shortform · 2024-09-11T01:07:05.284Z · LW · GW

Maybe make a habit of blocking https://www.lesswrong.com/posts/*  while writing?

Comment by papetoast on How I got 4.2M YouTube views without making a single video · 2024-09-04T02:53:58.821Z · LW · GW

The clickbait title is misleading, but I forgive this one because I did end up finding it interesting, and it is short. In general I mostly don't try to punish things if they end up being good/correct.

Comment by papetoast on papetoast's Shortforms · 2024-08-22T03:33:32.503Z · LW · GW

Starting today I am going to collect a list of tricks that websites use to prevent you from copying and pasting text, plus how to circumvent them. In general, using uBlock Origin and properly allowing right click fixes most issues.

1. Using href (https://lnk.to/LACA-15863s, archive)

behavior: https://streamable.com/sxeblz

solution: use remove-attr in ublock origin - lnk.to##.header__link:remove-attr(href)

2. Using a background image to cover the text (https://varium.jp/talent/ahiru/, archive)

Note: this example is probably just incompetence. 

behavior: https://streamable.com/bw2wlv 

solution: block the image with ublock origin

3. Using draggable=true (Spotify album titles, archive)

Note: Spotify does have a legit reason to use draggable. You can drag albums or tracks to your library, but I personally prefer having texts to be selectable.

behavior: https://streamable.com/cm0t6b

solution: use remove-attr in ublock origin - open.spotify.com##.encore-internal-color-text-base.encore-text-headline-large.encore-text:upward(1):remove-attr(draggable)

4. Using EventListeners to nullify selections (https://www.uta-net.com/song/2765/, archive)

behavior: https://streamable.com/2i1e9k

solution: locate the function using the browser debugger/ctrl+f, then do some uBlock Origin JavaScript filter stuff that I don't really understand. It seems to just be overriding functions and EventListeners. The Annoyances filters worked in this case.

Comment by papetoast on Raemon's Shortform · 2024-08-19T07:22:00.031Z · LW · GW

To answer your question directly - not really.

I think index pages are just meant to be used by a small minority of people in any community. In my mind, the LW concepts page is like the wiki topic groups (not sure what they're called).

The similarities are:

  1. It is fun to go through the concepts page and find tags I haven't learned about; this is good for exploration, but a rare use case (for me)
  2. Because it is an index, it is useful when you have a concept in your mind but can't remember the name

But the concepts page has a worse UX than a wiki, since you have to explicitly search for it rather than having it pop up on the relevant tag pages, and the tags also show up in a cluster.

Comment by papetoast on Raemon's Shortform · 2024-08-18T08:15:37.432Z · LW · GW

How do you use them?

I use it when I am interested in learning about a specific topic. I rarely use the Concepts page, because it contains too many tags, and sometimes I don't even know what tag I am looking for. Instead, I usually already have one or two previously-read articles that feel similar to the topic I am thinking about. I search for those posts, look at the tags, and click on the one that is relevant. On the tag page, I start by reading the wiki, but often feel disappointed by its half-done/incomplete state. Then I filter by high karma and read the articles from top to bottom, skipping ones that feel irrelevant or uninteresting based on the title.

Do you wish you could get value from them better?

I wish the default most-relevant ordering were not based on the raw score, but rather on a normalized relevance score or something more complicated, because right now it means nothing other than "this post is popular, so a lot of people voted on the tags". This default is really bad: every new user has to independently realize that they should change the sorting. LW also does not remember the sorting, so I have to change it manually every time, which is irritating but not a big deal.

Comment by papetoast on Habryka's Shortform Feed · 2024-08-17T04:23:20.296Z · LW · GW

I understand that having the audio player above the title is the path of least resistance, since you can't assume there is enough space on the right to put it in. But ideally things like this should be dynamic and only take up vertical space if they can't fit on the right, no? (But I'm not a frontend dev.)

Alternatively, I would consider moving it vertically above the title a slight improvement. It is not great either, but at least the reason for the gap would be more obvious.


The above screenshots were taken on a 1920x1080 monitor.

Comment by papetoast on Habryka's Shortform Feed · 2024-08-16T08:46:51.874Z · LW · GW

I like most of the changes, but strongly dislike the large gap before the title. (I similarly dislike the large background in the top 50 of the year posts)

Comment by papetoast on Open Thread Summer 2024 · 2024-08-07T04:41:11.192Z · LW · GW

Seems like every new post - no matter the karma - is getting the "listen to this post" button now. I love it.

Comment by papetoast on papetoast's Shortforms · 2024-08-07T03:49:29.550Z · LW · GW

I do believe that projects in general often fail due to a lack of glue responsibilities, but I didn't want to generalize too much in what I wrote.

Start with integration. Get the end-to-end WORKING MOCKUP going with hardcoded behaviors in each module, but working interfaces. This is often half or more of the work, and there's no way to avoid it - doing it at the end is painful and often fails. Doing it up front is painful but actually leads to completion.

Being able to convince everyone to put in the time to do this upfront is already a challenge :/ Sometimes I feel quite hopeless/sad about not being able to realistically make some coordination techniques work, because of everyone's differing goals and hidden motivations, or the large upfront cost of building a new consensus away from the Schelling point of normal university projects.

Comment by papetoast on papetoast's Shortforms · 2024-08-06T07:55:12.704Z · LW · GW

A common failure mode in group projects is that students will break up the work into non-overlapping parts, and proceed to stop giving a fuck about the others' work afterwards because it is not their job anymore.

This especially causes problems at the final stage where they need to combine the work and make a coherent piece out of it.

  1. No one is responsible for merging the work
  2. Lack of mutual communication during the process means that the work pieces cannot be nicely connected without a lot of modifications (which no one is responsible for).

At this point the deadline is likely just a couple of days (or hours) away, everyone is tired of this crap and no one wants to work on it, but the combined work is still a piece of incoherent crap.

I wonder how I can do better at coordination while dealing with normal peers and while only doing a fair amount of work.

Comment by papetoast on Superbabies: Putting The Pieces Together · 2024-07-27T06:52:06.119Z · LW · GW

Thanks for adding a much more detailed/factual context! This added more concrete evidence to my mental model of "ELO is not very accurate in multiple ways" too. I did already know some of the inaccuracies in how I presented it, but I wanted to write something rather than nothing, and converting vague intuitions into words is difficult.

Comment by papetoast on Superbabies: Putting The Pieces Together · 2024-07-26T05:32:08.719Z · LW · GW

Take with a grain of salt.

Observation:

  1. Chess engines during development only play against themselves, so they use a relative ELO system that is detached from the FIDE ELO. https://github.com/official-stockfish/Stockfish/wiki/Regression-Tests#normalized-elo-progression https://training.lczero.org/?full_elo=1 https://nextchessmove.com/dev-builds/sf14
  2. It is very hard to find chess engines confidently telling you what their FIDE ELO is.

Interpretation / Guess: Modern chess engines probably need to use some intermediate engines to transitively calculate their ELO (engine A is 200 ELO above players rated 2200, engine B is again 200 ELO better than A...). This is expensive to calculate, and the error bar likely increases as you use more intermediate engines.
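A rough sketch of the arithmetic involved, using the standard logistic Elo expected-score formula (the 2200/200 numbers are the hypothetical ones from above, not real measurements):

```python
# Elo expected score: probability-like score of A against B.
def expected_score(r_a: float, r_b: float) -> float:
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

# Transitive anchoring: engine A is measured 200 points above 2200-rated
# humans, engine B is measured 200 points above A, so B's implied
# human-scale rating is 2600. Each 200-point link carries its own
# measurement error, so the error bar grows with every intermediate
# engine in the chain.
human_anchor, gap = 2200, 200
engine_a = human_anchor + gap        # 2400 via one link
engine_b = engine_a + gap            # 2600 via two links

# A 400-point gap corresponds to ~10:1 odds under the Elo model, which is
# why direct engine-vs-human games stop being informative: the stronger
# side wins almost every game, giving very little rating signal.
```

This is also why engine self-play ladders (Stockfish regression tests, lc0 training Elo) stay on their own internal scale rather than claiming a FIDE number.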

Comment by papetoast on Superbabies: Putting The Pieces Together · 2024-07-22T07:21:54.311Z · LW · GW

I follow chess engines very casually as a hobby. Trying to calibrate chess engines' computer-versus-computer ELO against human ELO is a real problem. I doubt extrapolating IQ over 300 will provide accurate predictions.

Comment by papetoast on papetoast's Shortforms · 2024-07-15T07:20:08.854Z · LW · GW

Ranting about LangChain, a python library for building stuff on top of llm calls.

LangChain is a horrible pile of abstractions. There are many ways of doing the same thing. Every single function has a lot of gotchas (that don't even get mentioned in the documentation). Common usage patterns are hidden in unintuitive, hard-to-find locations (callbacks have to be implemented as an instance of a certain class inside a config TypedDict). Community support is non-existent despite the large number of users. Exceptions are often incredibly unhelpful, with unreadable stack traces. Lots of stuff is impossible to type check because LangChain allows for too much flexibility: they take in prompt templates as format strings (i.e. "strings with {variables}") and then let you fill in the template at runtime with a dict, so now nothing can be statically type checked :)
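A minimal sketch of the format-string complaint. The `render` helper below is a stand-in, not LangChain's actual API; it just shows why this pattern defeats static type checking.

```python
# Prompt templates as format strings, filled from a dict at runtime.
template = "Summarize {document} in {n_words} words."

def render(template: str, variables: dict) -> str:
    # Neither the placeholder names nor the dict keys are visible to a
    # static type checker: a missing or misspelled key only fails at
    # runtime, with a KeyError.
    return template.format(**variables)

render(template, {"document": "...", "n_words": 50})   # works
# render(template, {"doc": "..."})   # KeyError at runtime, no static warning
```

By contrast, a plain function signature like `def prompt(document: str, n_words: int) -> str` would let mypy or pyright catch the misspelled argument before the code ever runs.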

Comment by papetoast on papetoast's Shortforms · 2024-07-08T07:58:29.741Z · LW · GW

There are a few things I dislike about math textbooks and PDFs in general. For example, math textbooks often use theorems from many pages ago, requiring switching back and forth (sometimes there isn't even a hyperlink!). I also don't like how proofs sometimes go way too deep into individual steps and are sometimes way too brief.

I wish something like this existed (Claude generated it for me, prompt: https://pastebin.com/Gnis891p)

Comment by papetoast on papetoast's Shortforms · 2024-07-01T04:54:17.037Z · LW · GW

Many people don't seem to know when and how to invalidate the cached thoughts they have. I noticed an instance of my dad being unable to cache-invalidate his model of a person: he is probably still modelling >50% of me as who I was >5 years ago.

The Intelligent Social Web briefly talked about this for other reasons.

A lot of (but not all) people get a strong hit of this when they go back to visit their family. If you move away and then make new friends and sort of become a new person (!), you might at first think this is just who you are now. But then you visit your parents… and suddenly you feel and act a lot like you did before you moved away. You might even try to hold onto this “new you” with them… and they might respond to what they see as strange behavior by trying to nudge you into acting “normal”: ignoring surprising things you say, changing the topic to something familiar, starting an old fight, etc.

In most cases, I don’t think this is malice. It’s just that they need the scene to work. They don’t know how to interact with this “new you”, so they tug on their connection with you to pull you back into a role they recognize. If that fails, then they have to redefine who they are in relation to you — which often (but not always) happens eventually.

Comment by papetoast on [New Feature] Your Subscribed Feed · 2024-07-01T03:12:27.162Z · LW · GW

I would like the option to separate subscribing to posts and subscribing to comments. I mostly just want to subscribe to posts, because it is much easier to decide whether I want to read a post than a comment.

Comment by papetoast on Regularly meta-optimization · 2024-06-29T10:09:21.063Z · LW · GW

that is much clearer; I think you should have said it out loud in the post

Comment by papetoast on Open Thread Summer 2024 · 2024-06-29T03:07:41.369Z · LW · GW

I also mostly switched to browser bookmarks now, but I do think even this simple implementation of in-site bookmarks is overall good. Bookmarking in-site syncs over devices by default and provides more integrated information.

Comment by papetoast on Open Thread Summer 2024 · 2024-06-26T02:03:33.570Z · LW · GW

I want to be able to quickly see whether I have bookmarked a post, to avoid clicking into it (hence I suggested it be a badge, rather than a button like in the Bookmarks tab). Especially with the new recommendation system that resurfaces old posts, I sometimes accidentally click on posts that I bookmarked months before.