Posts
Comments
- Butterfly ideas?
- By default I expect the author to have a pretty strong stance on the main idea of a post, and the content is usually already refined and complete, so the barrier to entry for writing a comment that is valuable is higher.
Bob can choose whether to hide this waste (at the cost of the utility loss from keeping the $300 and having a worse listening experience, but with the "benefit" of misleading Tim about his misplaced altruism)
True in my example. I acknowledge that my example is flawed and that I should have been more explicit about there being an alternative. Quoting myself from the comment to Vladimir_Nesov:
Anyways, the unwritten thing is that Bob cares about having a quality headphone and a good pair of shoes equally. So given that he already has an alright headphone, he would get more utility by buying a good pair of shoes instead. It is essentially a choice between (a) getting a $300 headphone and (b) getting a $100 headphone and a $300 pair of shoes.
If the bad translation is good enough that the incremental value of a good translation doesn't justify doing it, then that is your answer.
I do accept this as the rational answer, but that doesn't mean it isn't irritating. Suppose A (a skillful translator) cares about having a good translation of X slightly more than of Y, and B (a poor translator) cares about Y much more than X. If B can act first, he can work on X and "force" A (via expected utility) to work on Y. It was a failure of mine not to talk about the difference in preferences in my examples and to expect people to extrapolate and infer it.
Again, seems like we are in agreement lol. I agree with what you said, and that is what I meant, but I tried to compress it into one sentence and failed to communicate it.
It sure can! I think we are in agreement on the sunk cost fallacy. I just don't think it applies to example 1, because there exist alternatives that preserve the sunk resources. Btw this is why my example is on the order of $100: at this price point you probably have a couple of alternative things to buy to spend the money on.
(I need to defend the sad and the annoying in two separate parts)
- Yes, but sometimes that is already annoying on its own (Bob is not perfectly rational, and sometimes he just really wants the quality headphone, but now the math tells Bob that Tim gifting him that headphone means he would have to wait e.g. ~2 years before it is worth buying a new one). Of course Bob can improve his life in other ways with the saved money, but still, it would have been nice to just ask Tim to buy something else, had Bob known.
- Sometimes increasing sum(projects) does not translate directly into increasing utility. This is more obvious in real-life scenarios where actors are less rational and time is a real constraint. The sad thing happens when someone with good intentions but poor skill (and you don't know they are that bad) signs up for a time-critical project and fails or does a sub-par job.
This is a tangent, but the sunk cost fallacy is not really a fallacy most of the time, because spending more resources beforehand really does increase the chance of "success" most of the time. For more: https://gwern.net/sunk-cost
I am trying to pinpoint the concept of "A doing a mediocre job of X will force B to rationally do Y instead of X, making the progress of X worse than if A had not done anything". The examples are just examples that hopefully help you locate the thing I am handwaving at. I did not try to make them logically perfect because that would take too much time.
Anyways, the unwritten thing is that Bob cares about having a quality headphone and a good pair of shoes equally. So given that he already has an alright headphone, he would get more utility by buying a good pair of shoes instead. It is essentially a choice between (a) getting a $300 headphone and (b) getting a $100 headphone and a $300 pair of shoes. Of course there are some arguments about preferences, utility != dollar amount, or something along those lines. But (b) is the better option in my constructed example, to show the point.
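To make the comparison concrete, here is a minimal sketch with made-up utility numbers; the only constraint taken from the example is that Bob values the quality headphone and the good shoes about equally:

```python
# Utilities on an arbitrary scale; the numbers are made up for illustration.
U_QUALITY_HEADPHONE = 10
U_GOOD_SHOES = 10        # valued about equally to the quality headphone
U_CHEAP_HEADPHONE = 6    # the gifted $100 headphone still delivers most of the listening value

option_a = U_QUALITY_HEADPHONE               # spend the $300 on the headphone upgrade anyway
option_b = U_CHEAP_HEADPHONE + U_GOOD_SHOES  # keep the gift, spend the $300 on shoes

print(option_a, option_b)  # 10 vs 16 -> (b) wins under these assumptions
```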
Let me know if I still need to explain example 2
It is sad and annoying that if you do a mediocre job (according to the receiver), doing things even for free (volunteer work/gifting) can sabotage the receiver along the dimension you're supposedly helping.
This is super vague the way I wrote it, so here are some examples.
Example 1. Bob wants to upgrade and buy a new quality headphone. He has a $300 budget. His friend Tim, not knowing his budget, bought a $100 headphone for Bob. (Suppose second-hand headphones are worthless.) Now Bob cannot just spend $300 to get a quality headphone: he would also be wasting Tim's $100, which counterfactually could have been used to buy something else for Bob. So Bob is stuck using the $100 headphone and spending the $300 somewhere else instead.
Example 2. Andy, Bob, and Chris are the only three people who translate Chinese books to English for free as a hobby. Because there are so many books out there, it is often not worth it to re-translate a book even if the previous translation is bad, because spending that time translating a different book is just more helpful to others. Andy and Bob are pretty good, but Chris absolutely sucks. His translations are not unreadable, but they are just barely better than machine translation. Now Chris has taken over translating book X, which happens to be a pretty good book. The world is now stuck with Chris' poor translation of book X, with Andy and Bob never touching it again because they have other books to work on.
I want to use this chance to say that I really want to be able to bookmark a sequence
Agreed on the examples of natural abstractions. I held a couple abstraction examples in my mind (e.g. atom, food, agent) while reading the post and found that it never really managed to attack these truly very general (dare I say natural) abstractions.
I overlaid my phone's display (using scrcpy) on top of the website rendered on Windows (Firefox). Image 1 shows that they are indeed scaled to align. Image 2 (Windows left, Android right) shows how the font is bolder on Windows and somewhat blurred.
The monitor is 2560x1440 (website at 140%) and the phone is 1440x3200 (100%) mapped onto 585x1300.
I am on Windows. This reply is on Android, and yeah, there is definitely some issue with Windows / my PC.
Re: the new style (archive for comparison)
Not a fan of
1. the font weight, everything seems semi-bolded now and a little bit more blurred than before. I do not see myself getting used to this.
2. the unboxed karma/agreement vote. It is fine per se, but the old one is also perfectly fine.
Edit: I have to say that the font on Windows is actively (if slightly) painful, and I need to reduce the time I spend reading comments or quick takes.
One funny thing I have noticed about myself is that I am bad enough at communicating certain ideas in speech that sometimes it is easier to handwave at a couple of things that I don't mean and let the listener figure out the largest semantic cluster in the remaining "meaning space".
Even as I’m caught up in lazy activity, I’m making specific plans to be productive tomorrow.
How? I personally can't really make detailed or good plans during lazy mode
The link is not clickable
Manifold is pretty weak evidence for anything >=1 year away because there are strong incentives to bet on short term markets.
The list of once “secret” documents is very cool, thanks for that. (But I skimmed the other parts too)
I think the interchangeability is just hard to understand. Even though I know they are the same thing, it is still really hard to intuitively see them as being equal. I personally try (but not very hard) to stick with X -> Y in mathy discussions and if/only if for normal discussions
For nondeterministic voting, surely you can just estimate the expected utility of your vote and decide whether voting is worth the effort. Probably even easier than for deterministic voting.
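A minimal sketch of that estimate, assuming a random-ballot style of nondeterministic voting where each vote shifts the winning probability by roughly 1/(number of votes); all numbers are hypothetical placeholders:

```python
# Under a random-ballot rule, casting a vote raises the probability of your
# preferred outcome by about 1 / total_votes, so the expected value is easy to compute.
total_votes = 1_000_000        # hypothetical electorate size
value_of_outcome = 50_000      # hypothetical: how much you value your side winning
cost_of_voting = 0.5           # hypothetical: time/effort cost, in the same units

expected_gain = value_of_outcome / total_votes
print(expected_gain, expected_gain > cost_of_voting)  # 0.05, False -> not worth it here
```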
Btw, I feel like the post is too incomplete on its own for the title Should we abstain from voting?. It feels more like Why being uninformed isn't a reason to not vote.
Maybe make a habit of blocking https://www.lesswrong.com/posts/* while writing?
The clickbait title is misleading, but I forgive this one because I did end up finding it interesting, and it is short. In general I mostly don't try to punish things if they end up being good/correct.
Starting today I am going to collect a list of tricks that websites use to prevent you from copying and pasting text, plus how to circumvent them. In general, using ublock origin and allow right click properly fixes most issues.
1. Using href (https://lnk.to/LACA-15863s, archive)
behavior: https://streamable.com/sxeblz
solution: use remove-attr in ublock origin - lnk.to##.header__link:remove-attr(href)
2. Using a background image to cover the text (https://varium.jp/talent/ahiru/, archive)
Note: this example is probably just incompetence.
behavior: https://streamable.com/bw2wlv
solution: block the image with ublock origin
3. Using draggable=true (Spotify album titles, archive)
Note: Spotify does have a legit reason to use draggable. You can drag albums or tracks to your library, but I personally prefer having the text be selectable.
behavior: https://streamable.com/cm0t6b
solution: use remove-attr in ublock origin - open.spotify.com##.encore-internal-color-text-base.encore-text-headline-large.encore-text:upward(1):remove-attr(draggable)
4. Using EventListeners to nullify selections (https://www.uta-net.com/song/2765/, archive)
behavior: https://streamable.com/2i1e9k
solution: locate the function using the browser debugger/ctrl+f, then do some ublock origin javascript filter stuff that I don't really understand. Seems to just be overriding functions and EventListeners. The Annoyances filters worked in this case.
To answer your question directly - not really.
I think index pages are just meant to be used by only a small minority of people in any community. In my mind, the LW concepts page is like the wiki topic groups (not sure what they're called).
The similarities are:
- It is fun to go through the Concepts page and find tags I haven't learned about; this is good for exploration, but it is a rare use case (for me)
- Because it is an index, it is useful when you have a concept in your mind but can't remember the name
But the Concepts page has a worse UX than a wiki, since you have to explicitly search for it rather than having it pop up on the relevant tag pages, and also the tags show up in a cluster.
How do you use them?
I use it when I am interested in learning about a specific topic. I rarely use the Concepts page, because it contains too many tags, and sometimes I don't even know what tag I am looking for. Instead, I usually already have one or two articles that I have previously read which feel similar to the topic I am thinking about. I then search for those posts, look at the tags, and click on the one that is relevant. On the tag page, I start by reading the wiki, but often feel disappointed by how half-done/incomplete it is. Then I filter by high karma and read the articles from top to bottom, skipping ones that feel irrelevant or uninteresting based on the title.
Do you wish you could get value from them better?
I wish the default "most relevant" ordering were not based on the raw score but rather on a normalized relevance score or something more complicated, because right now it means nothing other than "this post is popular, so a lot of people voted on its tags". This default is really bad; every new user has to independently realize that they should change the sorting. LW also does not remember the sorting, so I have to change it manually every time, which is irritating but not a big deal.
I understand that having the audio player above the title is the path of least resistance, since you can't assume there is enough space on the right to put it in. But ideally things like this should be dynamic, and only take up vertical space if you can't put it on the right, no? (but I'm not a frontend dev)
Alternatively, I would consider moving them vertically above the title a slight improvement. It is not great either, but at least the reason for having the gap is more obvious.
The above screenshots were taken on a 1920x1080 monitor.
I like most of the changes, but strongly dislike the large gap before the title. (I similarly dislike the large background in the top 50 of the year posts)
Seems like every new post - no matter the karma - is getting the "listen to this post" button now. I love it.
I do believe that projects in general often fail due to lack of glue responsibilities, but didn't want to generalize too much in what I wrote.
Start with integration. Get the end-to-end WORKING MOCKUP going with hardcoded behaviors in each module, but working interfaces. This is often half or more of the work, and there's no way to avoid it - doing it at the end is painful and often fails. Doing it up front is painful but actually leads to completion.
Being able to convince everyone to put in the time to do this upfront is already a challenge :/ Sometimes I feel quite hopeless?/sad? in that I can't realistically make some coordination techniques work, because of everyone's differing goals and hidden motivations, or because of the large upfront cost of building a new consensus away from the Schelling point of normal university projects.
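For concreteness, a minimal sketch of what the quoted "working mockup with hardcoded behaviors but working interfaces" can look like; the module and function names are made up for illustration:

```python
# End-to-end mockup: every part already speaks its final interface,
# but the behavior behind each interface is still hardcoded.

def fetch_data(query: str) -> list[dict]:
    # Hardcoded stand-in for the real data-collection part.
    return [{"id": 1, "text": f"placeholder result for {query}"}]

def analyze(records: list[dict]) -> dict:
    # Hardcoded stand-in for the real analysis part.
    return {"n_records": len(records), "summary": "TODO: real analysis"}

def render_report(stats: dict) -> str:
    # Hardcoded stand-in for the real write-up part.
    return f"Report: {stats['n_records']} records. {stats['summary']}"

if __name__ == "__main__":
    # The pipeline runs end to end on day one; each member then replaces
    # their hardcoded body without touching anyone else's interface.
    print(render_report(analyze(fetch_data("group project topic"))))
```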
A common failure mode in group projects is that students will break up the work into non-overlapping parts, and proceed to stop giving a fuck about others' work afterwards because it is not their job anymore.
This especially causes problems at the final stage where they need to combine the work and make a coherent piece out of it.
- No one is responsible for merging the work
- Lack of mutual communication during the process means that the work pieces cannot be nicely connected without a lot of modifications (which no one is responsible for).
At this point the deadline is likely just a couple of days (or hours) away, everyone is tired of this crap and doesn't want to work on it, but the combined work is still a piece of incoherent crap.
I wonder how I can do better at coordination while dealing with normal peers and while only doing a fair amount of work.
Thanks for adding a much more detailed/factual context! This added more concrete evidence to my mental model of "ELO is not very accurate in multiple ways" too. I did already know some of the inaccuracies in how I presented it, but I wanted to write something rather than nothing, and converting vague intuitions into words is difficult.
Take with a grain of salt.
Observation:
- Chess engines during development only play against themselves, so they use a relative ELO system that is detached from the FIDE ELO. https://github.com/official-stockfish/Stockfish/wiki/Regression-Tests#normalized-elo-progression https://training.lczero.org/?full_elo=1 https://nextchessmove.com/dev-builds/sf14
- It is very hard to find chess engines confidently telling you what their FIDE ELO is.
Interpretation / Guess: Modern chess engines probably need to use some intermediate engines to transitively calculate their ELO (engine A is 200 ELO stronger than players rated 2200, engine B is another 200 ELO stronger than A, ...). This is expensive to calculate, and the error bar likely increases as you use more intermediate engines.
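A toy version of that transitive calculation, with made-up numbers, just to show how the error bar grows with each intermediate engine:

```python
import math

# Hypothetical chain: human-rated anchor pool -> engine A -> engine B -> engine C.
# Each link is a measured Elo gap with its own standard deviation (in Elo points).
anchor_elo = 2200
links = [(200, 20), (200, 25), (150, 30)]  # (measured gap, std dev) per step, all made up

estimate = anchor_elo + sum(gap for gap, _ in links)
# Assuming independent measurement errors, they add in quadrature.
error = math.sqrt(sum(sd ** 2 for _, sd in links))
print(f"{estimate} +/- {error:.0f} Elo")  # 2750 +/- 44 with these numbers
```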
I follow chess engines very casually as a hobby. Trying to calibrate chess engines' computer-versus-computer ELO against human ELO is a real problem. I doubt extrapolating IQ over 300 will provide accurate predictions.
Ranting about LangChain, a Python library for building stuff on top of LLM calls.
LangChain is a horrible pile of abstractions. There are many ways of doing the same thing. Every single function has a lot of gotchas (which don't even get mentioned in the documentation). Common usage patterns are hidden behind unintuitive, hard-to-find places (callbacks have to be implemented as an instance of a certain class in a config TypedDict). Community support is non-existent despite the large number of users. Exceptions are often incredibly unhelpful, with unreadable stack traces. Lots of stuff is impossible to type check because LangChain allows too much flexibility: it takes prompt templates as format strings (i.e. "strings with {variables}") and then lets you fill in the template at runtime with a dict, so now nothing can be statically type checked :)
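A generic sketch of the templating pattern being complained about (this mimics the format-string style rather than reproducing any exact LangChain signature):

```python
from typing import Any

# The template declares its variables only inside a string, so no static type
# checker can tell whether the dict supplied at runtime actually matches them.
TEMPLATE = "Summarize {document} in the style of {author}."

def fill_prompt(template: str, variables: dict[str, Any]) -> str:
    return template.format(**variables)

# Type-checks fine, but blows up at runtime with KeyError: 'author'.
print(fill_prompt(TEMPLATE, {"document": "the quarterly report", "auther": "Hemingway"}))
```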
There are a few things I dislike about math textbooks and PDFs in general. For example, math textbooks often use theorems from many pages ago, which requires switching back and forth (sometimes there isn't even a hyperlink!). I also don't like how proofs sometimes go way too deep into individual steps and are sometimes way too brief.
I wish something like this existed (Claude generated it for me; prompt: https://pastebin.com/Gnis891p)
Many people don't seem to know when and how to invalidate the cached thoughts they have. I noticed an instance of this in my dad being unable to invalidate his cached model of a person: he is probably still modelling >50% of me as who I was >5 years ago.
The Intelligent Social Web briefly talked about this for other reasons.
A lot of (but not all) people get a strong hit of this when they go back to visit their family. If you move away and then make new friends and sort of become a new person (!), you might at first think this is just who you are now. But then you visit your parents… and suddenly you feel and act a lot like you did before you moved away. You might even try to hold onto this “new you” with them… and they might respond to what they see as strange behavior by trying to nudge you into acting “normal”: ignoring surprising things you say, changing the topic to something familiar, starting an old fight, etc.
In most cases, I don’t think this is malice. It’s just that they need the scene to work. They don’t know how to interact with this “new you”, so they tug on their connection with you to pull you back into a role they recognize. If that fails, then they have to redefine who they are in relation to you — which often (but not always) happens eventually.
I would like the option to separate subscribing to posts and subscribing to comments. I mostly just want to subscribe to posts, because it is much easier to decide whether I want to read a post than a comment.
That is much clearer; I think you should have said it out loud in the post.
I have also mostly switched to browser bookmarks now, but I do think even this simple implementation of in-site bookmarks is overall good. Bookmarking in-site syncs across devices by default and provides more integrated information.
I want to be able to quickly see whether I have bookmarked a post, to avoid clicking into it (hence I suggested it be a badge rather than a button like in the Bookmarks tab). Especially with the new recommendation system that resurfaces old posts, I sometimes accidentally click on posts that I bookmarked months before.
This is like raw, n=1, personal feedback.
No, not really. I read it twice but couldn't bring myself to care. It seems you are going off on tangents and not actually talking directly about your technique. I could be wrong, but I also couldn't care enough to read into the sentences and understand what you're actually pointing at with all those words. Having a conclusion is nice because I jumped straight to it at first; it seems kind of too normal to justify the clickbait though. Overall I feel like I read some ramblings and didn't learn much.
I would suggest using less clickbaity titles on LessWrong
I would love to get a little bookmark symbol on the frontpage
Metaphor rebranded themselves. No and no, thanks for sharing though, will try it out!
Related: Replace yourself before you stop organizing your community.
I think this is an important skill to learn and an important failure mode to be aware of. There are so many nice things that are gone because the only person doing the thing just stopped one day.
Unplugging the charger and putting it in my bag (and the reverse) is a trivial inconvenience that has annoyed me many times.
I found that it is possible to get noticeably better search results than Google by using Kagi as the default and falling back to Exa (previously Metaphor).
Kagi is $10/mo though, with a 100-search trial. Kagi's default results are slightly better than Google's, and it also offers customization of results, which I haven't seen in other search engines.
Exa is free; it uses embeddings, and empirically it understands semantics way better than other search engines and provides very distinctive search results.
If you are interested in experimenting you can find more search engines in https://www.searchenginemap.com/ and https://github.com/The-Osint-Toolbox/Search-Engines
You can buy an even cheaper one from Taobao; this one is $20 before shipping (but I expect the buying experience to be quite complicated if you're outside China).
I pattern-matched onto "10 Times Scientists Admitted They Were Wrong" and thought you had made some sort of editing mistake; now I see what you're saying with the added quotes.