Bob Jacobs's Shortform
post by B Jacobs (Bob Jacobs) · 2020-06-01T19:40:37.367Z · LW · GW · 42 comments
Comments sorted by top scores.
comment by B Jacobs (Bob Jacobs) · 2020-08-18T15:45:31.234Z · LW(p) · GW(p)
I know LessWrong has become less humorous over the years, but this idea popped into my head when I made my bounty comment [LW(p) · GW(p)] and I couldn't stop myself from making it. Feel free to downvote this shortform if you want the site to remain a super serious forum. For the rest of you: here is my wanted poster for the reference class problem [LW · GW]. Please solve it, it keeps me up at night.
comment by B Jacobs (Bob Jacobs) · 2022-03-18T12:35:12.679Z · LW(p) · GW(p)
I have a mnemonic device for checking whether a model is Gears-like [? · GW] or not.
G E A R S:
Does a variable Generate Empirical Anticipations?
Can a variable be Rederived?
Is a variable hard to Substitute?
comment by B Jacobs (Bob Jacobs) · 2020-06-23T11:13:41.875Z · LW(p) · GW(p)
I was writing a post about how you can get more fuzzies (= personal happiness) out of your altruism, but decided that it would be better as a shortform. I know the general advice is to purchase your fuzzies and utilons separately [LW · GW], but if you're going to do altruism anyway, and there are ways to increase the happiness you get from doing so without sacrificing altruistic output, then I would argue you should try to increase that happiness. After all, if altruism makes you miserable you're less likely to do it in the future, and if it makes you happy you'll be more likely to do it in the future (and personal happiness is obviously good in general).
The most obvious way to do this is with conditioning, e.g. giving yourself a cookie or doing a hand-pump motion every time you donate. Since there's already a boatload of stuff written about conditioning I won't expand on it further. I then wanted to adapt the tips from Lukeprog's The Science of Winning at Life [? · GW] to this particular topic, but I don't really have anything to add, so you can probably just read it and apply it to doing altruism.
The only purely original piece of advice I wanted to give is to diversify your altruistic output. I found out there have already been defenses made of this concept, but I would like to give additional arguments. The primary one is that it will keep you personally and emotionally engaged with different parts of the world. When you invest something (e.g. time or money) into a cause you become more emotionally attached to that cause. So someone who only donates to malaria bednets will (on average) be less emotionally invested in deworming, even though these are both equally important projects. While I know on an intellectual level that donating 50 dollars to malaria bednets is better than donating 25 dollars, both will emotionally feel like a small drop in the ocean. When advancements in the cause get made I get to feel fuzzies that I contributed, but crucially these won't be twice as warm if I donated twice as much. But if I donate to separate causes (e.g. bednets and deworming), then for every advancement or milestone I get to feel fuzzies from two different causes (so twice as much).
This lessens the chance of you becoming a victim of the bandwagon effect (for a particular cause) or of the sunk-cost fallacy (if a cause you thought was effective turns out not to be very effective after all). It also keeps your worldview broad, so you neither become depressed when your single cause doesn't advance nor grow ignorant of the world at large. So if you do diversify, every victory in the other causes creates more happiness for you, allowing you to align yourself much better with the world's needs.
comment by B Jacobs (Bob Jacobs) · 2022-11-30T08:16:21.517Z · LW(p) · GW(p)
I tried a bit of a natural experiment to see if rationalists would be more negative towards an idea if it's called socialism vs. if it's called something else. I made two posts that are identical, except one calls it socialism right at the start, and one only reveals I was talking about socialism at the very end (perhaps it would've been better if I hadn't revealed it at all). The former I posted to LW, the latter I posted to the EA Forum.
I expected that the comments on LW would be more negative, that I would get more downvotes, and gave it a 50% chance the mods wouldn't even promote it to the frontpage on LW (but would on the EA Forum).
The comments were more negative on LW. I did get more downvotes, but I also got more upvotes and more karma overall (12 karma from 19 votes on the EA Forum and 27 karma from 39 votes on LW). Posts tend to get more karma on LW, but the difference is big enough that I consider my prediction to be wrong. Lastly, the LW mods did end up promoting it to the frontpage, but it took a very long time (maybe they had a debate about it).
Overall, while rationalists are more negative towards socialist ideas that are called socialist, they aren't as negative as I expected, and I will update accordingly.
Replies from: Viliam, Raemon, Dagon
↑ comment by Viliam · 2022-12-01T07:35:29.632Z · LW(p) · GW(p)
My problem with calling things "socialist" is that the word is typically used in a motte-and-bailey fashion: "seizing the means of production, centralized planning" vs. "cooperating, helping each other". (Talking about the latter, but in a way that makes an applause light of the former.) This is analogous to "religion" meaning either "following the commandments in ancient books literally, obeying religious leaders" or "perceiving beauty in the universe, helping each other". Neither socialists nor Christians invented the concept of human cooperation.
More meta: if other people also have a problem with clarity of thought/communication, this should be a greater concern for the LW audience than for the EA audience, given the different focus of the websites.
↑ comment by Raemon · 2022-11-30T09:02:39.302Z · LW(p) · GW(p)
Just wanted to say I think this was an interesting experiment to run. (I’m not sure I think the data here is clean enough to directly imply anything, since among other things EAF and LW have different audiences. But, still think this was a neat test of the mods)
↑ comment by Dagon · 2022-11-30T17:08:07.787Z · LW(p) · GW(p)
I was one of the downvotes you predicted, but I didn't react as negatively as I expected to. I suspect I'd have been roughly as critical of "democratization" - it's a word that can mean many many different things, and the article, while long and somewhat interesting, didn't actually match either title.
Fun experiment, and mostly I'm surprised that there's so little overlap between the sites that nobody pointed out the duplicate, which should have been a crosspost.
comment by B Jacobs (Bob Jacobs) · 2020-06-05T11:18:38.977Z · LW(p) · GW(p)
Continuing my streak of hating on terms this community loves [LW · GW].
I hate the term 'motte-and-bailey'. Not because the fallacy itself isn't real, but because by invoking it you are indirectly accusing your interlocutor of switching definitions on purpose. In my experience this is almost always an accident, but even if it weren't, you still shouldn't immediately brand your interlocutor as malicious. I propose we use the term 'defiswitchion' (combining 'definition' and 'switch'), since it is actually descriptive, easier to understand for people hearing it for the first time, and doesn't indirectly accuse your interlocutor of using dirty debate tactics.
Replies from: gworley, Dagon
↑ comment by Gordon Seidoh Worley (gworley) · 2020-06-05T21:45:22.519Z · LW(p) · GW(p)
I think you're pointing to something that is a fully general problem with reasoning biases and logical fallacies: if people know about it, they might take you pointing out that they're doing it as an attack on them rather than noticing they may be inadvertently making a mistake.
Replies from: Bob Jacobs
↑ comment by B Jacobs (Bob Jacobs) · 2020-06-05T22:24:39.072Z · LW(p) · GW(p)
To quote myself from the other comment:
Getting it pointed out to you that you used an invalid argument still stings, but doesn't sour a debate nearly as much as your interlocutor accusing you of active sabotage.
We cannot make the process painless, but we can soften the blow. I think not comparing someone's argument to a sneaky military maneuver might be a good start.
↑ comment by Dagon · 2020-06-05T15:56:38.258Z · LW(p) · GW(p)
I'll stick with motte-and-bailey (though actually, I use "bait-and-switch" more often). In my experience, most of the time it's worth pointing out to someone, it _is_ intentional, or at least negligent. Very often this is my response to someone repeating a third-party argument point, and the best thing to do is to be very explicit that it's not valid.
I'll argue that the accusation is inherent in the thing. Introducing the term "defiswitchion" requires that you explain it, and it _still_ accuses your correspondent of sloppy or motivated unclarity.
Replies from: Bob Jacobs
↑ comment by B Jacobs (Bob Jacobs) · 2020-06-05T20:33:44.165Z · LW(p) · GW(p)
Defiswitchion just describes what happened without implying ill will. Motte-and-bailey was an actual military strategy, meaning you frame the debate as a battle in which your interlocutor is acting aggressively. Bait-and-switch is arguably even worse at implying malicious intent. Getting it pointed out to you that you used an invalid argument still stings, but it doesn't sour a debate nearly as much as your interlocutor accusing you of active sabotage. Most people don't even know what ad hominem means, let alone how to construct complicated rhetorical techniques. But that doesn't matter, because you should always extend the principle of charity to someone anyway.
comment by B Jacobs (Bob Jacobs) · 2020-06-07T16:20:59.919Z · LW(p) · GW(p)
I just realized that coherence value theory can actually be read as a warning about inadequate equilibria in your worldview. If you lose epistemic value because a new (but more accurate) belief doesn't fit with your old beliefs, then you have to reject it, meaning you can get stuck with an imperfect belief system (e.g. a religion). In other words: coherentism works better if you have slack in your belief system [LW · GW] (this is one of my favorite LW posts, highly recommended).
comment by B Jacobs (Bob Jacobs) · 2020-06-06T11:30:03.911Z · LW(p) · GW(p)
The concepts of unitarism, centralization, authoritarianism, and top-down control all seem to convey the same thing, but when you look closely you start to notice the small differences. Why then do we think these concepts are similar? Because implementing them has the same effect: high risk and high reward.
We tend to think that authoritarianism etc. are inherently bad, but that's only because the ways you can screw up a situation vastly outnumber the ways in which you can improve a situation. When Scott Alexander talks about an AI overlord taking away your freedom, many people get squeamish. But while the worst dystopias will be top-down, so will the best utopias. Identifying yourself as inherently pro big-government or anti big-government seems rather counterproductive. It might be better to ask yourself what this particular top-down system is doing at this specific moment in time and temporarily choose a position on the pro-anti scale, even if people accuse you of having no ideology (you don't need to have an ideology for everything as long as you have an underlying philosophy).
comment by B Jacobs (Bob Jacobs) · 2020-06-01T19:40:37.765Z · LW(p) · GW(p)
With climate change getting worse by the day, we need to switch to sustainable energy sources sooner rather than later. The new molten salt reactors are small, clean, and safe, but still carry the stigma of nuclear energy. Since these reactors (like others) can use old nuclear waste as a fuel source, I suggest we rebrand them as "Nuclear Waste Eaters" and give them (or a company that makes them) a logo in the vein of this quick sketch I made: https://postimg.cc/jWy3PtjJ
Hopefully a rebranding to "thing getting rid of the thing you hate, also did you know it's clean and safe" will get people more motivated for these kinds of energy sources.
comment by B Jacobs (Bob Jacobs) · 2020-07-18T16:49:22.163Z · LW(p) · GW(p)
This is a short argument against subjective idealism. Since I don't think there are (m)any subjective idealists on this site, I've decided to make it a shortform rather than a full post.
We don't know how big reality really is. Most people agree that it isn't smaller than the things you perceive, because if I have a perception of something, the perception exists. Subjective idealism says that only the perceptions are real and the things outside of our perception don't exist:
But if you're not infinitely certain [LW · GW] that subjective idealism is correct, then you have to at least assign some probability that a different model of reality (e.g. your perception + one other category of things) is true:
But of course there are many other types of models that could also be true:
In fact the other models outnumber subjective idealism infinity to one, making it seem more probable that things outside your immediate perception exist.
(I don't think this argument is particularly strong in itself, but it could be used to strengthen other arguments.)
Replies from: mr-hire, Richard_Kennaway, Vladimir_Nesov
↑ comment by Matt Goldenberg (mr-hire) · 2020-07-20T13:22:20.619Z · LW(p) · GW(p)
It bothers me that there are no red things in your perception in any of your pictures.
Replies from: Bob Jacobs
↑ comment by B Jacobs (Bob Jacobs) · 2020-07-23T20:03:03.577Z · LW(p) · GW(p)
I already mentioned in the post:
Most people agree that it isn't smaller than the things you perceive, because if I have a perception of something, the perception exists
Obviously you can hallucinate a bear without there being a bear, but the hallucination of the bear would exist (according to most people). There are models that say that even sense data does not exist but those models are very strange, unpopular and unpersuasive (for me and most other people). But if you think that both the phenomenon and the noumenon don't exist, then I would be interested in hearing your reasons for that conclusion.
Replies from: mr-hire
↑ comment by Matt Goldenberg (mr-hire) · 2020-07-23T20:11:13.714Z · LW(p) · GW(p)
The biggest world where that's the case for me is some form of the malicious demon argument.
- I can make mistakes when doing things like adding 297 + 972, and forget to carry a one.
- Could there be a malicious demon that always makes me make the same mistakes? So I really believe the logical answer is 1296, because every time I check using different procedures, I get the same answer?
- Could the same malicious demon then make me make a separate mistake, so I believed that 2+2 =5? It just has to be a bigger mistake that I make every time, doesn't seem different in kind than the previous thought.
- Logically, my experience exists because that's a priori the definition of existence. But couldn't the same malicious demon make me believe that was logically sound, while actually there's some error that I was making every time to draw that conclusion? Again, that doesn't seem very different in kind than believing 2+2=5.
- In the space of all possible minds, is it possible there are some that have a malicious demon baked in? If mine were one, how would I know?
↑ comment by B Jacobs (Bob Jacobs) · 2020-07-24T17:20:03.920Z · LW(p) · GW(p)
Yes, the malicious demon was also the model that sprang to my mind. To answer your question: there are certainly possible minds that have "demons" (or faulty algorithms) that make finding their internal mistakes impossible (though my current model thinks that evolution wouldn't allow those minds to live for very long). This argument has the same feature as the simulation argument, in that any counterargument can be countered with "But what if the simulation/demon wants you to think that?". I don't have any real solution for this except to say that it doesn't really matter for our everyday life and we shouldn't put too much energy into trying to counter the uncounterable (but that feels kinda lame tbh).
Replies from: mr-hire
↑ comment by Matt Goldenberg (mr-hire) · 2020-07-24T17:58:32.795Z · LW(p) · GW(p)
I don't have any real solution for this except to say that it doesn't really matter for our everyday life and we shouldn't put too much energy into trying to counter the uncounterable (but that feels kinda lame tbh).
I think this is true in everyday life, but not true when you're doing philosophy of mind like in the above post. I don't think any of your argument is wrong, I just think you should include the possibility that your observations don't exist in your reasoning.
Replies from: Bob Jacobs
↑ comment by B Jacobs (Bob Jacobs) · 2020-07-24T19:35:54.788Z · LW(p) · GW(p)
Well, to be fair, this was just a short argument against subjective idealism with three pictures to briefly illustrate the point; it was not (nor did it claim to be) a comprehensive list of all the possible models in the philosophy of mind (otherwise I would also have to include pictures with the perception being red and the outside being green, or half being green no matter where they are, or everything being red, or everything being green, etc.).
Replies from: mr-hire
↑ comment by Matt Goldenberg (mr-hire) · 2020-07-24T22:42:34.886Z · LW(p) · GW(p)
Yes, that's fair. This was definitely a nitpicky request.
↑ comment by Richard_Kennaway · 2020-07-18T21:33:07.511Z · LW(p) · GW(p)
Isn't this a universal argument against everything? "There are so many other things that might be true, so how can you be sure of this one?"
Replies from: TAG
↑ comment by TAG · 2020-07-20T17:19:28.502Z · LW(p) · GW(p)
It's a valid argument, too.
In Already In Motion [actually not, but somewhere] EY noticed that epistemic processes start from unjustified assumptions, and concluded that, even so, you should carry on as before. But even though scepticism doesn't motivate you to switch to different beliefs, it should motivate you to be less certain of everything.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2020-07-21T07:28:20.602Z · LW(p) · GW(p)
Less certain than what, though? That's an update you make once only, perhaps in childhood, when you first wake up to the separation between perceptions and the outside world, between beliefs and perceptions, and so on up the ladder of abstraction.
Replies from: TAG
↑ comment by Vladimir_Nesov · 2020-07-18T20:48:40.249Z · LW(p) · GW(p)
It's not clear what "subjective idealism is correct" means, because it's not clear what "a given thing is real" means (at least in the context of this thread). It should be more clear what a claim means before it makes sense to discuss levels of credence in it.
If we are working with credences assigned to hypotheticals, the fact that the number of disjoint hypotheticals incompatible with some hypothetical S is large doesn't in itself make them (when considered altogether) more probable than S. (A sum of an infinite number of small numbers can still be small.)
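For instance, with one hypothetical assignment of credences (numbers invented purely for illustration): if $P(S) = 0.9$ and the infinitely many rival hypotheticals get $P(H_n) = 0.1 \cdot 2^{-n}$ for $n = 1, 2, 3, \ldots$, then
$$\sum_{n=1}^{\infty} P(H_n) = 0.1 \sum_{n=1}^{\infty} 2^{-n} = 0.1 < 0.9 = P(S),$$
so "infinitely many alternatives" does not by itself outweigh S.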
Working with credences in hypotheticals is not the only possible way to reason. If we are talking about weird things like subjective idealism, assumptions about epistemics are not straightforward and should be considered.
Replies from: Bob Jacobs
↑ comment by B Jacobs (Bob Jacobs) · 2020-07-18T22:23:48.610Z · LW(p) · GW(p)
You are correct: this argument only works if you have a specific epistemic framework and a specific subjective idealist framework, which might not coincide in most subjective idealists. I only wrote it down because I happened to have used this argument successfully against someone with this framework (and I also liked the visualization I made for it). I didn't want to go into what "a given thing is real" means because it's a giant can of philosophical worms and I try to keep my shortforms short. Needless to say, this argument works with some philosophical definitions of "real" but not others. So, as I said, this argument is pretty weak in itself and can only be used in certain situations in conjunction with other arguments.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2020-07-18T23:20:37.058Z · LW(p) · GW(p)
(I think making arguments clear is more meaningful than using them for persuasion.)
Replies from: Bob Jacobs
↑ comment by B Jacobs (Bob Jacobs) · 2020-07-19T08:56:56.478Z · LW(p) · GW(p)
This goes without saying, and I apologize if I gave the impression that people should use this argument and its visualization to persuade rather than to explain.
comment by B Jacobs (Bob Jacobs) · 2020-06-17T15:31:45.522Z · LW(p) · GW(p)
QALYs are an imperfect metric because (among other things) an action that has an immediately apparent positive effect might have far-off negative effects. I might, for example, cure the disease of someone whose actions lead directly to World War 3. I could argue that we should use QALYs (or something similar to QALYs) as the standard metric for a country's success instead of GDP, but just like with GDP you are missing the far-future value.
One metric I could think of: calculate a country's increase in its citizens' immediately apparent QALYs, without pretending we can calculate all the ripple effects, and then divide this number by the country's ecological footprint. But there are metrics for other far-off risks too, like nuclear weapon yield or the percentage of GDP spent on the development of autonomous weapons. I'm also not sure how good QALYs are at measuring mental health. Should things like leisure, social connections, and inequality get their own metrics? How do we balance them all?
I've tried to make some sort of justifiable metric for myself, but it's just too complex, time-consuming, and above my capabilities. Anyone got a better system?
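For what it's worth, here is a minimal sketch of the kind of composite score I was fumbling towards. Every input, weight, and penalty term below is an invented placeholder, not a proposal:
```python
# A toy composite "country progress" score: immediately apparent QALY gains,
# scaled by ecological footprint and crudely penalized for far-off risks.
# All numbers, weights, and field names are made-up placeholders.

def country_score(qaly_gain: float,
                  ecological_footprint: float,
                  nuclear_yield_megatons: float = 0.0,
                  autonomous_weapons_gdp_share: float = 0.0,
                  risk_weight: float = 0.5) -> float:
    """QALY gain per unit of ecological footprint, discounted by risk proxies."""
    base = qaly_gain / ecological_footprint
    risk_penalty = 1.0 + risk_weight * (
        nuclear_yield_megatons / 1000.0 + autonomous_weapons_gdp_share
    )
    return base / risk_penalty

# Made-up example: 2 million QALYs gained, 350 million hectares of footprint,
# a 50-megaton arsenal, and 0.1% of GDP spent on autonomous weapons.
print(country_score(2_000_000, 350_000_000, 50, 0.001))
```
Even this toy version makes the problem obvious: the 0.5, the 1000, and the choice of which risks to include are all arbitrary judgment calls, which is exactly where I got stuck.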
Replies from: Dagon
↑ comment by Dagon · 2020-06-17T16:46:57.265Z · LW(p) · GW(p)
IMO, it's better to identify the set of metrics you want to optimize on various dimensions. Trying to collapse it into one loses a lot of information and accelerates Goodhart.
GDP is a fine component to continue to use - it's not sufficient, but it does correlate fairly well with commercial production of a region. Adding in QALs (it's annual, so automatically in years; you only need to report quality-adjusted lives to compare year-to-year) doesn't seem wrong to me.
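To illustrate the "set of metrics" idea with invented dimensions and numbers (a sketch, not a proposal):
```python
# A toy dashboard: track each dimension separately and compare year-to-year,
# instead of collapsing everything into one weighted index that can be Goodharted.
# Dimension names and values are invented.
this_year = {
    "gdp_per_capita_usd": 42_000,
    "quality_adjusted_lives": 9_800_000,
    "ecological_footprint_gha": 350_000_000,
    "mean_self_reported_wellbeing_0_to_10": 7.1,
}
last_year = {
    "gdp_per_capita_usd": 41_200,
    "quality_adjusted_lives": 9_750_000,
    "ecological_footprint_gha": 355_000_000,
    "mean_self_reported_wellbeing_0_to_10": 7.0,
}

# Report per-dimension changes; no single total is ever computed.
changes = {k: this_year[k] - last_year[k] for k in this_year}
print(changes)
```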
Replies from: mr-hire
↑ comment by Matt Goldenberg (mr-hire) · 2020-06-17T20:54:10.509Z · LW(p) · GW(p)
Another option is to just try to optimize for the thing directly, without specifying a measurement. Then come up with ad-hoc measurements that make sense for any given situation to make sure you're on track.
There's obviously a cost to doing this, but also benefits.
Replies from: Dagon
↑ comment by Dagon · 2020-06-19T15:42:07.346Z · LW(p) · GW(p)
Without measurements, it's hard to know that "the thing" you're optimizing for is actually a thing at all, and nearly impossible to know if it's the same thing as someone else is optimizing for.
Agreed that you shouldn't lose sight that the measurements are usually proxies and reifications of what you want, and you need to periodically re-examine which measures should be replaced when they're no longer useful for feedback purposes. But disagreed that you can get anywhere with no measurements. Note that I think of "measurement" as a fairly broad term - any objective signal of the state of the world.
Replies from: mr-hire
↑ comment by Matt Goldenberg (mr-hire) · 2020-06-19T16:26:24.220Z · LW(p) · GW(p)
Surely you can use your own intuitions? That will capture much more data than any individual measurement.
I do agree that it's harder to make sure people are optimizing for the same things in this case, especially as an organization goes over successive Dunbar Numbers.
This is one reason that vibing is important [LW(p) · GW(p)].
Replies from: Dagon
↑ comment by Dagon · 2020-06-19T20:16:22.282Z · LW(p) · GW(p)
Surely you can use your own intuitions?
Of course. Understanding and refining your intuitions is a critical part of this ("this" being rational goal definition and pursuit). And the influence goes in both directions - measurements support or refute your intuitions, and your intuitions guide what to measure and how precisely. I'll argue that this is true intrapersonally (you'll have conflicting intuitions, and it'll require measurement and effort to understand their limits), as well as for sub- and super-dunbar groups.
I don't think I understand "vibing" well enough to know if it's any different than simply discussing things at multiple different levels of abstraction.
Replies from: mr-hire
↑ comment by Matt Goldenberg (mr-hire) · 2020-06-19T20:52:32.081Z · LW(p) · GW(p)
- measurements support or refute your intuitions, and your intuitions guide what to measure and how precisely. I'll argue that this is true intrapersonally (you'll have conflicting intuitions, and it'll require measurement and effort to understand their limits), as well as for sub- and super-dunbar groups.
Yes, my point being, a valid approach for certain projects is to use your intuitions to guide you, and then use ad-hoc measurements at various points to ensure your intuitions are doing well.
comment by B Jacobs (Bob Jacobs) · 2020-06-11T18:23:35.480Z · LW(p) · GW(p)
This design is so ubiquitous among atheist groups that it has become the unofficial symbol for atheism. I think the design is very pretty, but I still don't like it. When people ask me why I think atheism isn't a religion, I can say that atheism doesn't have rituals, symbols, doctrines, etc. When there is a symbol for atheism, that weakens this argument significantly. I get that many people who are leaving religion want to find a new ingroup, but I would prefer it if they used the symbol of secular humanism instead of this one.
Replies from: Pattern, ChristianKl
↑ comment by Pattern · 2020-06-13T07:04:39.339Z · LW(p) · GW(p)
When people ask me why I think atheism isn't a religion
First of all, "it" is.
Secondly, theism isn't a religion.
Replies from: Bob Jacobs
↑ comment by B Jacobs (Bob Jacobs) · 2020-06-13T10:09:16.749Z · LW(p) · GW(p)
Neither theism nor atheism is a religion. See also this video (5:50)
↑ comment by ChristianKl · 2020-06-11T20:42:15.895Z · LW(p) · GW(p)
If someone wants to have an atheism group and not a secular humanism group, it makes sense to actually use a symbol that stands for that idea.
The core idea of having something like an atheism group is that there's a shared belief, and that atheism is more than just the absence of something that doesn't really matter.