Posts

How do I get rid of the ungrounded assumption that evidence exists? 2020-10-15T08:02:07.893Z · score: 5 (2 votes)
Sortition Model of Moral Uncertainty 2020-10-08T17:44:11.208Z · score: 8 (3 votes)
A Toy Model of Hingeyness 2020-09-07T17:38:59.826Z · score: 16 (5 votes)
2020 LessWrong Demographics Survey Results 2020-07-13T13:53:47.700Z · score: 15 (6 votes)
Hierarchy of Evidence 2020-07-11T12:54:27.536Z · score: 7 (3 votes)
Measuring Meta-Certainty 2020-07-05T21:15:22.075Z · score: 8 (4 votes)
By what metric do you judge a reference class? 2020-06-15T18:34:18.262Z · score: 7 (4 votes)
2020 LessWrong Demographics Survey 2020-06-11T20:05:41.859Z · score: 20 (8 votes)
Bob Jacobs's Shortform 2020-06-01T19:40:37.367Z · score: 3 (1 votes)
Reexamining The Dark Arts 2020-06-01T14:11:47.647Z · score: 7 (10 votes)
Should we stop using the term 'Rationalist'? 2020-05-29T15:11:18.329Z · score: 11 (13 votes)
Updated Hierarchy of Disagreement 2020-05-28T15:57:57.570Z · score: 14 (7 votes)
Why aren’t we testing general intelligence distribution? 2020-05-26T16:07:30.833Z · score: 24 (13 votes)
A Taijitu symbol for Moloch and Slack 2020-05-25T20:03:44.447Z · score: 78 (29 votes)
Nihilism doesn't matter 2020-05-21T18:19:14.259Z · score: 6 (5 votes)
[Meta] Three small suggestions for the LW-website 2020-05-20T11:18:38.930Z · score: 9 (7 votes)
A Problem With Patternism 2020-05-19T20:16:54.835Z · score: 5 (4 votes)
Making a Crowdaction platform 2020-05-16T16:08:11.383Z · score: 20 (10 votes)
Meta-Preference Utilitarianism 2020-02-04T20:24:36.814Z · score: 10 (5 votes)

Comments

Comment by bob-jacobs on How do I get rid of the ungrounded assumption that evidence exists? · 2020-10-15T11:25:15.824Z · score: 1 (1 votes) · LW · GW

There's evidence in the form of observations of events outside the cartesian boundary. There's evidence in internal process of reasoning, whose nature depends on the mind.

My previous comment said:

both empirical and tautological evidence

With "empirical evidence" I meant "evidence in the form of observations of events outside the cartesian boundary" and with "tautological argument" I meant "evidence in internal process of reasoning, whose nature depends on the mind".

When doing math, evidence comes up more as a guide to intuition than anything explicitly considered. There are also metamathematical notions of evidence, rendering something evidence-like clear.

Yes, but they are both "information that indicates whether a belief is more or less valid". A mathematical proof is also evidence, so they have the same structure. Do you have a way to ground them? Or if you somehow have a way to ground one form of proof but not the other, could you share just the one? (Since the structure is the same, I suspect the grounding of one could also be applied to the other.)

EDIT: Based on the reply I think it's fair to say that this discussion is going around in circles. I'm not sure why you're not interested in engaging with my definition (or questions), but since this is rather unproductive for both of us I have elected to stop commenting.

Comment by bob-jacobs on How do I get rid of the ungrounded assumption that evidence exists? · 2020-10-15T08:53:02.401Z · score: 1 (1 votes) · LW · GW

I meant both empirical and tautological evidence, that is, general information that indicates whether a belief is more or less valid. When you say that you can keep track of truth, why do you believe you can? What is that truth based on? Evidence?

Comment by bob-jacobs on A Toy Model of Hingeyness · 2020-09-09T14:20:22.813Z · score: 1 (1 votes) · LW · GW

It might be interesting to distinguish between "personal hingeyness" and "utilitarian hingeyness". Humans are not utilitarians, so we care mostly about what happens in our own lives: when we die, our personal tree stops and we can't get more hinges. But "utilitarian hingeyness" continues, since it describes all possible utility. I made this with population ethics in mind, but you could totally use the same concept for your personal life; the most hingey time for you and the most hingey time for everyone will just be different.

I'm not sure I understand your last paragraph, because you didn't clarify what you mean by the word "hingeyness". If you meant "the range of the total amount of utility you can potentially generate" (aka hinge broadness) or "the amount by which that range shrinks" (aka hinge reduction): it is possible to draw a tree where the first tick of an 11-tick tree has just as broad a range as an option in the 10th tick. So the hinge broadness and the hinge reduction in the 10th tick can be just as big as in the 1st tick, but not bigger. I don't think you're talking about "hinge shift", but maybe you were talking about hinge precipiceness instead, in which case: yes, that can totally be bigger in the 10th tick.

Comment by bob-jacobs on A Toy Model of Hingeyness · 2020-09-09T13:16:12.956Z · score: 1 (1 votes) · LW · GW

If in the first image we replace the 0 with a -100 (much wider), what happens? The number of endings for 1 is still larger than for 3. The number of branches for 1 is still larger than for 3. The width of the range of possible ending utilities is [-100 to 8] for 1 and [-100 to 6] for 3 (smaller). The width of the range of the total amount of utility you could generate over the future branches is [1->3->-100 = -96 up to 1->2->8 = 11] for 1 and [3->-100 = -97 up to 3->6 = 9] for 3 (smaller). Is this a good example of what you're trying to convey? If not, could you draw an example tree to show me what you mean?

Comment by bob-jacobs on A Toy Model of Hingeyness · 2020-09-08T15:12:43.615Z · score: 1 (1 votes) · LW · GW

Ending in negative numbers wouldn't change anything. The number of endings will still shrink, the number of branches will still shrink, the range of possible ending utilities will still shrink or stay the same width, and the range of the total amount of utility you could generate over the future branches will also shrink or stay the same width. Try it! Replace any number in any of my models with a negative number, or draw your own model, and see what happens.

Comment by bob-jacobs on A Toy Model of Hingeyness · 2020-09-08T09:53:16.474Z · score: 1 (1 votes) · LW · GW

If we draw a tree of all possible timelines (and there is an end to the tree), the earlier choices will always have more branches sprouting out from them. If we are purely looking at the possible endings, then the 1 in the first image has a range of 4 possible endings, while 2 only has 2 possible endings. If we're looking at branches, then the 1 has a range of 6 possible branches, while 2 only has 2 possible branches. If we're looking at ending utility, then 1 has a range of [0-8] while 2 only has [7-8]. If we're looking at the range of possible utility you can experience, then 1 has a range from 1->3->0 = 4 utility all the way up to 1->2->8 = 11 utility, while 2 only has 2->7 = 9 up to 2->8 = 10.

When we talk about the utility of endings it is possible that the range doesn't change. For example:

(I can't post images in comments so here is a link to the image I will use to illustrate this point)

Here the "range of utility in endings" tick 1 has (the first 10) is [0-10] and the range of endings the first 0 has (tick 2) is [0-10] which is the same. Of course the probability has changed (getting an ending of 1 utility is not even an option anymore), but the minimum and maximum stay the same.

Now the width of the range of the total amount of utility you could potentially experience can also stay the same. For example, the lowest total utility tick 1 can experience is 10->0->0 = 10 utility and the highest is 10->0->10 = 20 utility: a difference of 10 utility. The lowest total utility that the 0 on tick 2 can experience is 0->0 = 0 utility and the highest is 0->10 = 10 utility, which is once again a difference of 10 utility. The probability has changed (ending with a weird number like 19 is impossible for tick 2), and the range has also shifted downwards from [10-20] to [0-10], but the width stays the same.

It just occurred to me that some people may find the shift in range also important for hingeyness. Maybe call that 'hinge shift'?

Crucially, in none of these definitions is it possible to end up with a wider range later down the line than when you started.
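
If you want to check these numbers yourself, here is a minimal Python sketch of the tree from the first image, as described above (the tuple representation and function names are just illustrative, not from the post); replace the 0 with -100 to check the numbers from my other comment as well:

```python
# Toy hingeyness tree: each node is (utility_at_this_tick, [children]).
# Node 1 branches into 2 and 3; 2 branches into the endings 7 and 8;
# 3 branches into the endings 0 and 6.
tree = (1, [(2, [(7, []), (8, [])]),
            (3, [(0, []), (6, [])])])

def endings(node):
    """All ending (leaf) utilities reachable from this subtree."""
    utility, children = node
    if not children:
        return [utility]
    return [e for child in children for e in endings(child)]

def branches(node):
    """Number of branches (edges) sprouting out of this subtree."""
    utility, children = node
    return len(children) + sum(branches(child) for child in children)

def totals(node):
    """All possible total utilities, summed along root-to-leaf paths."""
    utility, children = node
    if not children:
        return [utility]
    return [utility + t for child in children for t in totals(child)]

node_2 = tree[1][0]  # the subtree starting at the 2
for label, node in [("1", tree), ("2", node_2)]:
    e, t = endings(node), totals(node)
    print(f"node {label}: {len(e)} endings, {branches(node)} branches, "
          f"ending range [{min(e)}-{max(e)}], total range [{min(t)}-{max(t)}]")
# node 1: 4 endings, 6 branches, ending range [0-8], total range [4-11]
# node 2: 2 endings, 2 branches, ending range [7-8], total range [9-10]
```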

Comment by bob-jacobs on Bob Jacobs's Shortform · 2020-08-18T15:45:31.234Z · score: 11 (4 votes) · LW · GW

I know LessWrong has become less humorous over the years, but this idea popped into my head when I made my bounty comment and I couldn't stop myself from making it. Feel free to downvote this shortform if you want the site to remain a super serious forum. For the rest of you: here is my wanted poster for the reference class problem. Please solve it, it keeps me up at night.

Comment by bob-jacobs on Multitudinous outside views · 2020-08-18T14:17:21.968Z · score: 3 (2 votes) · LW · GW

Thanks for replying to my question, but although this was nicely written it doesn't really solve the problem. So I'm putting up a $100 bounty for anyone on this site (or outside it) who can solve this problem by the end of next year. (I don't expect it will work, but it might motivate some people to start thinking about it).

Comment by bob-jacobs on Calibration Practice: Retrodictions on Metaculus · 2020-08-03T10:34:32.295Z · score: 1 (1 votes) · LW · GW

I've touched on this before, but it would be wise to take your meta-certainty into account when calibrating. It wouldn't be hard for me to claim 99.9% accurate calibration by just making a bunch of very easy predictions (an extreme example would be buying a bunch of different dice and making predictions about how they're going to roll). My post goes into more detail, but TL;DR: by trying to predict how accurate your predictions are going to be, you can start to distinguish between "harder" and "easier" phenomena. This makes it easier to compare different people's calibration and allows you to check how good you really are at making predictions.
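
To make the dice example concrete, here is a quick Python sketch (a toy illustration; the numbers are made up for the example) of how a pile of very easy predictions yields near-perfect calibration without demonstrating any forecasting skill:

```python
import random

random.seed(0)

# "Easy" predictions: for each roll of a fair die, predict
# "this roll will not be a six" with confidence 5/6.
results = []
for _ in range(10_000):
    roll = random.randint(1, 6)
    results.append(roll != 6)

stated_confidence = 5 / 6
observed_frequency = sum(results) / len(results)
print(f"stated: {stated_confidence:.3f}, observed: {observed_frequency:.3f}")
# The two numbers nearly coincide, so the calibration looks excellent,
# yet it says nothing about performance on genuinely hard questions.
```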

Comment by bob-jacobs on mAIry's room: AI reasoning to solve philosophical problems · 2020-07-30T17:01:01.774Z · score: 1 (1 votes) · LW · GW

I can also "print my own code": if I made a future version of an MRI scan, I could give you all the information necessary to understand (that version of) me, but as soon as I looked at it my neurological patterns would change. I'm not sure what you mean by "add something to it", but I could also give you a copy of my brain scan and add something to it. Humans and computers can of course know a summary of themselves, but never the full picture.

Comment by bob-jacobs on mAIry's room: AI reasoning to solve philosophical problems · 2020-07-29T21:45:56.652Z · score: 1 (1 votes) · LW · GW

An annoying philosopher would ask whether you could glean knowledge of your "meta-qualia", aka what it consciously feels like to experience what something feels like. The problem is that fully understanding our own consciousness is sadly impossible. If a computer discovers that in a certain location on its hardware it has stored a picture of a dog, it must then store that information somewhere else; but if it subsequently tries to know everything about itself, it must store that knowledge of the knowledge of the picture's location somewhere else, which it must also learn. This repeats in a loop until the computer crashes. An essay can fully describe most things but not itself: "The author starts the essay by writing that he starts the essay by writing that...". So annoyingly there will always be experiences that are mysterious to us.

Comment by bob-jacobs on Billionaire Economics · 2020-07-29T11:05:29.967Z · score: 1 (1 votes) · LW · GW

I was not referring to the 'billionaires being universally evil' part, but to the 'what progressives think' part.

Comment by bob-jacobs on Billionaire Economics · 2020-07-29T10:43:35.442Z · score: 3 (2 votes) · LW · GW

I was talking about the "as progressives think" part.

Comment by bob-jacobs on Billionaire Economics · 2020-07-28T09:54:17.067Z · score: 5 (4 votes) · LW · GW
billionaires really are universally evil just as progressives think

Could you please add a quantifier when you make assertions about plurals? You can make any group sound dumb/evil by omitting it. E.g. I can make atheists sound evil by saying the truthful statement "Atheists break the law", but that's only because I didn't add a quantifier like "all", "most", "at least one", "a disproportionate number", etc.

Comment by bob-jacobs on Hierarchy of Evidence · 2020-07-26T18:40:33.897Z · score: 2 (2 votes) · LW · GW

And by what metric do you separate the competent experts from the non-competent experts? I also prefer listening to experts because they can explain vast amounts of things in "human" terms, inform me how different things interact and subsequently answer my specific questions. It's just that for any single piece of information you'd rather have a meta-analysis backing you up than an expert opinion.

Comment by bob-jacobs on Hierarchy of Evidence · 2020-07-26T18:17:00.412Z · score: 1 (1 votes) · LW · GW

Thanks, fixed it for all the files (and made some other small changes)

Comment by bob-jacobs on Bob Jacobs's Shortform · 2020-07-24T19:35:54.788Z · score: 1 (1 votes) · LW · GW

Well, to be fair, this was just a short argument against subjective idealism with three pictures to briefly illustrate the point; it was not (nor did it claim to be) a comprehensive list of all the possible models in the philosophy of mind (otherwise I would also have to include pictures with the perception being red and the outside being green, or half being green no matter where they are, or everything being red, or everything being green, etc.).

Comment by bob-jacobs on Bob Jacobs's Shortform · 2020-07-24T17:20:03.920Z · score: 1 (1 votes) · LW · GW

Yes, the malicious demon was also the model that sprang to my mind. To answer your question: there are certainly possible minds that have "demons" (or faulty algorithms) that make finding their internal mistakes impossible (but my current model thinks that evolution wouldn't allow those minds to live for very long). Although this argument has the same feature as the simulation argument, in that any counterargument can be countered with "But what if the simulation/demon wants you to think that?". I don't have any real solution for this, except to say that it doesn't really matter for our everyday life and we shouldn't put too much energy into trying to counter the uncounterable (but that feels kinda lame, tbh).

Comment by bob-jacobs on Bob Jacobs's Shortform · 2020-07-23T20:03:03.577Z · score: 1 (1 votes) · LW · GW

I already mentioned in the post:

Most people agree that it isn't smaller than the things you perceive, because if I have a perception of something the perception exists

Obviously you can hallucinate a bear without there being a bear, but the hallucination of the bear would exist (according to most people). There are models that say that even sense data does not exist but those models are very strange, unpopular and unpersuasive (for me and most other people). But if you think that both the phenomenon and the noumenon don't exist, then I would be interested in hearing your reasons for that conclusion.

Comment by bob-jacobs on Bob Jacobs's Shortform · 2020-07-19T08:56:56.478Z · score: 1 (1 votes) · LW · GW

This goes without saying, and I apologize if I gave the impression that people should use this argument and its visualization to persuade rather than to explain.

Comment by bob-jacobs on Bob Jacobs's Shortform · 2020-07-18T22:23:48.610Z · score: 1 (1 votes) · LW · GW

You are correct, this argument only works if you have a specific epistemic framework and a subjective idealistic framework, which might not coincide in most subjective idealists. I only wrote it down because I happened to have used this argument successfully against someone with this framework (and I also liked the visualization I made for it). I didn't want to go into what "a given thing is real" means, because it's a giant can of philosophical worms and I try to keep my shortforms short. Needless to say, this argument works with some philosophical definitions of "real" but not others. So as I said, this argument is pretty weak in itself and can only be used in certain situations in conjunction with other arguments.

Comment by bob-jacobs on Bob Jacobs's Shortform · 2020-07-18T16:49:22.163Z · score: 1 (1 votes) · LW · GW

This is a short argument against subjective idealism. Since I don't think there are (m)any subjective idealists on this site, I've decided to make it a shortform rather than a full post.

We don't know how big reality really is. Most people agree that it isn't smaller than the things you perceive, because if I have a perception of something the perception exists. Subjective Idealism says that only the perceptions are real and the things outside of our perception don't exist:

But if you're not infinitely certain that subjective idealism is correct, then you have to at least assign some probability that a different model of reality (e.g. your perception + one other category of things exists) is true:

But of course there are many other types of models that could also be true:

In fact the other models outnumber subjective idealism infinity to one, making it seem more probable that things outside your immediate perception exist.

(I don't think this argument is particularly strong in itself, but it could be used to strengthen other arguments.)

Comment by bob-jacobs on 2020 LessWrong Demographics Survey Results · 2020-07-13T21:33:35.531Z · score: 3 (2 votes) · LW · GW

I mean, I did say in advance that I would publish the raw data, plus I specifically tried to avoid overly personal questions, plus I explicitly said in my old posts not to answer questions you feel uncomfortable about. But if it makes you really uncomfortable, I'll delete that part of the post.

Comment by bob-jacobs on 2020 LessWrong Demographics Survey Results · 2020-07-13T21:20:10.238Z · score: 4 (3 votes) · LW · GW

That's probably because the moderators decided to keep the post a personal blogpost for some reason.

Comment by bob-jacobs on Measuring Meta-Certainty · 2020-07-07T14:44:14.517Z · score: 1 (1 votes) · LW · GW

I was trying to convey the same problem, although the underlying issue has much broader implications. Apparently johnswentworth is trying to solve a related problem, but I'm currently not up to date with his posts so I can't vouch for the quality. Being able to quantify empirical differences would solve a lot of different philosophical problems in one fell swoop, so that might be something I should look into for my master's degree.

Comment by bob-jacobs on Measuring Meta-Certainty · 2020-07-06T14:56:24.958Z · score: 1 (1 votes) · LW · GW
Does the previous belief count as a hit or miss for the purposes of meta-certainty?

A miss. I would like to be able to quantify how far off certain predictions are; sometimes you can quantify it, but sometimes you can't. I have previously made a question post about it that got very little traction, so I'm gonna try to solve this philosophical problem myself once I have some more time.

One could also mean that a belief like "probability for world war" could get different odds when asked in the morning, afternoon or night while dice odds get more stable answers.

This could be a possible bias in meta-certainty that could be discovered (but isn't the concept of meta-certainty itself).

"conviction" could describe it but I think subjective degrees of belief are not supposed point to things like that.

Conviction could be an adequate word for it, but I'll stick with meta-certainty to avoid confusion. You could rank your meta-certainty in "order of defense", but I would start out explaining it in the way that I did in my response to ChristianKl.

Comment by bob-jacobs on Measuring Meta-Certainty · 2020-07-06T14:40:19.233Z · score: 3 (2 votes) · LW · GW
What does it mean to have certainty over a degree of certainty?

When I say "I'm 99% certain that my prediction 'the die has a 1 in 6 chance of rolling a five' is correct", I'm having a degree of certainty about my degree of certainty. I'm basically making a prediction about how good I am at predicting.

How do you go about measuring whether or not the certainty is right?

This is (like I said) very hard. You can only calibrate your meta-certainty by gathering a boatload of data. If I give a 1 in 6 probability of an event occurring (e.g. a die roll returning a five), and such an event happens a million times, you can gauge how well you did on your certainty by checking how close the observed frequency was to your 1 in 6 prediction (maybe it happened more, maybe it happened less) and calibrate yourself to be more optimistic or pessimistic. Similarly, if I give a 99% chance of my probabilities (e.g. 1 in 6) being right, I'm basically saying: if the event of you predicting that something has a 1 in 6 chance of occurring happened a million times, you can gauge how well you did on your meta-certainty by checking how many of those 1 in 6 predictions turned out to be wrong. So meta-certainty needs more data than regular certainty. It also means that you can unfortunately only ever measure it a posteriori. And you can never know for certain whether your meta-certainty is right (the higher meta levels still exist, after all), but you can get more accurate over time.
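
As a toy illustration of what such a measurement could look like (all the numbers here are made up for the example), you could simulate a world where you claimed "1 in 6" for many different phenomena, observe a batch of trials for each, and compare how often the claim held up against your stated meta-certainty:

```python
import random

random.seed(0)
META_CERTAINTY = 0.99  # "99% certain my 1-in-6 estimates are right"
N_PHENOMENA = 1_000    # how many different things you estimated at 1 in 6
TRIALS_EACH = 600      # observations gathered per phenomenon
TOLERANCE = 0.05       # how close the observed frequency must be to 1/6

correct = 0
for _ in range(N_PHENOMENA):
    # In this toy world, 99% of the phenomena really are 1-in-6 events;
    # the rest are secretly loaded (true probability 1/3).
    p_true = 1 / 6 if random.random() < 0.99 else 1 / 3
    hits = sum(random.random() < p_true for _ in range(TRIALS_EACH))
    if abs(hits / TRIALS_EACH - 1 / 6) <= TOLERANCE:
        correct += 1

print(f"stated meta-certainty: {META_CERTAINTY:.2f}, "
      f"observed: {correct / N_PHENOMENA:.2f}")
# If the observed fraction falls short of 0.99, you were overconfident
# about your own probability estimates and should adjust downwards.
```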

I'm not sure how far you want me to go with trying to defend measuring as a way of finding truth. If you have a problem with the philosophical position that certainty is probabilistic, or with the position of scientific realism in general, then this might not be the best place to debate the issue. I would consider it off topic, as I just accepted them as the premises for this post; sorry if that was the problem you were trying to get at.

Comment by bob-jacobs on Measuring Meta-Certainty · 2020-07-06T12:08:07.455Z · score: 1 (1 votes) · LW · GW

Your degree of certainty about your degree of certainty. That's why it's called meta-certainty.

Comment by bob-jacobs on Bob Jacobs's Shortform · 2020-06-23T11:13:41.875Z · score: 7 (4 votes) · LW · GW

I was writing a post about how you can get more fuzzies (= personal happiness) out of your altruism, but decided that it would be better as a shortform. I know the general advice is to purchase your fuzzies and utilons separately, but if you're going to do altruism anyway, and there are ways to increase the happiness you get from doing so without sacrificing altruistic output, then I would argue you should try to increase that happiness. After all, if altruism makes you miserable you're less likely to do it in the future, and if it makes you happy you will be more likely to do it in the future (and personal happiness is obviously good in general).

The most obvious way to do it is with conditioning, e.g. giving yourself a cookie or doing a hand-pump motion every time you donate. Since there's already a boatload of stuff written about conditioning I won't expand on it further. I then wanted to adapt the tips from Lukeprog's the science of winning at life to this particular topic, but I don't really have anything to add, so you can probably just read it and apply it to doing altruism.

The only purely original thing I wanted to advise is to diversify your altruistic output. I found out there have already been defenses made in favor of this concept, but I would like to give additional arguments. The primary one is that it will keep you personally emotionally engaged with different parts of the world. When you invest something (e.g. time/money) into a cause, you become more emotionally attached to said cause. So someone who only donates to malaria bednets will (on average) be less emotionally invested in deworming, even though these are both equally important projects. While I know on an intellectual level that donating 50 dollars to malaria bednets is better than donating 25 dollars, emotionally both will feel like a small drop in the ocean. When advancements in the cause get made I get to feel fuzzies that I contributed, but crucially these won't be twice as warm if I donated twice as much. But if I donate to separate causes (e.g. bednets and deworming), then for every advancement/milestone I will get to feel fuzzies from these two different causes (so twice as much).

This will lessen the chance of you becoming a victim of the bandwagon effect (of a particular cause) or of the sunk-cost fallacy (if a cause you thought was effective turns out not to be very effective after all). It will also keep your worldview broad, instead of either becoming depressed if your singular cause doesn't advance or becoming ignorant of the world at large. So if you do diversify, every victory in the other causes creates more happiness for you, allowing you to align yourself much better with the world's needs.

Comment by bob-jacobs on SlateStarCodex deleted because NYT wants to dox Scott · 2020-06-23T10:16:39.057Z · score: 6 (5 votes) · LW · GW

Not really. It's so strange that the US journalistic code of ethics has very strict rules about revealing information from anonymous sources, but doesn't seem to have any rules about revealing information from pseudonymous sources.

Comment by bob-jacobs on Climate technology primer (1/3): basics · 2020-06-22T22:32:31.430Z · score: 1 (1 votes) · LW · GW

Just wanted to add a link to the newest carbon capture plant, which could suck out as much carbon dioxide as 40 million trees. Backed by Bill Gates, this plant can capture one ton of CO2 for less than $233.

https://www.youtube.com/watch?v=XHX9pmQ6m_s

Comment by bob-jacobs on Bucky's Shortform · 2020-06-17T16:03:00.841Z · score: 3 (2 votes) · LW · GW

I think Natural Reasons by Susan Hurley made the same argument (I don't own a copy so I can't check)

Comment by bob-jacobs on Bob Jacobs's Shortform · 2020-06-17T15:31:45.522Z · score: 1 (1 votes) · LW · GW

QALY is an imperfect metric because (among other things) an action that has an immediately apparent positive effect might have far-off negative effects. I might e.g. cure the disease of someone whose actions lead directly to World War 3. I could argue that we should use QALYs (or something similar to QALYs) as the standard metric for a country's success instead of GDP, but just like with GDP you are missing the far-future values.

One metric I could think of is that we calculate a country's increase in its citizens' immediately apparent QALYs, without pretending we can calculate all the ripple effects, and instead divide this number by the country's ecological footprint. But there are metrics for other far-off risks too, like nuclear weapon yield or the percentage of GDP spent on the development of autonomous weapons. I'm also not sure how good QALYs are at measuring mental health. Should things like leisure, social connections and inequality get their own metric? How do we balance them all?

I've tried to make some sort of justifiable metric for myself, but it's just too complex, time-consuming and above my capabilities. Anyone got a better system?

Comment by bob-jacobs on Bob Jacobs's Shortform · 2020-06-13T10:09:16.749Z · score: 1 (1 votes) · LW · GW

Neither theism nor atheism is a religion. See also this video (5:50)

Comment by bob-jacobs on 2020 LessWrong Demographics Survey · 2020-06-12T13:48:26.072Z · score: 1 (1 votes) · LW · GW

I'm from Europe, so I find both the term and the category of 'Hispanics' kinda dumb. I only put that there because the previous surveys did. I put Central and South America in parentheses, so I would choose Hispanic in your case, even though I agree that that's a messy category.

Comment by bob-jacobs on 2020 LessWrong Demographics Survey · 2020-06-12T09:10:30.899Z · score: 1 (1 votes) · LW · GW

Positive and negative risk

Comment by bob-jacobs on 2020 LessWrong Demographics Survey · 2020-06-12T09:09:59.692Z · score: 1 (1 votes) · LW · GW

Noted for next time

Comment by bob-jacobs on Bob Jacobs's Shortform · 2020-06-11T18:23:35.480Z · score: 1 (3 votes) · LW · GW

This design has become so ubiquitous for atheist groups that it has become the unofficial symbol for atheism. I think the design is very pretty, but I still don't like it. When people ask me why I think atheism isn't a religion, I can say that atheism doesn't have rituals, symbols, doctrines, etc. Having a symbol for atheism weakens this argument significantly. I get that many people who are leaving religion want to find a new ingroup, but I would prefer it if they used the symbol of secular humanism instead of this one.

Comment by Bob Jacobs on [deleted post] 2020-06-11T11:33:47.464Z
It's not an attack, and I would recommend not taking it as one.

'Attack' is just the way 'verbal arguments against X' is often shortened, and while it is a common way of phrasing such a thing, I agree that it is stylistically odd. I didn't assume you had any malice in mind, I was just using it the common way, but I will refrain from doing so (in similar contexts) in the future.

Yeah, it didn't submit properly the first time and then didn't seem to be working the second time so it ended up posting two by the time I finally got confirmation that it worked. I'd have deleted one if I could have.
Speaking of deleting things, what happened to your other post?

Alright, no problem, things like that happen all the time, so I will just delete it. I described what happened to the other post here. This was one of the difficult cases where I had to balance my desire to have a record of the things (and mistakes) people (including me) said against not wanting to clog the website with low-quality (as the downvotes indicated) content (I think I found a good solution). I'm having the same dilemma right now, where my genuine comments are getting voted into the negative and I'm starting to feel really bad for trying to satisfy my own personal curiosity at the expense of eating up people's time with content they think is low quality (yes yes, I know that that doesn't mean it is low quality per se, but it is a close enough heuristic that I'm mostly willing to stick to it). But the downvotes are very clear, so while I'm disappointed that we couldn't talk through this issue, I will no longer be eating up people's time.

Comment by Bob Jacobs on [deleted post] 2020-06-10T17:53:24.378Z

I would favor a conversation where we keep attacks on persons to an absolute minimum and focus instead on the arguments being made (addressing the person is sometimes necessary, but entirely ignoring the argument in favor of attempting to psychoanalyze a stranger on the internet is not a good way to have a philosophical discussion). Secondly, I would also like to hear a counterargument to the argument I made. And thirdly, I have never deleted a comment, but you appear to have double-posted; shall I delete one of them?

Comment by Bob Jacobs on [deleted post] 2020-06-09T08:20:29.443Z

Yeah, but type B and its many forms like placebo, nocebo, psychosis, etc. are already widely known and documented. I only included it begrudgingly for the sake of completeness, and because someone was going to mention it in the comments (not sure why I made the effect size so small). This post is not titled 'steelman of the pragmatist position', otherwise I would have indeed focused more on type B, for which there are more real-world examples. I wanted to think up some fun and strange thought experiments that might push people's brains in directions they don't usually go and make them consider new angles.

Comment by Bob Jacobs on [deleted post] 2020-06-08T22:57:46.526Z

You could probably write the same answer without the snark. Your study on placebo only mentions it working on IBS patients, so it's not the grand dismissal of placebo that you claim it is, but even if it were, there are still plenty of similar phenomena. The easiest to adapt would be the nocebo effect: just switch the positives with the negatives in the example and you have your nocebo argument.

Comment by Bob Jacobs on [deleted post] 2020-06-08T21:12:39.075Z

Thank you for writing this; yeah, it appears that we are talking past each other (particularly in my debate with Jimmy). I was going to write another post to try and clarify this debate, but decided against it for five reasons:

1. I already promised Isusr I would make a different (and hard to make) post this week and I don't want to spam out posts.

2. The downvotes are telling me this is clearly a controversial/community-splitting topic, and I've been making too many of those lately.

3. I've now already made two posts on this topic so I'm starting to get sick of it.

4. While my English has improved remarkably these past years, it is apparently not yet at the level where I can discuss contemporary philosophy without taking the risk of not effectively communicating with my interlocutor (this might sometimes be the fault of your interlocutor, but it has happened twice in two weeks, so I'm giving myself at the very least partial blame).

5. I have exams right now, so I should really be studying more for those.

Maybe I'll make one in a couple months if no one else has made a post on it yet.

Comment by Bob Jacobs on [deleted post] 2020-06-08T16:10:05.924Z

I was referring to that block of text that you have encoded; I decoded it, and there you state the assumption that your interlocutor will lie. And no, I am assuming they are true, which is why I said "we assume it's true". I would also keep anecdotal evidence to a minimum in this type of discussion, because I would want my interlocutor to be able to check every step of my reasoning. And anecdotal evidence for a positive occurrence of a phenomenon does not discount the existence of a negative occurrence. I say there exists such a thing as X, and the counterargument is: but this one time there was Y. Do you have any arguments as to why my counterarguments, or something in a similar vein, couldn't happen?

[EDIT] Richard says he meant the encoded text to mean only that the reader thinks up, but doesn't present, the false story. This is a plausible interpretation of the text, and since I can't know which one was meant I will assume it was the more charitable one and retract these comments.

Comment by Bob Jacobs on [deleted post] 2020-06-08T14:57:27.998Z

We assume it's true; we don't have any evidence. I could tell stories about my personal experience, but you'd have no way to check them. At least saying upfront that it's a thought experiment keeps the debating ground neutral and allows people's reasoning to do the work instead of their emotions. And no, I would never make up a story to defend my argument; the fact that you would assume your interlocutor is a liar without any evidence to back that up is really hampering my desire to debate you.

Comment by Bob Jacobs on [deleted post] 2020-06-08T13:02:18.435Z

I think this is obvious, but we shouldn't be afraid to examine when and why our rules fail.

Comment by Bob Jacobs on [deleted post] 2020-06-08T11:46:51.497Z

I'm very glad that you managed to train yourself to do that, but this option is not available to everyone. I see a lot of engaging with the details and giving singular instances of something not occurring, but I don't see a lot of engaging with the least convenient possible world. As I was writing this reply it became longer and longer, so I decided to rewrite it and make it its own post. You can check out some more inconvenient counterexamples I thought up here. [Edit: I saved the post to draft by accident. I didn't want to reupload it, but if we ever get a way to have 'unlisted' posts I will upload it unlisted. Until that time I have changed the link so you can still see my post and the comments it received]

Comment by Bob Jacobs on [deleted post] 2020-06-07T22:10:51.420Z
Would you say "this hot dog is worth eating" is similarly "a subjective value" and not "an objective belief"?

This is entering the domain of axiology. Nothing wrong with debating axiology per se, but I would rather not get too far off topic, so I'll drop this argument for simplicity's sake.

But the placebo effect, on the other hand, is a very real phenomenon in human beings, where we think (e.g.) a pill is a pharmaceutical when it isn't (epistemically irrational), but our irrational belief still helps us achieve our goal of (e.g.) not dying (instrumentally rational).

Comment by Bob Jacobs on [deleted post] 2020-06-07T20:32:05.573Z

Don't discount our hands, though. They are what allowed tool use and (perhaps more importantly) the initial use of fire to be in our control. A big brain means nothing if you can't use it to manipulate the world around you, and opposable thumbs gave humans the finesse to actualize their clever fantasies.

Comment by bob-jacobs on Bob Jacobs's Shortform · 2020-06-07T16:20:59.919Z · score: 3 (2 votes) · LW · GW

I just realized that coherence value theory can actually be read as a warning about inadequate equilibria in your worldview. If you are losing epistemic value because a new (but more accurate) belief doesn't fit with your old beliefs, then you have to reject it, meaning you can get stuck with an imperfect belief system (e.g. a religion). In other words: coherentism works better if you have slack in your belief system (this is one of my favorite LW posts, highly recommended).