Posts

Foreacting agents 2023-12-08T19:57:21.989Z
Resolving moral uncertainty with randomization 2023-09-29T11:23:42.175Z
Sortition Model of Moral Uncertainty 2020-10-08T17:44:11.208Z
A Toy Model of Hingeyness 2020-09-07T17:38:59.826Z
2020 LessWrong Demographics Survey Results 2020-07-13T13:53:47.700Z
Hierarchy of Evidence 2020-07-11T12:54:27.536Z
By what metric do you judge a reference class? 2020-06-15T18:34:18.262Z
2020 LessWrong Demographics Survey 2020-06-11T20:05:41.859Z
Bob Jacobs's Shortform 2020-06-01T19:40:37.367Z
Should we stop using the term 'Rationalist'? 2020-05-29T15:11:18.329Z
Updated Hierarchy of Disagreement 2020-05-28T15:57:57.570Z
Why aren’t we testing general intelligence distribution? 2020-05-26T16:07:30.833Z
A Taijitu symbol for Moloch and Slack 2020-05-25T20:03:44.447Z
[Meta] Three small suggestions for the LW-website 2020-05-20T11:18:38.930Z
A Problem With Patternism 2020-05-19T20:16:54.835Z
Making a Crowdaction platform 2020-05-16T16:08:11.383Z
Meta-Preference Utilitarianism 2020-02-04T20:24:36.814Z

Comments

Comment by B Jacobs (Bob Jacobs) on Bob Jacobs's Shortform · 2022-11-30T08:16:21.517Z · LW · GW

I tried a bit of a natural experiment to see whether rationalists would be more negative towards an idea if it's called socialism than if it's called something else. I made two posts that are identical, except one calls it socialism right at the start, and one only reveals I was talking about socialism at the very end (perhaps it would've been better if I hadn't revealed it at all). The former I posted to LW, the latter to the EA Forum.

I expected that the comments on LW would be more negative and that I would get more downvotes, and I gave it a 50% chance the mods wouldn't even promote it to the frontpage on LW (but would on the EA Forum).

The comments were more negative on LW. I did get more downvotes, but I also got more upvotes and more karma overall (12 karma from 19 votes on the EA Forum versus 27 karma from 39 votes on LW). Posts tend to get more karma on LW, but the difference is big enough that I consider my prediction to be wrong. Lastly, the LW mods did end up promoting it to the frontpage, but it took a very long time (maybe they had a debate about it).

Overall, while rationalists are more negative towards socialist ideas that are called socialist, they aren't as negative as I expected, and I will update accordingly.

Comment by Bob Jacobs on [deleted post] 2022-11-28T08:57:34.868Z

Sorry guys. I woke up to another giant batch of new comments and I just don't have the time or energy to respond to them all with the quality that I would want. My comments were already getting shorter and shorter while my longer, more nuanced comments were getting sniped before I could post them. I'm sure some of you made some excellent points.

Comment by Bob Jacobs on [deleted post] 2022-11-27T18:57:35.712Z

I cited controlled experiments; you counter with an observation that I have already responded to in both the post and the comments:

I explained this in this section:

One issue that arises with starting a socialist firm is acquiring initial investment.[27] This is probably because co-ops want to maximize income (wages), not profits. They pursue the interests of their members rather than investors and may sometimes opt to increase wages instead of profits. Capitalist firms, on the other hand, are explicitly investor-owned, so investor interests will take priority.

A socialist firm can be more productive and still not dominate the economy if it's hard to start a socialist firm.


The strength of a case depends on the strength of the evidence, not on the number of citations!

You are not engaging with the evidence I cited.

Comment by Bob Jacobs on [deleted post] 2022-11-27T18:49:17.749Z

A spot check is supposed to take a number of random sources and check them, not pick the one claim you find most suspicious (one that isn't even about co-ops) and use that to dismiss the entire literature on co-ops.

Comment by Bob Jacobs on [deleted post] 2022-11-27T18:17:53.243Z

I cite four different studies showing that the theory doesn't match the observations; Lao Mein doesn't cite anything. This is the most extreme version of selective skepticism.

Comment by Bob Jacobs on [deleted post] 2022-11-27T17:23:16.657Z

I'm not handwaving anything; I wrote a whole section about how experiments contradict this and what could explain it:

“Experiments have shown that people randomly allocated to do tasks in groups where they can elect their leaders and/or choose their pay structures are more productive than those who are led by an unelected manager who makes pay choices for them.[20] One study looked at real firms with high levels of worker ownership of shares in the company and found that workers are keener to monitor others, making them more productive than those with low or no ownership of shares and directly contradicting the free rider hypothesis.[21] It turns out there are potential benefits to giving workers control and a stake in the running of the organization they work for. This allows workers to play a key role in decision making and reorient the goals of the organization.[22] One explanation for this phenomenon is that of "localized knowledge". According to economist Friedrich Hayek, top-down organizers have difficulty harnessing and coordinating around local knowledge, and the policies they write that are the same across a wide range of circumstances don't account for the "particular circumstances of time and place".[23] (For examples of this, read Seeing Like a State by political scientist James Scott) Those who make the top-down policies in a traditional company are different to those who have to follow them. In addition, those who manage the company are most often different to those who own the company. These groups have different incentives and accumulate different knowledge. This means that co-ops have two main advantages:

• Workers can harness their collective knowledge to make running the firm more effective.
• Workers can use their voting power to ensure the organization is more aligned with their values.

Interestingly enough, I have yet to come across a co-op that uses the state of the art of social choice theory, so they could potentially get a lot better.”

Comment by Bob Jacobs on [deleted post] 2022-11-27T17:10:37.755Z

My prior is that other things are less effective and you need evidence to show they are more effective not vice versa.

Appeal to presuppositions always feels weird to me. A socialist could just as easily say 'my priors say the opposite'. In any case, you made the claim of comparison, not me; why is the burden of proof suddenly on me?

Of course. I'm saying it doesn't even get to make that argument which can sometimes muddy the waters enough to make some odd-seeming causes look at least plausibly effective.

I'm trying to explain the scientific literature on co-ops, not persuade you of some scam.

Comment by Bob Jacobs on [deleted post] 2022-11-27T16:30:23.133Z

However, in spot-checking whether the statistics were totally wrong, I found myself struggling with wading through signups and links and long mostly irrelevant articles. Of course some nonzero amount of this is likely to happen with spot-checks but it seemed like the layers of links just made it even worse.


This is dishonest; the vast majority of the sources are primary scientific studies, and the few times I do refer to secondary sources, they aren't irrelevant.

You did handle it right, especially your deleted comment.

OP to explain what data/model it was based on; the problem is that then OP responded back with repeating the links instead of explaining what he had read in the links

Yeah, because the primary source is right there?! What value would my explaining it in my second language add, when you can click on the link and immediately download the primary source?

Comment by Bob Jacobs on [deleted post] 2022-11-27T12:24:45.115Z

But anyway, no, this link doesn't link directly to the study either, it links to a report that links to the study

You can immediately see a button that says "download report" when you click on that link. I wouldn't call that "digging for sources".

The wall of text doesn't really answer my questions about the independence of employee engagement.

Furthermore they suggest that managers have a huge effect on employee engagement, which seems to point to a potential area where this assumption could fail.

It's not independent; co-ops let you vote on managers, which allows productivity to increase.

EDIT: I have apologized to (and thanked) tailcalled via messages, and have added the document as the third source. Once again, thanks for the suggestion.

Comment by Bob Jacobs on [deleted post] 2022-11-27T12:15:14.985Z

I've already explained, in both the post and other comments, why socialist firms wouldn't necessarily take over the economy even if they were more productive.

Comment by Bob Jacobs on [deleted post] 2022-11-27T12:04:49.734Z
  • They were not direct links to the study, but instead links to articles that talk about the study, so I had to dig further manually.


It was the second source in the post: [2]

  • The articles are often big and contain lots of specific things that might not be directly relevant to your point of using it in the post. 

There was a summary of it on the linked page itself:

Unfortunately, most employees remain disengaged at work. In fact, low engagement alone costs the global economy $7.8 trillion.

Even having opened the study, I'm still left with confusions about the methodology

From the study:

Methodology

The primary data in this report come from the Gallup World Poll, through which Gallup has conducted surveys of the world’s adult population, using randomly selected samples, since 2005. The survey is administered annually face to face or by telephone, covering more than 160 countries and areas since its inception. In addition to the World Poll data, Gallup collected extensive random samples of working populations in the United States and Germany; these samples were also added to the dataset.

The target population of the World Poll is the entire civilian, noninstitutionalized, aged-15-and-older population. Gallup’s data in this report reflect the responses of adults, aged 15 and older, who were employed for any number of hours by an employer.

With some exceptions, all samples are probability-based and nationally representative. Gallup uses data weighting to minimize bias in survey-based estimates; ensure samples are nationally representative for each country; and correct for unequal selection probability, nonresponse and double coverage of landline and mobile phone users when using both mobile phone and landline frames. Gallup also weights its final samples to match the national demographics of each selected country.

Regional findings in this report include data obtained from 2021 to as late as March 2022 (reported as part of 2021 data in this report). To determine percentage point changes for regions, Gallup uses data from 2020 and 2021 from the same countries in each region.

Country-specific findings in “Appendix 1: Country Comparisons” are based on data aggregated from three years of polling (2019, 2020 and 2021 — with several countries’ 2021 data obtained in early 2022). Percentage point changes for countries indicate the differences in percentage points when comparing the average from 2018, 2019 and 2020 with the average from 2019, 2020 and 2021.

Gallup typically surveys 1,000 individuals in each country or area, using a standard set of core questions that has been translated into the major languages of the respective country. In some countries, Gallup collects oversamples in major cities or areas of special interest. Additionally, in some large countries, such as China and Russia, sample sizes include at least 2,000 adults. In a small number of countries, the sample size is less than 1,000. In this report, Gallup does not provide country-level data (aggregate of 2019, 2020 and 2021 data) or country-level percentage point change data (aggregate of 2018, 2019 and 2020 data) for any country that has an aggregate n size of less than 300.

For results based on the total sample of adults globally, the margin of sampling error ranged from ±0.5 percentage points to ±0.7 percentage points at the 95% confidence level. For results based on the total sample of adults in each region, the margin of sampling error ranged from ±0.6 percentage points to ±5.0 percentage points at the 95% confidence level. For results based on the total sample of adults in each country, the margin of sampling error ranged from ±0.5 percentage points to ±8.5 percentage points at the 95% confidence level. All reported margins of sampling error include computed design effects for weighting.


I'm not sure I understand the economics of this. If co-ops have an inherent massive growth advantage, wouldn't that outweigh the advantage capitalist firms have in giving more dividends to investors? Because while in the short term the capitalist firms would maybe give more to their investors, in the long term the co-ops would grow bigger and therefore have more money to give, even if they allocate a smaller fraction of it?

I never claimed a massive growth advantage:

There seems to be a small increase in companywide productivity[33]

As I said, the meta-analyses only show a small growth advantage. If, e.g., a socialist firm grows by $1000 and a capitalist firm by $900, but the capitalist firm gives the $900 to its investors while the socialist firm splits its $1000 between the investors and the employees ($500 each), the investors can make more money with capitalist firms.
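To make that arithmetic concrete, here is a minimal sketch; the dollar figures, the 50/50 split, and the ten-year horizon are illustrative assumptions taken from the toy example above, not numbers from the cited studies:

```python
# Illustrative only: a firm can generate more total surplus per year
# yet still return less of it to investors. The figures are the toy
# numbers from the example above, not data from the cited studies.
def cumulative_investor_return(surplus_per_year: float,
                               investor_share: float,
                               years: int) -> float:
    """Total paid out to investors over `years`, with no reinvestment."""
    return surplus_per_year * investor_share * years

coop = cumulative_investor_return(1000, 0.5, years=10)        # $500/yr to investors
capitalist = cumulative_investor_return(900, 1.0, years=10)   # $900/yr to investors
print(coop, capitalist)  # 5000.0 9000.0 -> investors prefer the less productive firm
```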

Comment by Bob Jacobs on [deleted post] 2022-11-27T11:29:32.328Z

There's just no way that things like this are remotely as effective as say GiveWell causes

Do you have any evidence for this?

and it barely even has longtermist points

Not all EAs are longtermists.

Comment by Bob Jacobs on [deleted post] 2022-11-27T10:08:53.958Z

What data and model are these estimates of the causal effects of it based on?

You can find my sources in the references section. This was based on a Gallup study.

Another thing that confuses me is why socialist firms need special support and don't naturally come to dominate the economy. You seem to attribute this to owners extracting value, but that seems short-sighted; presumably if you have an economy with a mixture of socialist and non-socialist firms, and the socialist firms are much more productive, they would grow quicker and become dominant over time.

I explained this in this section:

One issue that arises with starting a socialist firm is acquiring initial investment.[27] This is probably because co-ops want to maximize income (wages), not profits. They pursue the interests of their members rather than investors and may sometimes opt to increase wages instead of profits. Capitalist firms, on the other hand, are explicitly investor-owned, so investor interests will take priority.

A socialist firm can be more productive and still not dominate the economy if it's hard to start a socialist firm.

Comment by B Jacobs (Bob Jacobs) on Bob Jacobs's Shortform · 2022-03-18T12:35:12.679Z · LW · GW

I have a mnemonic device for checking whether a model is Gears-like or not.
G E A R S:

Does a variable Generate Empirical Anticipations?

Can a variable be Rederived?

Is a variable hard to Substitute?

Comment by Bob Jacobs on [deleted post] 2020-10-15T11:25:15.824Z

There's evidence in the form of observations of events outside the cartesian boundary. There's evidence in internal process of reasoning, whose nature depends on the mind.

My previous comment said:

both empirical and tautological evidence

With "empirical evidence" I meant "evidence in the form of observations of events outside the cartesian boundary" and with "tautological argument" I meant "evidence in internal process of reasoning, whose nature depends on the mind".

When doing math, evidence comes up more as a guide to intuition than anything explicitly considered. There are also metamathematical notions of evidence, rendering something evidence-like clear.

Yes, but they are both "information that indicates whether a belief is more or less valid". Mathematical proof is also evidence, so they have the same structure. Do you have a way to ground them? Or if you somehow have a way to ground one form of proof but not the other, could you share just the one? (Since the structure is the same I suspect that the grounding of one could also be applied to the other)

Comment by Bob Jacobs on [deleted post] 2020-10-15T08:53:02.401Z

I meant both empirical and tautological evidence: general information that indicates whether a belief is more or less valid. When you say that you can keep track of truth, why do you believe you can? What is that truth based on? Evidence?

Comment by B Jacobs (Bob Jacobs) on A Toy Model of Hingeyness · 2020-09-09T14:20:22.813Z · LW · GW

It might be interesting to distinguish between "personal hingeyness" and "utilitarian hingeyness". Humans are not utilitarians, so we care mostly about stuff that's happening in our own lives; when we die, our personal tree stops and we can't get more hinges. But "utilitarian hingeyness" continues, as it describes all possible utility. I made this with population ethics in mind, but you could totally use the same concept for your personal life; the most hingey time for you and the most hingey time for everyone will then be different.

I'm not sure I understand your last paragraph, because you didn't clarify what you mean by the word "hingeyness". If you meant "the range of the total amount of utility you can potentially generate" (aka hinge broadness) or "the amount by which that range shrinks" (aka hinge reduction), it is possible to draw a tree where the first tick of an 11-tick tree has just as broad a range as an option in the 10th tick. So the hinge broadness and the hinge reduction can be just as big in the 10th as in the 1st tick, but not bigger. I don't think you're talking about "hinge shift", but maybe you were talking about hinge precipiceness instead, in which case: yes, that can totally be bigger in the 10th tick.

Comment by B Jacobs (Bob Jacobs) on A Toy Model of Hingeyness · 2020-09-09T13:16:12.956Z · LW · GW

If in the first image we replace the 0 with a -100 (much wider), what happens? The number of endings for 1 is still larger than for 3. The number of branches for 1 is still larger than for 3. The range of the possible utility of the endings is [-100 to 8] for 1 and [-100 to 6] for 3 (smaller). The range of the total amount of utility you could generate over the future branches is [1->3->-100 = -96 up to 1->2->8 = 11] for 1 and [3->-100 = -97 up to 3->6 = 9] for 3 (smaller). Is this a good example of what you're trying to convey? If not, could you maybe draw an example tree to show me what you mean?

Comment by B Jacobs (Bob Jacobs) on A Toy Model of Hingeyness · 2020-09-08T15:12:43.615Z · LW · GW

Ending in negative numbers wouldn't change anything. The number of endings will still shrink, the number of branches will still shrink, the range of the possible utility of the endings will still shrink or stay the same width, and the range of the total amount of utility you could generate over the future branches will also shrink or stay the same width. Try it! Replace any number in any of my models with a negative number, or draw your own model, and see what happens.

Comment by B Jacobs (Bob Jacobs) on A Toy Model of Hingeyness · 2020-09-08T09:53:16.474Z · LW · GW

If we draw a tree of all possible timelines (and there is an end to the tree), the older choices will always have more branches sprouting out of them. If we are purely looking at the possible endings, then the 1 in the first image has a range of 4 possible endings, but 2 only has 2 possible endings. If we're looking at branches, then 1 has a range of 6 possible branches, while 2 only has 2 possible branches. If we're looking at ending utility, then 1 has a range of [0-8] while 2 only has [7-8]. If we're looking at the range of possible utility you can experience, then 1 has a range from 1->3->0 = 4 utility all the way up to 1->2->8 = 11 utility, while 2 only has 2->7 = 9 up to 2->8 = 10.

When we talk about the utility of endings it is possible that the range doesn't change. For example:

(I can't post images in comments so here is a link to the image I will use to illustrate this point)

Here the "range of utility in endings" tick 1 has (the first 10) is [0-10] and the range of endings the first 0 has (tick 2) is [0-10] which is the same. Of course the probability has changed (getting an ending of 1 utility is not even an option anymore), but the minimum and maximum stay the same.

The width of the range of the total amount of utility you could potentially experience can also stay the same. For example, the lowest utility tick 1 can experience is 10->0->0 = 10 utility and the highest is 10->0->10 = 20 utility. The difference between the lowest and highest is 10 utility. The lowest total utility that the 0 on tick 2 can experience is 0->0 = 0 utility and the highest is 0->10 = 10 utility, which is once again a difference of 10 utility. The probability has changed (ending with a weird number like 19 is impossible for tick 2). The range has also shifted downwards, from [10-20] to [0-10], but the width stays the same.

It just occurred to me that some people may find the shift in range also important for hingeyness. Maybe call that 'hinge shift'?

Crucially, in none of these definitions is it possible to end up with a wider range later down the line than when you started.
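For anyone who wants to verify these numbers, here is a minimal sketch that enumerates a utility tree and computes the ranges discussed in this thread; the dict-based tree encoding is my own assumption, and the values are the toy example from the earlier comment (1 -> {2 -> {7, 8}, 3 -> {0, 6}}):

```python
# Minimal sketch: enumerate every root-to-leaf path of a utility tree
# and compute the ranges used in the hingeyness discussion above.
# The tree structure below is an assumed encoding of the toy example.

def paths(node, acc=0):
    """Yield the total utility of every path from here to a leaf."""
    total = acc + node["u"]
    if not node["children"]:
        yield total
    else:
        for child in node["children"]:
            yield from paths(child, total)

def utility_range(node):
    """(min, max) total utility reachable from `node`; the width of
    this interval is the node's 'hinge broadness'."""
    totals = list(paths(node))
    return min(totals), max(totals)

# The tree from the comment above: 1 -> {2 -> {7, 8}, 3 -> {0, 6}}
tree = {"u": 1, "children": [
    {"u": 2, "children": [{"u": 7, "children": []},
                          {"u": 8, "children": []}]},
    {"u": 3, "children": [{"u": 0, "children": []},
                          {"u": 6, "children": []}]},
]}

print(utility_range(tree))                 # (4, 11): 1->3->0 up to 1->2->8
print(utility_range(tree["children"][0]))  # (9, 10): 2->7 up to 2->8
```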

Comment by B Jacobs (Bob Jacobs) on Bob Jacobs's Shortform · 2020-08-18T15:45:31.234Z · LW · GW

I know LessWrong has become less humorous over the years, but this idea popped into my head when I made my bounty comment and I couldn't stop myself from making it. Feel free to downvote this shortform if you want the site to remain a super serious forum. For the rest of you: here is my wanted poster for the reference class problem. Please solve it, it keeps me up at night.

Comment by B Jacobs (Bob Jacobs) on Multitudinous outside views · 2020-08-18T14:17:21.968Z · LW · GW

Thanks for replying to my question, but although this was nicely written it doesn't really solve the problem. So I'm putting up a $100 bounty for anyone on this site (or outside it) who can solve this problem by the end of next year. (I don't expect it will work, but it might motivate some people to start thinking about it).

Comment by B Jacobs (Bob Jacobs) on Calibration Practice: Retrodictions on Metaculus · 2020-08-03T10:34:32.295Z · LW · GW

I've touched on this before, but it would be wise to take your meta-certainty into account when calibrating. It wouldn't be hard for me to claim 99.9% accurate calibration by just making a bunch of very easy predictions (an extreme example would be buying a bunch of different dice and making predictions about how they're going to roll). My post goes into more detail, but TL;DR: by trying to predict how accurate your predictions are going to be, you can start to distinguish between "harder" and "easier" phenomena. This makes it easier to compare different people's calibration and allows you to check how good you really are at making predictions.

Comment by B Jacobs (Bob Jacobs) on mAIry's room: AI reasoning to solve philosophical problems · 2020-07-30T17:01:01.774Z · LW · GW

I can also "print my own code", if I make a future version of a MRI scan I could give you all the information necessary to understand (that version of) me, but as soon as I look at it my neurological patterns change. I'm not sure what you mean with "add something to it", but I could also give you a copy of my brain scan and add something to it. Humans and computers can of course know a summery of themselves, but never the full picture.

Comment by B Jacobs (Bob Jacobs) on mAIry's room: AI reasoning to solve philosophical problems · 2020-07-29T21:45:56.652Z · LW · GW

An annoying philosopher would ask whether you could glean knowledge of your "meta-qualia", aka what it consciously feels like to experience what something feels like. The problem is that fully understanding our own consciousness is sadly impossible. If a computer discovers that in a certain location on its hardware it has stored a picture of a dog, it must then store that information somewhere else, but if it subsequently tries to know everything about itself, it must store that knowledge of the knowledge of the picture's location somewhere else, which it must also learn. This repeats in a loop until the computer crashes. An essay can fully describe most things but not itself: "The author starts the essay with writing that he starts the essay with writing that...". So annoyingly there will always be experiences that are mysterious to us.

Comment by B Jacobs (Bob Jacobs) on Billionaire Economics · 2020-07-29T11:05:29.967Z · LW · GW

I was not referring to the 'billionaires being universally evil', but to the 'what progressives think' part.

Comment by B Jacobs (Bob Jacobs) on Billionaire Economics · 2020-07-29T10:43:35.442Z · LW · GW

I was talking about the "as progressives think"

Comment by B Jacobs (Bob Jacobs) on Billionaire Economics · 2020-07-28T09:54:17.067Z · LW · GW
billionaires really are universally evil just as progressives think

Can you please add a quantifier when you make assertions about plurals? You can make any group sound dumb/evil by not doing so. E.g. I can make atheists sound evil by saying the truthful statement: "Atheists break the law". But that's only because I didn't add a quantifier like "all", "most", "at least one", "a disproportionate number", etc.

Comment by B Jacobs (Bob Jacobs) on Hierarchy of Evidence · 2020-07-26T18:40:33.897Z · LW · GW

And by what metric do you separate the competent experts from the non-competent experts? I also prefer listening to experts because they can explain vast amounts of things in "human" terms, inform me how different things interact and subsequently answer my specific questions. It's just that for any single piece of information you'd rather have a meta-analysis backing you up than an expert opinion.

Comment by B Jacobs (Bob Jacobs) on Hierarchy of Evidence · 2020-07-26T18:17:00.412Z · LW · GW

Thanks, fixed it for all the files (and made some other small changes)

Comment by B Jacobs (Bob Jacobs) on Bob Jacobs's Shortform · 2020-07-24T19:35:54.788Z · LW · GW

Well, to be fair, this was just a short argument against subjective idealism, with three pictures to briefly illustrate the point. It was not (nor did it claim to be) a comprehensive list of all the possible models in the philosophy of mind (otherwise I would also have to include pictures with the perception being red and the outside being green, or half being green no matter where they are, or everything being red, or everything being green, etc.).

Comment by B Jacobs (Bob Jacobs) on Bob Jacobs's Shortform · 2020-07-24T17:20:03.920Z · LW · GW

Yes, the malicious demon was also the model that sprang to my mind. To answer your question: there are certainly possible minds that have "demons" (or faulty algorithms) that make finding their internal mistakes impossible (but my current model thinks that evolution wouldn't allow those minds to live for very long). Although this argument has the same feature as the simulation argument, in that any counterargument can be countered with "But what if the simulation/demon wants you to think that?". I don't have any real solution for this except to say that it doesn't really matter for our everyday life, and we shouldn't put too much energy into trying to counter the uncounterable (but that feels kinda lame, tbh).

Comment by B Jacobs (Bob Jacobs) on Bob Jacobs's Shortform · 2020-07-23T20:03:03.577Z · LW · GW

I already mentioned in the post:

Most people agree that it isn't smaller than the things you perceive, because if I have perception of something the perception exists

Obviously you can hallucinate a bear without there being a bear, but the hallucination of the bear would exist (according to most people). There are models that say that even sense data does not exist, but those models are very strange, unpopular and unpersuasive (to me and most other people). But if you think that both the phenomenon and the noumenon don't exist, then I would be interested in hearing your reasons for that conclusion.

Comment by B Jacobs (Bob Jacobs) on Bob Jacobs's Shortform · 2020-07-19T08:56:56.478Z · LW · GW

This goes without saying, and I apologize if I gave the impression that people should use this argument and its visualization to persuade rather than to explain.

Comment by B Jacobs (Bob Jacobs) on Bob Jacobs's Shortform · 2020-07-18T22:23:48.610Z · LW · GW

You are correct, this argument only works if you have a specific epistemic framework and a specific subjective idealist framework, which might not coincide in most subjective idealists. I only wrote it down because I just so happened to have used this argument successfully against someone with this framework (and I also liked the visualization I made for it). I didn't want to go into what "a given thing is real" means, because it's a giant can of philosophical worms and I try to keep my shortforms short. Needless to say, this argument works with some philosophical definitions of "real" but not others. So as I said, this argument is pretty weak in itself and can only be used in certain situations in conjunction with other arguments.

Comment by B Jacobs (Bob Jacobs) on Bob Jacobs's Shortform · 2020-07-18T16:49:22.163Z · LW · GW

This is a short argument against subjective idealism. Since I don't think there are (m)any subjective idealists on this site, I've decided to make it a shortform rather than a full post.

We don't know how big reality really is. Most people agree that it isn't smaller than the things you perceive, because if I have a perception of something, the perception exists. Subjective idealism says that only the perceptions are real and the things outside of our perception don't exist:

But if you're not infinitely certain that subjective idealism is correct, then you have to assign at least some probability that a different model of reality (e.g. your perception + one other category of things exists) is true:

But of course there are many other types of models that could also be true:

In fact, the other models outnumber subjective idealism infinity to one, making it seem more probable that things outside your immediate perception exist.

(I don't think this argument is particularly strong in itself, but it could be used to strengthen other arguments.)

Comment by B Jacobs (Bob Jacobs) on 2020 LessWrong Demographics Survey Results · 2020-07-13T21:33:35.531Z · LW · GW

I mean, I did say in advance that I would publish the raw data, plus I specifically tried to avoid overly personal questions, plus I explicitly said in my old posts not to answer questions you feel uncomfortable about. But if it makes you really uncomfortable, I'll delete that part of the post.

Comment by B Jacobs (Bob Jacobs) on 2020 LessWrong Demographics Survey Results · 2020-07-13T21:20:10.238Z · LW · GW

That's probably because the moderators decided to keep the post a personal blogpost for some reason.

Comment by Bob Jacobs on [deleted post] 2020-07-07T14:44:14.517Z

I was trying to convey the same problem, although the underlying issue has much broader implications. Apparently johnswentworth is trying to solve a related problem, but I'm currently not up to date with his posts so I can't vouch for the quality. Being able to quantify empirical differences would solve a lot of different philosophical problems in one fell swoop, so that might be something I should look into for my master's degree.

Comment by Bob Jacobs on [deleted post] 2020-07-06T14:56:24.958Z
Does the previous belief count as a hit or miss for the purposes of meta-certainty?

A miss. I would like to be able to quantify how far off certain predictions are. I mean, sometimes you can quantify it, but sometimes you can't. I have previously made a question post about it that got very little traction, so I'm gonna try to solve this philosophical problem myself once I have some more time.

One could also mean that a belief like "probability for world war" could get different odds when asked in the morning, afternoon or night while dice odds get more stable answers.

This could be a possible bias in meta-certainty that could be discovered (but isn't the concept of meta-certainty itself).

"conviction" could describe it but I think subjective degrees of belief are not supposed point to things like that.

Conviction could be an adequate word for it, but I'll stick with meta-certainty to avoid confusion. You could rank your meta-certainty in "order of defense", but I would start out explaining it in the way that I did in my response to ChristianKl.

Comment by Bob Jacobs on [deleted post] 2020-07-06T14:40:19.233Z
What does it mean to have certainty over a degree of certainty?

When I say "I'm 99% certain that my prediction 'the dice has a 1 in 6 chance of rolling a five' is correct", I'm having a degree of certainty about my degree of certainty. I'm basically making a prediction about how good I am at predicting.

How do you go about measuring whether or not the certainty is right?

This is (like I said) very hard. You can only calibrate your meta-certainty by gathering a boatload of data. If I give a 1 in 6 probability of an event occurring (e.g. a dice roll returning a five), and such an event happens a million times, you can gauge how well I did on my certainty by checking how close the observed frequency was to my 1 in 6 prediction (maybe it happened more, maybe it happened less), and I can calibrate myself to be more optimistic or pessimistic. Similarly, if I give a 99% chance of my probabilities (e.g. 1 in 6) being right, I'm basically saying: if the event of me predicting that something has a 1 in 6 chance of occurring happened a million times, you can gauge how well I did on my meta-certainty by checking how many times those 1-in-6 predictions turned out to be wrong. So meta-certainty needs more data than regular certainty. It also means that you can only ever measure it a posteriori, unfortunately. And you can never know for certain whether your meta-certainty is right (the higher meta levels still exist, after all), but you can get more accurate over time.

I'm not sure how far you want me to go with defending measurement as a way of finding truth. If you have a problem with the philosophical position that certainty is probabilistic, or with scientific realism in general, then this might not be the best place to debate the issue. I would consider it off topic, as I just accepted them as the premises for this post; sorry if that was the problem you were trying to get at.
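A minimal simulation sketch of the calibration procedure described above; the dice setup, batch size, and tolerance are all my own illustrative choices, not anything from the thread:

```python
import random

random.seed(0)
CLAIMED_P = 1 / 6  # first-order prediction: "a five has a 1 in 6 chance"

# First-order calibration: roll many dice and compare the observed
# frequency of fives to the claimed probability.
rolls = 100_000
observed_p = sum(random.randint(1, 6) == 5 for _ in range(rolls)) / rolls

# Meta-certainty: across many independent batches, how often does the
# observed frequency land within a tolerance of the claimed probability?
# (Both the batch size and the tolerance are arbitrary choices here.)
def batch_matches(batch_size=10_000, tolerance=0.01):
    hits = sum(random.randint(1, 6) == 5 for _ in range(batch_size))
    return abs(hits / batch_size - CLAIMED_P) <= tolerance

batches = 200
meta_score = sum(batch_matches() for _ in range(batches)) / batches

print(f"observed p = {observed_p:.4f} (claimed {CLAIMED_P:.4f})")
print(f"fraction of batches within tolerance: {meta_score:.2%}")
```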

Comment by Bob Jacobs on [deleted post] 2020-07-06T12:08:07.455Z

Your degree of certainty about your degree of certainty. That's why it's called meta-certainty.

Comment by B Jacobs (Bob Jacobs) on Bob Jacobs's Shortform · 2020-06-23T11:13:41.875Z · LW · GW

I was writing a post about how you can get more fuzzies (= personal happiness) out of your altruism, but decided that it would be better as a shortform. I know the general advice is to purchase your fuzzies and utilons separately, but if you're going to do altruism anyway, and there are ways to increase your happiness in doing so without sacrificing altruistic output, then I would argue you should try to increase that happiness. After all, if altruism makes you miserable you're less likely to do it in the future, and if it makes you happy you will be more likely to do it in the future (and personal happiness is obviously good in general).

The most obvious way to do it is with conditioning, e.g. giving yourself a cookie or doing a hand-pump motion every time you donate. Since there's already a boatload of stuff written about conditioning, I won't expand on it further. I then wanted to adapt the tips from Lukeprog's The Science of Winning at Life to this particular topic, but I don't really have anything to add, so you can probably just read it and apply it to doing altruism.

The only purely original thing I wanted to advise is to diversify your altruistic output. I found out there have already been defenses made in favor of this concept, but I would like to give additional arguments. The primary one is that it will keep you personally emotionally engaged with different parts of the world. When you invest something (e.g. time/money) into a cause, you become more emotionally attached to said cause. So someone who only donates to malaria bednets will (on average) be less emotionally invested in deworming, even though these are both equally important projects. While I know on an intellectual level that donating 50 dollars to malaria bednets is better than donating 25 dollars, both will emotionally feel like a small drop in the ocean. When advancements in the cause get made, I get to feel fuzzies that I contributed, but crucially these won't be twice as warm if I donated twice as much. But if I donate to separate causes (e.g. bednets and deworming), then for every advancement/milestone I will get to feel fuzzies from these two different causes (so twice as much).

This will lessen the chance of you becoming a victim of the bandwagon effect (of a particular cause) or falling victim to the sunk-cost fallacy (if a cause you thought was effective turns out not to be very effective after all). This will also keep your worldview broad, instead of either becoming depressed if your singular cause doesn't advance or becoming ignorant of the world at large. So if you do diversify, then every victory in the other causes creates more happiness for you, allowing you to align yourself much better with the world's needs.

Comment by B Jacobs (Bob Jacobs) on SlateStarCodex deleted because NYT wants to dox Scott · 2020-06-23T10:16:39.057Z · LW · GW

Not really. It's so strange that the US journalistic code of ethics has very strict rules about revealing information from anonymous sources, but doesn't seem to have any rules about revealing information from pseudonymous sources.

Comment by B Jacobs (Bob Jacobs) on Climate technology primer (1/3): basics · 2020-06-22T22:32:31.430Z · LW · GW

Just wanted to add a link to the newest carbon capture plant, which could suck out as much carbon dioxide as 40 million trees. Backed by Bill Gates, this plant can capture one ton of CO2 for less than $233.

https://www.youtube.com/watch?v=XHX9pmQ6m_s

Comment by B Jacobs (Bob Jacobs) on Bucky's Shortform · 2020-06-17T16:03:00.841Z · LW · GW

I think Natural Reasons by Susan Hurley made the same argument (I don't own a copy so I can't check).

Comment by B Jacobs (Bob Jacobs) on Bob Jacobs's Shortform · 2020-06-17T15:31:45.522Z · LW · GW

QALY is an imperfect metric because (among other things) an action that has an immediately apparent positive effect might have far-off negative effects. I might e.g. cure the disease of someone whose actions lead directly to World War 3. I could argue that we should use QALYs (or something similar to QALYs) as the standard metric for a country's success instead of GDP, but just like with GDP you are missing the far-future values.

One metric I could think of is that we calculate a country's increase in its citizens' immediately apparent QALYs, without pretending we can calculate all the ripple effects, and divide this number by the country's ecological footprint. But there are metrics for other far-off risks too, like nuclear weapon yield or the percentage of GDP spent on the development of autonomous weapons. I'm also not sure how good QALYs are at measuring mental health. Should things like leisure, social connections and inequality get their own metric? How do we balance them all?

I've tried to make some sort of justifiable metric for myself, but it's just too complex, time-consuming and above my capabilities. Anyone got a better system?
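For concreteness, here is a minimal sketch of the kind of composite metric this comment gestures at; every field name, weight, and normalization constant below is invented for illustration, not taken from any real dataset or established index:

```python
# Illustrative sketch of the composite country metric gestured at above.
# Every field name, weight and normalization constant is invented for
# illustration; nothing comes from a real dataset or established index.
from dataclasses import dataclass

@dataclass
class CountryYear:
    qaly_gain: float               # increase in immediately apparent QALYs
    ecological_footprint: float    # e.g. global hectares per capita
    nuclear_yield_mt: float        # deployed warhead yield, megatons
    autonomous_weapons_gdp: float  # share of GDP spent on autonomous weapons

def composite_score(c: CountryYear, risk_weight: float = 0.5) -> float:
    """QALY gain per unit of footprint, discounted by far-off-risk proxies."""
    base = c.qaly_gain / c.ecological_footprint
    # Arbitrary normalizations: 100 Mt and 1% of GDP each count as one risk unit.
    risk_penalty = 1 + risk_weight * (c.nuclear_yield_mt / 100
                                      + c.autonomous_weapons_gdp * 100)
    return base / risk_penalty

example = CountryYear(qaly_gain=120_000, ecological_footprint=4.5,
                      nuclear_yield_mt=0.0, autonomous_weapons_gdp=0.001)
print(composite_score(example))  # ~25397: higher is better under these assumptions
```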

Comment by B Jacobs (Bob Jacobs) on Bob Jacobs's Shortform · 2020-06-13T10:09:16.749Z · LW · GW

Neither theism nor atheism is a religion. See also this video (5:50).

Comment by B Jacobs (Bob Jacobs) on 2020 LessWrong Demographics Survey · 2020-06-12T13:48:26.072Z · LW · GW

I'm from Europe, so I find both the term and category of 'Hispanics' kinda dumb. I only put it there because the previous surveys did. I put Central and South America in parentheses, so I would choose Hispanic in your case, even though I agree that it's a messy category.

Comment by B Jacobs (Bob Jacobs) on 2020 LessWrong Demographics Survey · 2020-06-12T09:10:30.899Z · LW · GW

Positive and negative risk