Posts

tivelen's Shortform 2021-11-11T00:31:27.311Z

Comments

Comment by tivelen on Omicron Post #12 · 2022-01-06T16:20:13.451Z · LW · GW

A vaccination requirement could result in lower apparent effectiveness; so could risk compensation. To determine how much risk compensation occurred, we first have to determine how much the vaccination requirement alone lowered the apparent effectiveness. Without that analysis, concluding that risk compensation has a big enough effect to cause or contribute significantly to negative effectiveness is premature.

I am otherwise unsure of what you are trying to get at. The unvaccinated were prevented from doing a risky activity, and the vaccinated were allowed to do the activity (with a lower risk due to their status), yes.

Comment by tivelen on What are sane reasons that Covid data is treated as reliable? · 2022-01-06T15:56:20.165Z · LW · GW

I have a hypothesis that seems to fit the data. These numbers are not given out for the purpose of collecting data on vaccine side effects (that's what VAERS is for). They are intended to provide specialized medical care directed at those who have recently gotten vaccines.

Evidence:
One commenter reported calling a Walgreens number. If this is representative, these are local pharmacy/medical practice numbers that people are calling, not some national reporting service.

Reassurance is one of the jobs of anyone providing medical care. "Even though you aren't feeling well after the treatment, you have nothing to worry about; the treatment is safe." is exactly what I would want someone to say if there was nothing either of us could do to help matters, especially if I was worried enough to call. You are especially likely to say so if you personally believe the vaccine is safe (which is very likely for someone answering such a number). If I were simply recording side effects, I wouldn't bother with that.

If you already believe the side effect is caused by the vaccine and think it's a very big deal, and they offer that reassurance during the call, you will instead distrust them, and want to report their untrustworthiness to friends.

If you never call the number because you are not worried, or you do trust them, you have nothing notable to report. This would explain why every report looks like a reassurance that fell flat. Your sample is strongly biased towards looking exactly that way, regardless of how common side effects or the "there are no side effects" line actually are.

And all that assumes that this game of telephone, chaining between the medical establishment, the people taking the calls, your friends reporting the call, and then your fuzzy recollection, didn't distort any of the data.

Currently, this "explains" your data for me. As in, I am no longer confused about your reports about your friends; I think I understand what happened. There is no rejection of data collection involved, at least not in these calls.

Do you doubt this hypothesis? If so, what evidence could you provide against it? What evidence would we need to collect to figure out whether the hypothesis is true?

I would expect that if one called such a number, one could confirm that the other person is doing no data collection about the likelihood of side effects, that the line in context is intended for reassurance if it comes up, and the entire call will otherwise be completely in line with providing post-vaccine medical care. Averaging across multiple calls, of course.

If I'm wrong, I would expect that getting a full description of an entire call would show that the line in question is used as a shutdown, side effects are not being recorded (but they are supposed to be recorded every time according to the rules of the job), there is no reasonable medical triage going on, and the numbers in question are intended purely to advocate for vaccine safety. Also averaging across multiple calls.


Comment by tivelen on Omicron Post #12 · 2022-01-05T16:21:02.516Z · LW · GW

Suppose 50% of vaccinated people would attend this event, and so would 50% of unvaccinated people, after considering the risks (ergo, there is no risk compensation). However, only vaccinated people are allowed to go to the event. Then the vaccinated people could have increased rates of Covid compared to unvaccinated people because of being more likely to attend superspreader events, even though they did not increase their level of risk compared to the unvaccinated population.

Whether this is the actual reason for the apparent negative effectiveness would depend on the actual percentages, and how common/dangerous superspreader events really are.
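Here is a minimal sketch of that selection effect with made-up numbers; the attendance rate, baseline risk, event risk, and true efficacy below are all assumptions for illustration, not estimates from the post:

```python
# Toy model: apparent vaccine effectiveness when only the vaccinated
# may attend a superspreader event. All parameters are assumptions.

base_risk = 0.01         # per-period infection risk outside events
event_risk = 0.10        # extra infection risk from attending the event
true_efficacy = 0.5      # the vaccine genuinely halves risk in any setting
attend_if_allowed = 0.5  # fraction of either group that would attend

# Only the vaccinated are admitted, so only they take on event_risk.
vax_rate = (1 - true_efficacy) * (
    attend_if_allowed * (base_risk + event_risk)
    + (1 - attend_if_allowed) * base_risk
)
unvax_rate = base_risk  # barred from the event entirely

apparent_effectiveness = 1 - vax_rate / unvax_rate
print(f"vaccinated infection rate:   {vax_rate:.4f}")    # 0.0300
print(f"unvaccinated infection rate: {unvax_rate:.4f}")  # 0.0100
print(f"apparent effectiveness:      {apparent_effectiveness:+.0%}")  # -200%
```

With these numbers the vaccine halves risk in every setting, yet the vaccinated group gets infected at triple the unvaccinated rate, so a naive comparison reports -200% effectiveness with zero risk compensation.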

Comment by tivelen on What are sane reasons that Covid data is treated as reliable? · 2022-01-02T01:52:14.177Z · LW · GW

I searched the CDC's Vaccine Adverse Event Reporting System (VAERS) and there are 474 reported cases of abnormal blood pressure following COVID-19 vaccination. Looking further through the Google results, I found a study (n = 113) indicating an increased risk of high blood pressure after vaccination, especially after previous infection.

Plainly, not everyone in the healthcare system is on the same page about side effects. I'd err on the side of the Walgreens person you talked to being more accurate, given that high blood pressure is a known side effect. Not known by that Nebraska Medicine doctor, apparently.

Comment by tivelen on What are sane reasons that Covid data is treated as reliable? · 2022-01-01T23:14:23.041Z · LW · GW

Who exactly told you that?

Comment by tivelen on What are sane reasons that Covid data is treated as reliable? · 2022-01-01T21:56:25.958Z · LW · GW

I'm wondering what the details of your friends' reporting attempts are. Who exactly did they talk to? VAERS is the official U.S. reporting system; what were their experiences with it? If there is an underreporting problem, we need as many specifics as we can get to combat it. Given that some vaccines do have well-known side effects among certain demographics, lots of people have been able to report their side effects successfully. We would need to figure out why your friend group has been far less successful in order to correct the issue.

Without an explicit probability calculation, how exactly are we supposed to determine what the levels of side effects in reality are, vs what the medical data that has been collected and reported suggests, vs what the average person thinks is true? Perhaps all are biased and/or untrustworthy. I'm not sure where we can go from there. Has personal testimony from our own social groups become the best we can do?

Comment by tivelen on A Defense of Functional Decision Theory · 2021-12-30T23:37:09.809Z · LW · GW

What does it mean to Left-box, exactly? As in, under what specific scenarios are you making a choice between boxes, and choosing the Left box?

Comment by tivelen on COVID Skepticism Isn't About Science · 2021-12-29T22:28:13.450Z · LW · GW

If you compare deaths to harms, you can end up scared of vaccines or Covid, depending on which you compare. If no one died of a vaccine in your group but one or two people were hurt by Covid, you will be scared of Covid. The question is, where does the framing come from? If no one died of Covid or a vaccine in your group (which seems to be the most likely case for a given group), which do you become scared of, and why?

Comment by tivelen on What is a probabilistic physical theory? · 2021-12-26T14:44:56.101Z · LW · GW

Perhaps such probabilities are based on intuition, and happen to be roughly accurate because the intuition has formed as a causal result of factors influencing the event? In order to be explicitly justified, one would need an explicit justification of intuition, or at least intuition within the field of knowledge in question.

I would say that such intuitions in many fields are too error-prone to justify any kind of accurate probability assessment. My personal answer then would be to discard probability assessments that cannot be justified, unless you have sufficient trust in your intuition about the statement in question.

What is your thinking on this prong of the dilemma (retracting your assessment of reasonableness on these probability assessments for which you have no justification)?

Comment by tivelen on What is a probabilistic physical theory? · 2021-12-26T03:34:49.218Z · LW · GW

My approach was not helpful at all, which I can clearly see now. I'll take another stab at your question.

You think it is reasonable to assign probabilities, but you also cannot explain how you do so or justify it. You are looking for such an explanation or justification, so that your assessment of reasonableness is backed by actual reason.

Are you unable to justify any probability assessments at all? Or is there some specific subset that you're having trouble with? Or have I failed to understand your question properly?

Comment by tivelen on What is a probabilistic physical theory? · 2021-12-25T20:01:26.215Z · LW · GW

Suppose an answer appeared here, and when you read it, you were completely satisfied by it. It answered your question perfectly. How would this world differ from one in which no answer remotely satisfied you? Would you expect to have more accurate beliefs, or to achieve your goals more effectively?

If not, to the best of your knowledge, why have you decided to ask the question in the first place?

Comment by tivelen on Where do (did?) stable, cooperative institutions come from? · 2021-12-17T23:04:54.404Z · LW · GW

After 1960, the upper classes retained most of them, but the working classes experienced major declines. These declines were societal in extent; no blame assigned, it is simply what happened.

Why that happened seems to be the key to reversing it, though. If the four virtues are needed to get things back together, but they can fade from society for reasons unknown, trying to get them back is like bailing water from a sinking ship.

Comment by tivelen on tivelen's Shortform · 2021-12-16T22:49:03.288Z · LW · GW

Human sib-testing seems like it would be useful, for one thing. There was a post here about cloning great people from the past. We will be able to do that in the future if most moderately-well-off people keep pre-emptive copies.

In theory, this would have the same use cases as typical cloning, with an upfront cost and time delay. The main benefit it has over current cloning tech is that it avoids the health issues for the clones, which currently make it unviable.

We could clone people successfully with no further advances in science and no unusual costs. If true, the ethical issues with cloning are no longer theoretical. This seems like a big deal, but maybe cloning really isn't much of an ethical issue at all, and few people will be interested.

Comment by tivelen on Housing Markets, Satisficers, and One-Track Goodhart · 2021-12-16T22:20:03.271Z · LW · GW

To me the question is this: given that people like communities and presumably would be happy to pay money for them, why isn't this currently a factor in the housing market?


I'm not sure what you're getting at here. Could you describe how the housing market would be different if this was currently a factor?

Comment by tivelen on tivelen's Shortform · 2021-12-16T21:18:04.206Z · LW · GW

As of now, we cannot unfreeze people who have been cryogenically frozen and successfully revive them. However, we can freeze 5-day-old fertilized eggs and revive them successfully years later. When exactly does an embryo become unrevivable?

Identical twins split at around one week after fertilization, so if it were possible to revive embryos frozen past that point, we could freeze one twin and let the other gestate, effectively cloning the gestated twin whenever desired. Since we can artificially induce twinning, we could give every newly born person the ability to be cloned, seemingly with none of the downsides of current methods of cloning, although with the substantial overhead of IVF treatment.

Is this currently possible under current scientific understanding? Is it more ethical than other methods of cloning? What ethical issues remain? Would anyone even want to do this if it were legally available?

Comment by tivelen on Privacy and Manipulation · 2021-12-07T03:02:54.000Z · LW · GW

In what way does this post do those bad things you mentioned? There is no mention of breaking innocent secrets, or secrets that would cause unjust ostracization, only patterns of actually harmful behavior.

If this post had been made in confidence to you, would you tell others about it anyway?

Comment by tivelen on [deleted post] 2021-11-25T02:51:34.082Z

Any pattern identified by induction either continues to hold, in which case it is fine to believe it, or it stops holding, in which case it must be adjusted. A generalization is a form of induction, and so acts the same. Could you provide an example of induction leading down a garden path?

Comment by tivelen on [deleted post] 2021-11-22T22:59:58.150Z

Knowing that the sun will come up in the morning is knowledge, and a success of induction. You do not even need to know that the Earth orbits the sun to have that knowledge. There is more to know about the sun, but that is yet more success of induction, and does not erase the previous success as if it were worse than knowing nothing.

An observed pattern in reality works so long as reality is observed to obey the pattern. If the pattern breaks, the previous inductive hypothesis is adjusted. "The sun will rise in the morning" is an excellent inductive prediction that holds to this day, and allows people to live successfully.

Tomorrow the sun could simply not rise in the morning, and we'd find some new pattern about the sun. That wouldn't mean our old pattern was a hunch or pretended knowledge.

Studying neutrinos improves predictions, as does studying the growth cycles of plants. But there is no "global framework", only more data and the abstractions on the data. If it was hunches and pretended knowledge back then, it still is now, and if you are worried about this, I don't know any solution. I don't even see a problem. We will continue learning more, forever, or until we run out of new data, whichever comes first.

Comment by tivelen on What Would You Do Without Morality? · 2021-11-22T02:20:25.239Z · LW · GW

This is something I've thought about recently. Even if you cannot identify your goals, you still have to make choices. The difficult part is in determining the distribution of possible M. In the end, I think the best I've been able to do is to follow convergent instrumental goals that will maximize the probability of fulfilling any goal, regardless of the actual distribution of goals. It is necessary to let go of any ego as well, since you cannot care about yourself more than another person if you don't care about anything, now can you?

Comment by tivelen on Competence/Confidence · 2021-11-22T02:11:40.495Z · LW · GW

Interesting; that was something I considered, but didn't think was included in the idea of confidence. I have experienced that before. The stakes of a situation also seem like an objective fact, like competence. Perhaps the subjective evaluations of stakes and competence are entangled in the feeling of confidence. Maybe it has something to do with low variance of outcomes? If you have done something a lot, or if it doesn't really matter, then there isn't anything to worry about, because nothing that matters is up for grabs in the situation.

Comment by tivelen on Competence/Confidence · 2021-11-21T22:42:42.300Z · LW · GW

In the graphs, is "confidence" referring to "confidence in my ability to improve", then? And so we are graphing competence vs. ability to improve competence?

Otherwise, if I'm trying to place myself on one of these graphs, I'm simply unable to do anything but follow the dotted line. There is no "felt sense of confidence" that I can identify in myself that doesn't originate in "I am competent at this".

Comment by tivelen on [deleted post] 2021-11-21T22:29:05.880Z

Knowledge is initially local. Induction works just fine without a global framework. People learn what works for them, and do that. Once the whole globe becomes interconnected, we each have more data to work with, but still most of it is irrelevant to any particular person's purposes. We cannot even physically hold a small fraction of the world's knowledge in our head, nor would we have any reason to.

Differences cannot be "settled" by words, only revealed and negotiated around. We have different knowledge because we have different predispositions, and different experiences that we have learned from, and have created different models as a result. We can create new experiences by talking about our experiences, but we cannot truly impart our experiences as we have had them by doing so.

It's our differences that make humanity more than just 8 billion human clones, and give it its distinct shape. Each difference, each experience, adds to humanity. What would humanity be if everyone agreed on everything, had all the same experiences? An ant colony, with no queen? The equivalent of one human brain, mass-produced in the billions?

Most of us wish humanity took a different, more pleasing shape, with fewer sharp edges and harsh colors. We try to mold it to our own tastes, to remove conflict and suffering and ignorance and disability. We wouldn't be human if we didn't try. But no human being could ever succeed at it. Only humanity possibly could, warts and all.

Comment by tivelen on Competence/Confidence · 2021-11-21T03:03:53.457Z · LW · GW

How is confidence different from the belief you have in your own competence? Your self-reported confidence and competence should always be the same.

Is there something I'm missing, some way that confidence is distinct from belief in competence?

Comment by tivelen on Taking a simplified model · 2021-11-17T01:20:39.205Z · LW · GW

What is the mechanism, exactly? How do things unfold differently in high school vs. college with the laptop if someone attempts to steal it?

Comment by tivelen on Taking a simplified model · 2021-11-17T00:14:49.376Z · LW · GW

Do you have any examples in mind?

Comment by tivelen on An Emergency Fund for Effective Altruists (second version) · 2021-11-16T02:35:18.067Z · LW · GW

If an altruist falls on hard times, they can ask other altruists for help, and those altruists can decide to divert their charitable donations if they consider helping the altruist worth more than the donations. If the altruists are donating to the same charities, it is very likely that restoring the in-need altruist's ability to donate will more than pay for the diverted donations.

If charitable donations cannot be faked, and an altruist's report of hard times preventing their charity can be trusted, then this will work to provide a financial buffer based purely on mutual interest.
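A toy check of that mutual-interest condition, with assumed numbers; the donation size, support cost, and remaining career length below are invented for illustration:

```python
# Toy check of the mutual-interest condition. All numbers are assumptions.

annual_donation = 5_000   # what the altruist donates per year when employed
support_needed = 15_000   # cost of carrying them through six months of hard times
years_restored = 10       # further years of donating once back on their feet

future_donations = annual_donation * years_restored  # 50_000
# Diverting 15k of donations now buys 50k of donations later,
# so helping is worthwhile on mutual interest alone.
assert future_donations > support_needed
print(f"diverted: ${support_needed:,}, restored: ${future_donations:,}")
```

The condition fails exactly when the support cost exceeds the donations it restores, which is the permanently-knocked-out case discussed below.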

Only if most altruists in the network fall on hard times does this fail, as there aren't enough remaining charitable donations to redistribute. A global network of diversely employed altruists would minimize this risk.

Cases where an altruist is permanently knocked out of income (and therefore donation) would lack mutual interest. There would need to be a formal agreement to divert some charity for life to help them out, and this would most likely be separate from the prior network of mutual aid, and count as insurance.

Comment by tivelen on Improving on the Karma System · 2021-11-16T01:34:26.745Z · LW · GW

I appreciate the benefits of the karma system as a whole (sorting, hiding, and recommending comments based on perceived quality, as voted on by users and weighted by their own karma), but what are the benefits of specifically having the exact karma of comments be visible to anyone who reads them?

Some people in this thread have mentioned that they like that karma chugs along in the background: would it be even better if it were completely in the background, and stopped being an "Internet points" sort of thing like on all other social media? We are not immune to the effects of such things on rational thinking.

Sometimes in a discussion in the comments, one party will be getting low karma on their posts and the other high karma, and once you notice that, you'll be subject to increased bias when reading them. Unless we're explicitly trying to bias ourselves towards posts others have upvoted, this seems to be operating against rationality.

Comments seem far more useful than karma in helping writers make good posts. The "score" aspect of karma adds distracting social signaling beyond what is necessary to keep posts prioritized properly. If I got X karma instead of Y karma for a post, it would tell me nothing about what I got right or wrong, and therefore wouldn't help me make better posts in the future. It would only make me compare myself to everyone else and let my biases construct reasoning for the different scores.

A sort of "Popular Comment" badge could still automatically be applied to high-karma comments, if indicating that is considered valuable, but I'm not sure that it would be.

TL;DR: Hiding the explicit karma totals of comments would keep all the benefits of karma for the health of the site, reduce cognitive load on readers and writers, and reduce the impact of groupthink, with no apparent downsides. Are there any benefits to seeing such totals that I've overlooked?

Comment by tivelen on A Defense of Functional Decision Theory · 2021-11-15T22:57:35.622Z · LW · GW

by running a simulation of you and seeing what that simulation did.

A simulation of your choice "upon seeing a bomb in the Left box under this scenario"? In that case, the choice to always take the Right box "upon seeing a bomb in the Left box under this scenario" is correct, and what any of the decision theories would recommend. Being in such a situation does necessitate the failure of the predictor, which means you are in a very improbable world, but that is not relevant to your decision in the world you happen to be in (simulated or not).

Or: A simulation of your choice in some different scenario (e.g. not seeing the contents of the boxes)? In that simulation, you would choose some box, but regardless of what that decision would happen to be, you are free to pick the Right box in this scenario, because it is a different scenario. Perhaps you picked Left in the alternative scenario, perhaps the predictor failed; neither is relevant here.

Why would any decision theory ever choose "Left" in this scenario?

Comment by tivelen on tivelen's Shortform · 2021-11-12T00:24:49.137Z · LW · GW

Such a system doesn't prescribe which action from that set, but in order for it to contain supererogatory actions, it has to say that some are more "morally virtuous" than others, even in that narrowed set. These are not prescriptive moral claims, though. Even though you follow this moral system, a statement "X is more morally virtuous but not prescribed" coming from this moral system is not relevant to you. The system might as well say "X is more fribble". You won't care either way, unless the moral system also prescribes X, in which case X isn't supererogatory.

Comment by tivelen on tivelen's Shortform · 2021-11-11T01:38:03.598Z · LW · GW

If I am not obliged to do something, then why ought I do it, exactly? If it's morally optimal, then how could I justify not doing it?

Comment by tivelen on tivelen's Shortform · 2021-11-11T00:31:27.590Z · LW · GW

Supererogatory morality has never made sense to me previously. Obviously, either doing the thing is optimally moral, in which case you ought to do it, or it isn't, in which case you should instead do the optimally moral thing. Surely you are morally blameworthy for explicitly choosing not to do good regardless. You cannot simply buy a video game instead of mosquito nets because the latter is "optional", right?

I read about slack recently. I nodded and made affirmative noises in my head, excited to have learned a new concept that surely had use in the pursuit of rationality. Obviously we cannot be at 100% at all times, for all these good reasons and in all these good cases! I then clicked off and found another cool concept on LessWrong.

I then randomly stumbled upon an article that offhandedly made a supererogatory moral claim. Something clicked in my brain and I thought, "That's just slack applied to morality, isn't it?" Enthralled by the insight, I decided this was as good an opportunity as any to make my first Shortform. I had failed to think deeply enough about slack to actually integrate it into my beliefs. This was something to work on in the future to up my rationalist game, but I also get to pat myself on the back for realizing it.

Isn't my acceptance of slack still in direct conflict with my current non-acceptance of supererogatory morality? And wasn't I just about to conclude without actually reconciling the two positions?

Oh. Looks like I still have some actual work ahead of me, and some more learning to do.

Comment by tivelen on Erase button · 2021-11-10T02:17:15.906Z · LW · GW

The only difference between this and current methods of painless and quick suicide is how "easy" it is for such an intention and understanding to turn into an actual case of non-existence.

Building the rooms everywhere and recommending their use to anyone with such an intention ("providing" them) makes suicide maximally "easy" in this sense. On a surface level, this increases freedom, and allows people to better achieve their current goals.

But what causes such grounded intentions? Does providing such rooms make such conclusions easier to come to? If someone says they are analyzing the consequences and might intend to kill themselves soon, what do we do? Currently, as a society, we force people to stay alive, tell them how important their life is, how their family would suffer, that suicide is a sin, and so on, and we do this to everyone who is part of society.

None of these classic generic arguments will make sense anymore. As soon as you acknowledge that some people ought to push the button, and that anyone might need to consider such a thing at any time, you have to explain specifically why this particular person shouldn't right now, if you want to reduce their suicidal intentions. The fact that someone considering suicide happens to think of their family as a reason against it is because of the universal societal meme, not because of its status as a rational reason (which it may very well be).

We can designate certain groups (e.g. the terminally ill) as special and restrict the rooms to them, creating new memes for everyone else based on their health, but the old memes remain broken, and the new ones may not be as strong.

I suspect that the main impact of providing the rooms will be socially encouraging suicide, regardless of what else we try to do, even if we tell ourselves we are only providing a choice for those who want it.

Comment by tivelen on Curated conversations with brilliant rationalists · 2021-11-09T13:59:24.142Z · LW · GW

I tested Otter.ai for free on the first forty minutes of one podcast (Education and Charity with Uri Bram); listening at 2x speed allowed me to produce a decent transcript in about the runtime of the audio, with a few pauses for correction. The main time sinks were separating the speakers and correcting proper nouns, both of which appear to be features of the paid $8.33/month version (which, used to its full monthly allowance, works out to about $0.001 per minute). If those two time sinks are in fact fixed by the paid version, I could easily imagine creating a decent, accurate transcript in half the runtime of the podcast. Someone who can type faster than me could possibly cut the time down even more.

If there is sufficient real demand for particular transcripts (or all of them), I would be willing to do this transcription myself at no cost, though I would be best convinced of the need via some kind of payment for my work if I'm going to do a lot of them; I don't want to waste my effort on something people merely say they would like.

Comment by tivelen on 2020 PhilPapers Survey Results · 2021-11-06T22:36:05.150Z · LW · GW

The most likely scenario for human-AGI contact is some group of humans creating an AGI themselves, in which case all we need to do is confirm its general intelligence to verify that it is an AGI. If we have no information about a general intelligence's origins or its implementation details, I doubt we could ever empirically determine that it is artificial (and therefore an AGI). We could empirically determine that a general intelligence knows the correct answer to every question we ask (great knowledge), can do anything we ask it to (great power), and does do everything we want it to do (great benevolence), but it could easily have constraints on its knowledge and abilities that we as humans cannot test.

I will grant you this: just as sufficiently advanced technology would be indistinguishable from magic, a sufficiently advanced AGI would be indistinguishable from a god. "There exists some entity that is omnipotent, omniscient, and omnibenevolent" is not well-defined enough to be truth-apt, however, with no empirical consequences distinguishing its truth from its falsehood.

Comment by tivelen on 2020 PhilPapers Survey Results · 2021-11-06T01:08:04.126Z · LW · GW

Rationalists may conceive of an AGI with great power, knowledge, and benevolence, and even believe that such a thing could exist in the future, but they do not currently believe it exists, nor that it would be maximal in any of those traits. If it has those traits to some degree, such a fact would need to be determined empirically based on the apparent actions of this AGI, and only then believed.

Such a being might come to be worshipped by rationalists, as they convert to AGI-theism. However, AGI-atheism is the obviously correct answer for the time being, for the same reason monotheistic-atheism is.

Comment by tivelen on A system of infinite ethics · 2021-11-05T00:28:48.998Z · LW · GW

Your system may not worry about average life satisfaction, but it does seem to worry about expected life satisfaction, as far as I can tell. How can you define expected life satisfaction in a universe with infinitely-many agents of varying life-satisfaction? Specifically, given a description of such a universe (in whatever form you'd like, as long as it is general enough to capture any universe we may wish to consider), how would you go about actually doing the computation?

Alternatively, how do you think that computing "expected life satisfaction" can avoid the acknowledged problems of computing "average life satisfaction", in general terms?
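To make the difficulty concrete, here is the standard order-dependence problem, stated in LaTeX; the example universe (half the agents at satisfaction +1, half at -1) is mine, not from the post:

```latex
% Infinitely many agents, each with satisfaction s_n \in \{+1, -1\},
% with both values occurring infinitely often. Any "expected" or
% "average" satisfaction built from partial averages,
\bar{s} \;=\; \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} s_{\sigma(n)},
% depends on the enumeration \sigma of the agents:
% ordering them (+1, -1, +1, -1, \dots) gives \bar{s} = 0, while
% ordering them (+1, +1, -1, +1, +1, -1, \dots) gives \bar{s} = 1/3.
```

A description of the universe supplies the agents but not a privileged enumeration or measure over them, so any answer would have to specify where that extra structure comes from.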