I am quite glad to see that Lighthaven is on a path to financial sustainability, as I sometimes attend events there, and I am very much not looking to be subsidized by anyone's charity. One clarifying question. The rough Lighthaven budget above has a line for "interest". Am I correct in assuming that that is the entire mortgage payment, both interest and principal, not just the interest? In other words, by successfully making the $1M payment each year, the amount you owe the bank is going down each year and will eventually hit zero?
"Agnostic" doesn't necessarily mean "unknowable and not subject to testing". Much more often it has the weaker meaning "not currently known". There is a house being built across the street. Is there a work van parked in front of it right now? I don't know. This is certainly knowable and subject to testing - I could get up, walk over to a window in the front of the house, and look. But I don't care enough to do that, so I continue to now know if there is a work van parked in front of the house across the street. I am agnostic about the existence of such a work van.
For people who do test prep seriously (I used to be a full time tutor), this has been known for decades. One of the standard things I used to tell every student was if you have no idea what the answer is, guess B, because B is statistically most likely to be the correct answer. When I was in 10th grade (this was 2002), I didn't have anything to gain by doing well on the math state standardized test, so I tested the theory that B is most likely to be correct. 38% of the answers on that test were in fact B.
> This is pretty weird. As far as we know, humans don’t tend to prefer choices labeled B, so we’re not sure where this could have come from in the training data. As humans, it initially didn’t even occur to us to look for it!
Remember, LLMs aren't modeling how a human reading text would process the text. LLMs are trying to model the patterns in the texts that are in the training data itself. In this case, that means they are doing something closer to imitating test writers than test takers. And it is well known that humans, including those who write tests, are bad at being random.
> In the automatic response they told me that they expect to give me the decision before this deadline.
> Contrary to the promise, I don't get any response.
There is an obvious disconnect here. A statement that someone "expects" to do something is not a promise, especially not when made in an automated response. If Igor misread this as a promise, and given that he has not provided exact quotes of the other alleged promises, it seems quite plausible that nobody ever promised anything, and that Igor was imprudent in re-arranging his life based on a pending grant application. If I'm right about that, then Igor has defamed EAIF by accusing them of "lies".
Great post!
> a client can show me where they buried their dozen murder victims and I wouldn’t be allowed to tell a soul, even if an innocent person is sitting in prison for their crimes.
For any Alaskan serial murderers reading this, do note that this does not apply to you. Your Alaskan attorney can breach attorney-client privilege to prevent an innocent person from going to jail. See Rule 1.6(b)(1)(C).
That's it. You continue to refuse to engage with the argument that a norm against lawsuits is harmful. You presume that such a norm exists, to try to illegitimately shift the burden to me to show that it does not. Now you presume that lawsuits are more harmful than a norm against lawsuits, to again try to illegitimately shift the burden to me to show the reverse. Even if your position were sound, your argumentative tactics are dirty, and I will not continue to engage with them.
> I am talking about the community norm of not using lawsuits to settle arguments (and, more generally, disputes that are… just about words, let’s say). It’s not exclusively a property of small communities—that’s my point.
Then you misunderstood what I was trying to point at when I brought up the distinction about the scale of a community. I was trying to point at the fact that when lawsuits occur, there is likely already too much negative feeling between the parties for them to enjoy direct interactions even in a small group context, and if there isn't already, the lawsuit causes it. I was pointing to a fact about human psychology, which on a pragmatic level we need to arrange our social structures to deal with. I was not pointing to a norm about using libel lawsuits.
At this point, you've failed to engage with my point that having a norm against lawsuits is harmful, even though lawsuits themselves are also harmful. I'm guessing your case for having such a norm is that lawsuits are harmful, which is not something I dispute. Is there any more to say?
> But the fact that we’ve not deviated from the behavior the norm would mandate, is evidence of the norm’s effective existence.
Negligible evidence, especially in comparison to the lack of any past discussion of such a norm. Your argument here is so bad, and your choice of language so ambiguous, that I have to question whether you are even arguing in good faith.
Can you articulate what exactly the property of small communities is that we are talking about, and what its benefits are? I still am not forming a coherent picture of what the heck you are talking about because, again, the thing I was trying to point to in making this distinction is, I think, inherently a property of small groups.
Are you seriously now claiming that all of society has a norm against lawsuits? I think that is just obviously wrong, particularly for the US. And the misappropriation of the more traditional "arguments get arguments, not bullets" is just astoundingly oblivious. Lawsuits are a kind of argument! They are an example of the thing we are supposed to do instead of bullets!
No, I cannot empirically observe that the rationalist community has operated by such a norm. I can empirically observe that I know of no instance where one rationalist has actually filed a libel suit against another, but this is much more likely to be due to either (1) my ignorance of such a suit, or (2) the low rate of actually filing libel lawsuits in society at large combined with the small size of the rationalist community. I know of no instance of a rationalist going to space either, but I'm pretty sure we don't have a norm against it. I'd never heard anyone speak of such a norm until the NL drama. That is significant evidence that there is no such norm.
May I ask which city you live in?
I agree that these different sorts of communities exist along a continuum. What startles me is that you seem to think that the intimacy or something of the smaller community can and should be scaled to the larger sort of community. To my mind, it is inherently a property of the small size. Trying to scale it sets off loud alarm bells. I'm not sure to what extent I endorse this, but possibly one way of summarizing the problems of overly controlling organizations like cults is that they try to take the intimacy or something of a small community and scale it.
I also strongly disagree with your presumption that we are talking about "Going from having [the understanding that we do not use libel suits within the community], to not having it". I have never understood the rationalist community to have such a norm. From where I am sitting, Habryka is trying to create such a norm out of nothing, and I am not ok with that.
As I believe I have said already, I agree that libel suits, and lawsuits generally, can be damaging, and I certainly do not encourage anyone to use them. I'm just pointing out that having a norm against using lawsuits can be even more damaging.
A real court would apply complex rules of evidence, which sometimes involve balancing but often are more categorical. But yes, it's a different notion of public interest than whatever one rando thinks is public interest.
I agree that there is a significant difference between cases where the accused knows the identity of the accuser and cases where they do not, and we should split our analysis.
In cases where the accused does not know the identity of the accuser, I think the accusations would necessarily be so vague that I wouldn't update much on them, and I would hope other rationalists and EAs wouldn't either, but clearly there is a significant contingent of people in these communities who do not share my epistemic scruples. Given that, I don't know, seems a mess. But your rule that only the accused should share the identity of the accuser seems too absolute - surely accusers are sometimes in the wrong, and sometimes malicious, and in that case having their identities publicly known seems good. Yes that will result in some amount of social punishment, and if the accusations were false and malicious, then I think that is good.
The case where the accused does know the identity of the accuser is where my above logic about the accused appearing retaliatory would suggest it is better for a third party to name the accuser.
I think you are using an inapplicable definition of "community". Your example of a D&D group calls to mind a "community" in the sense of "a group of a single-digit number of people who are in the same room socially interacting on a recurring basis." In this sense of the word, neither EA nor rationality is a community. I agree that we should not expect Ben/Alice/Chloe to be in the same community with Kat/Emerson, for this narrow sense of community. And my assumption is that they weren't on the day before Ben made his post. And that is fine.
There is a broader sense of the word "community", which we might define as "an extended social network with shared identity and values", which does apply to EA and rationality. I don't see a reason why two people in a legal dispute shouldn't be able to remain in this sort of community.
Why do you think that third parties shouldn't name an accuser? If an accusation is being handled in the court of public opinion, presumably it is because the public has an interest in the truth of the matter, and therefore I would think that any member of the public who has relevant evidence ought to be able to present it. If the accusation depends on the credibility of the accuser, then the identity of the accuser seems like relevant evidence. If anything, I'd think the accused should be particularly hesitant to name the accuser, at least as a strategic matter, for fear of appearing retaliatory. Third parties, not being under that constraint, might be in a better position to name the accuser.
I find the general response to the threat of a libel suit to be deeply concerning. It is true that libel suits, and lawsuits generally, are expensive, time consuming, and generally unpleasant for everyone involved, including the victors. That is why I think NL ultimately made the right decision not to sue. That said, I also think that it is important not to use social pressure to discourage lawsuits. And I think we can all see this when we look at other communities from an outside perspective. When a community mistreats its members badly enough, it is important that the law be there as an escape hatch, and attempting to interfere with that by creating norms against lawsuits is therefore likely to be very harmful. The Amish famously will never seek recourse in the secular legal system, no matter how bad the wrong or what the circumstances are. Does anyone here admire this aspect of the Amish culture? Cults also famously use all kinds of pressure tactics to prevent members from seeking out the law. This is bad. We should not be like this. So when I see the way Habryka for example talks about the threat of a libel suit in this case, or Gwern, or a number of others, that sets off alarm bells for me. I don't think Habryka is a cult leader right now, but I do think he is veering uncomfortably in that direction and I hope he changes course.
I did not know this. How long has this been around?
Still strikes me as a really bad idea to ignore the norms that actual financial markets have developed over centuries of experience, but I am curious if this will actually solve the problem of judges biased by having a position in their own markets.
I knew they did not prohibit it, but I am surprised they are actively encouraging it. In any real-money market, doing anything analogous would almost certainly be grossly illegal. I have significant restrictions on my real-life trading, and I just work at a company that sells information about the market, but doesn't actually run it. I've found the practice of people betting in their own markets on Manifold to predictably result in unfair resolutions, and so I do judge people who do it, and I judge more harshly if they don't actively disclose the fact. I came to Manifold on the expectation that it was trying to be like a real-money prediction market, and just couldn't because of laws in the US. As I see them diverging more and more from the standards of real markets, I become more and more disappointed. But you do make a fair point that perhaps I should judge Manifold more than the market makers if they are actively encouraging such bad behavior.
I would were the judge not betting in the market. You really should be more upfront about that.
"anything related to", depending how it's interpreted, might be overly broad, but something like this seems like a necessary implication, yes. Is that a bad thing?
Oh, I agree that utilitarian considerations, particularly in the case of an existential threat, might warrant breaking a norm. I'm not saying Toner did anything wrong in any objective sense, I don't have a very strong view about that. I'm just trying to question Zvi's argument that Sam and OpenAI did something unusually bad in the way they responded to Toner's choice. It may be the case that Toner did the right and honorable thing given the position she was in and the information she had, and also that Sam and OpenAI did the normal and boring thing in response to that.
You do seem to be equivocating somewhat between board members (who individually have no formal authority in an organization) and the board itself (which has the ultimate authority in an organization). To say that a dissenting board member should resign before speaking out publicly is very different from saying that the board itself should not act when it (meaning the majority of its members) believes there is a problem. As I am reading the events here, Toner published her article before the board decided that there was something wrong and that action needed to be taken. I think everyone agrees that when the board concludes that something is wrong, it should act.
> requiring that those who most have the need and the ability to scrutinise the power of a corporation do so the least.
I have no idea how you got that from what I said. The view of governance I am presenting is that the board should scrutinize the corporation, but behind closed doors, not out in public. Again, I'm not entirely confident that I agree with this view, but I do think it is normal for people involved in governance and therefore doesn't indicate much about Altman or OpenAI one way or the other.
Interesting. I'm not sure exactly what you mean by "fiscal sponsor", and I don't really want to go down that road. My understanding of nonprofit governance norms is that if a board member has concerns about their organization (and they probably do - they have a shit ton more access and confidential information than most people, and no organization is perfect) then they can express those concerns privately to the rest of the board, to the executive director, to the staff. They are a board member, they have access to these people, and they can maintain a much better working relationship with these people and solve problems more effectively by addressing their concerns privately. If a board member thinks something is so dramatically wrong with their organization that they can't solve it privately, and the public needs to be alerted, my understanding of governance norms is that the board member should resign their board seat and then make their case publicly. Mostly this is a cached view, and I might not endorse it on deep reflection. But I think a lot of people involved in governance have this norm, so I don't think that Sam Altman's enforcement of this norm against Toner is particularly indicative of anything about Sam.
By members of the board of directors specifically?
> From my perspective, even rebuking Toner here is quite bad. It is completely inconsistent with the nonprofit’s mission to not allow debate and disagreement and criticism.
I can't imagine any board, for-profit or non-profit, tolerating one of its members criticizing its organization in public. Toner had a very privileged position, she got to be in one of the few rooms where discussion of AI safety matters most, where decisions that actually matter get made. She got to make her criticisms and cast her votes there, where it mattered. That is hardly "not allow[ing] debate and disagreement and criticism". The cost of participating in debate and disagreement and criticism inside the board room is that she gave up the right to criticize the organization anywhere else. That's a very good trade for her, if she could have kept to her end of the deal.
It did show up in the podcast, which I believe is just filtered by upvotes?
I don't think I see the problem. Chevron deference is, as you say, about whether courts defer to agencies' interpretations of statutes. It comes up when an agency thinks one interpretation is best, and a court thinks a different interpretation is the best reading of the statute, but that the agency's preferred interpretation is still a plausible reading of the statute. In that case, under Chevron, the court defers to the agency's interpretation. Do away with Chevron, and the court will follow what it thinks is the best reading of the statute. This is, I should note, the baseline of what courts usually do, and what they did before Chevron. Chevron is an anomaly.
In terms of implications, I think it is true that agencies will tend to interpret their mandates broadly, and so doing away with Chevron deference will, at the margin, reduce the scope of some agencies' powers. But I don't see how it could lead to the end of the administrative state as we know it. Agencies will still have jobs to do that are authorized by statute, and courts will still let agencies do those jobs.
So what does AI regulation look like? If it looks like Congress passing a new statute to either create a new agency or authorize an existing agency to regulate AI, then whether Chevron gets overturned seems irrelevant - Congress is quite capable of writing a statute that authorizes someone to regulate AI, with or without Chevron. If it looks like an existing agency reading an existing statute correctly to authorize it to regulate some aspect of AI, then again, that should work fine with or without Chevron. If, on the other hand, it looks like an existing agency over-reading an existing statute to claim authority it does not have to regulate AI, then (1) that seems horribly undemocratic, though if the fate of humanity is on the line then I guess that's ok, and (2) maybe the agency does it anyway, and it takes years to get fought out in court, and that buys us the time we need. But if the court ruling causes the agency to not try to regulate AI, or if the years long court fight doesn't buy enough time, we might actually have a problem here. I think this argument needs more details fleshed out. What particular agency do we think might over-read what particular statute to regulate AI? If we aren't already targeting a particular agency with arguments about a particular statute, and have a reasonable chance of getting them to regulate for AI safety rather than AI ethics, then worrying about the courts seems pointless.
In the models making the news and scaring people now, there aren't identified separate models for modeling the world and seeking the goal. It's all inscrutable model weights. Maybe if we understood those weights better we could separate them out. But maybe we couldn't. Maybe it's all a big jumble as actually implemented. That would make it incoherent to speak about the relative intelligence of the world model and the goal seeker. So how would this line of thinking apply to that?
For the less cryptographically inclined, or those predicting the failure of computing technology, there is always the old school method: write your prediction on a piece of paper, literally seal it in an envelope, and mail it to yourself. The postal marking they put over the stamp includes the date.
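For the cryptographically inclined route, a minimal sketch of the commit-then-reveal idea (not from the original comment; the prediction text and publishing channel are made up for illustration, using Python's standard hashlib and secrets modules):

```python
import hashlib
import secrets

# Add a random nonce so a short prediction can't simply be guessed and re-hashed.
nonce = secrets.token_hex(16)
prediction = f"nonce={nonce}; prediction=Candidate X wins the 2028 election."

# Publish this digest now, somewhere independently timestamped (a public post, an email).
digest = hashlib.sha256(prediction.encode("utf-8")).hexdigest()
print(digest)

# Later, reveal the full prediction string; anyone can re-hash it and check it matches.
```

The postal method trades this for simplicity: the sealed envelope plays the role of the unrevealed prediction, and the postmark plays the role of the timestamped hash.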
I think many people should be less afraid of lawsuits, though I'm not sure I'd say "almost everyone."
I wouldn't draw much from the infrequency of lawsuits being filed. Many disputes are resolved in the shadow of the legal system, without an actual court being involved. For example, I read a number of cases in law school where one person sued another after a car accident. Yet when I actually got into a car accident myself, no lawsuit was ever filed. I talked to my insurance company, the other driver presumably talked to their insurance company, the two companies talked to each other, money flowed around, things were paid for. Much more efficient than bringing everybody into a courtroom, empaneling a jury, and paying lawyers in fancy suits to make arguments. The insurance companies knew what the law was, knew who would have to pay money to whom, and so they were able to skip over the whole courtroom battle step, and go directly to the payment step. This is what usually happens when an area of law is mature - the potential parties, sometimes with good advice from their attorneys, reach similar conclusions about the likely outcome of a potential lawsuit, and this allows them to reach an agreement outside of court. Lawsuits are much more likely to happen when the law is more ambiguous, and therefore the parties can have significantly different estimations of the outcome of the suit. So the frequency of lawsuits is often a measure of how much disagreement there is about an area of law. Other times it reflects a requirement to actually go to court to do something (like debt collection or mortgage foreclosure). But I don't think it is a good measure of the likelihood of having to pay out money for some arguable violation of the law.
Also, many contracts contain arbitration clauses, which also prevent conflicts from making it into a courtroom.
The notion of lawyers being overly conservative I think is also an incomplete description of that dynamic. A good lawyer will tell you how much you can expect a potential lawsuit to cost, and therefore whether it is more or less than the expected benefit of the action. If your lawyer won't do this, you should fire them and hire someone else. As an illustration, think about universities violating the free speech and due process rights of their students, and getting sued for it. They do this because the cost of not doing it (in public relations, angry students/faculty/donors, Title IX lawsuits) is more than the cost of a potential constitutional lawsuit, and they know it. How do they know it? Because their lawyers told them so.
I think sometimes people don't want to take the advice of lawyers they perceive as overly conservative, even when they should. People trying to build something or make a deal will often get very excited about it, and only want to see the ways it can go well. Lawyers have seen, or at least studied, many past conflicts, and so they can often see more clearly what conflicts might arise as a result of some project, and advise clients on how to avoid them. That is often what clients pay lawyers for. But to the client, it can often feel like the lawyer putting an unnecessary damper on the shiny project they are excited about.
There is also the moral aspect. Laws often have a moral point behind them. Sometimes when people refrain from doing things to avoid being sued, they are refraining from doing immoral things. And sometimes when people disregard legal advice, do a thing, and get sued, they actually did an immoral thing. To take an example that I watched closely at the time, and that connects to one of Alyssa's examples, during the 2014-2015 school year Rolling Stone published an article, based on a single young woman's account, of gang rape being used as a form of fraternity initiation at UVA. Rolling Stone did not do the sort of fact checking that is standard in journalism. (If memory serves, the Columbia School of Journalism put out a report detailing the failures here.) Over the course of several months, the story fell apart; it turned out to be a complete fabrication. And Rolling Stone was sued, and had to pay out. I can imagine Rolling Stone's lawyers advising them not to publish that article without doing some more fact checking, and those lawyers would have been right on the law. But more fact checking also would have been the morally correct thing to do. Even in the case of abuse/rape, defamation law does have a moral point to make - you shouldn't make up stories about being abused/raped and present them as the truth.
Finally, as an ex-lawyer, I unreservedly endorse Alyssa's advice not to take on six figures of debt to go to law school without researching the job market.
When you steal a newspaper from a kiosk, you are taking paper and ink that do not belong to you. The newspaper is harmed because it now has less paper and ink. When you bypass a paywall, the newspaper still has all the same computers and servers that it had before, it hasn't lost any physical object.
When I hear the words "intelligence" and "wisdom", I think of things that are necessarily properties of individual humans, not groups of humans. Yet some of the specifics you list seem to be clearly about groups. So at the very least I would use a different word for that, though I'm not sure which one. I also suspect that work on optimizing group decision making will look rather different from work on optimizing individual decision making, possibly to the point that we should think of them as separate cause areas.
When I think about some of humanity's greatest advances in this area, I think of things like probability theory and causal inference and expected values - things that I associate with academic departments of mathematics and economics (and not philosophy). This makes me wonder how nascent this really is.
I find this position rather disturbing, especially coming from someone working at a university. I have spent the last sixish years working mostly with high school students, occasionally with university students, as a tutor and classroom teacher. I can think of many high school students who are more ready to make adult decisions than many adults I know, whose vulnerability comes primarily from the inferior status our society assigns them, rather than any inherent characteristic of youth.
As a legal matter (and I believe the law is correct here), your implication that someone acts in loco parentis with respect to college students is simply not correct (with the possible exception of the rare genius kid who attends college at an unusually young age). College students are full adults, both legally and morally, and should be treated as such. College graduates even more so. You have no right to impose a special concern on adults just because they are 18-30.
I think one of the particular strengths of the rationalist/EA community is that we are generally pretty good at treating young adults as full adults, and taking them and their ideas seriously.
> Given that you already have reductive explanations of A, B, C, you can infer that there is a probability of having reductive explanations of D and E in the future. Not a certainty, because induction doesn't work that way.
> So you haven't shown that intuition isn't needed to accept the validity of a reductive explanation.
So because something is based on induction and therefore probabilistic, it is somehow based on intuition? That is not how induction and probability theory work. Anyone with a physics education should know that. And if it were how that worked, then all of science would rely on intuition, and that is just crazy. You have devolved into utter absurdity. I am done with you.
> Shooting a civilian is murder, whether or not the action is correct.
Shooting a civilian is not murder if it is self-defense or defense of others, which I think is a very good approximation to the set of circumstances where shooting a civilian is the correct choice.
I don't think it's correct to call the bombing of cities in WW2 a war crime. Under the circumstances I think it was the correct choice. One of the key circumstances was the available targeting technology at the time - the human eye. They didn't have plane-based radar, much less GPS. They didn't have the capacity to target military production specifically, all they could do was target the cities where military production was occurring. The alternative was a greater risk of losing the war, and all of the evils that that entailed. So yes, bombing cities with civilians in them sucks, but it sucked less than the other options that were available at the time.
Same as you, physics degree. I'm curious why you picked now to bring that up. I don't think anything I've said particularly depends on it.
> Hair colour merely belongs to a subject... and that's not the usual meaning of "subjective". Experiences are only epistemically accessible by a subject... and that is the usual meaning of "subjective".
It may be more difficult to get evidence about another person's experiences than about their hair color, but there is no fundamental epistemic difference. You can in principle, and often in practice, learn about the experiences of other people.
> A lot attempts to, but often fails. Where it succeeds, it is because both speaker and hearer have had the same experience. Describing novel experiences is generally impossible: "you don't know", "you had to be there", and so on.
Taken literally, those kinds of statements are just false. Sometimes they come from people who just want to be overly dramatic. Sometimes they really mean "explaining it would take more time than I want to invest in this conversation." But they are never literally true statements about what a person can know or how they can know it.
> And my argument is, still, that intuition is always involved in accepting that some high level phenomenon is reductively explicable, because we never have fully detailed quark-level reductions.
Why on earth do you presume that we need to know how in order to know that? Of course we almost never have quark-level or even atom-level reductions. So what? Why on earth would that mean that we need intuition to accept that something can be explained in terms of known physics? We use induction just like we do on many other things in science - most stuff that people have tried to explain in terms of known physics has turned out to be explainable, therefor we infer that whatever phenomenon we are looking at is also explainable. There is no intuition involved in that reasoning, just classic textbook inductive reasoning.
> All you are doing there is contrasting naive, uninformed intuition with informed intuition.
How can intuition be more or less informed on something like experience? That doesn't even make sense to me.
> And if it were the case that 100% of scientists were qualiaphobes, you would be onto something ... but it isn't. Many scientists agree that we don't have a satisfactory reductive explanation of consciousness.
I agree that we don't have a satisfactory explanation of consciousness. As explained above, that does not justify taking seriously the position that there isn't one in terms of the already known laws of physics.
> And yet some things still can't be explained in terms of our current understanding. You are not advancing the argument at all.
This is not a point on which we disagree. The fact that we don't currently have an explanation for some things is not a reason for thinking there isn't one.
"Appear" is just an appeal to your own intuition.
No, it is an appeal to the inductive reasoning explained above.
> I am appealing to the arguments made by Chalmers and other qualiaphilic philosophers, as well as those by qualiaphilic scientists. You have not refuted any of them. You have so far only made a false claim that they don't exist.
You have yet to actually appeal to any such argument, or to even name a scientist who you think is "qualiaphilic". Present one, and we can talk about why it is wrong. As I have said before, the burden is on you.
> There is a difference. One is objective and describable, the other is subjective and ineffable.
Calling experience "subjective" and "ineffable" isn't doing any work for you - experiences are subjective only in the sense that hair color is subjective - mine might be different from yours - but there is an objective truth about my hair color and about your hair color. And yes, experiences are effable, a lot of language is for describing experiences. You seem to be using the words to do nothing more than invoke an unjustified feeling of mysteriousness, and that isn't an argument.
> I don't see how that's a falsification of reductionism.
I'm not sure if reductionism is exactly the right word, I don't find it useful to think in the vocabulary of philosophers. But your basic argument is that because you can't intuitively see how human experience can be explained in terms of the laws of physics, therefore we should take seriously the idea that it can't be. That would only make sense in a world where intuition was a good guide to what is explainable in terms of the laws of physics, which is the hypothetical falsification I presented. My point is that intuition is a terrible guide to what is explainable in terms of the laws of physics, as anyone who has spent any time studying those laws knows.
> You keep talking about understanding phenomena in terms of laws alone. As I tried to emphasize last time, that doesn't work, because you also need facts about how things are configured, about starting states. And then you can intuitively see how reductive explanations work... where they work. The basis of reductionism, as a general claim, is the success of specific instances, not an act of faith.
You are the one who seems to be going on unjustified faith in your intuitions.
I never rejected considering starting states of a system. Where I disagree, as I keep trying to point out, is with "then you can intuitively see how reductive explanations work" - NO YOU CAN'T! Even when you understand the laws and the starting states, it is still usually very very unintuitive how the reductive explanations work. It often takes years of study, if you can get there at all. This is what scientists spend their lives on. Do you not see how incredibly arrogant it is for you to think that you can just intuit it?
> And another was overturning the laws of physics of the time. If you retroactively apply the rule that "any phenomenon which appears inexplicable in terms of the currently known physics must be rejected out of hand", you don't get progress in physics.
I am not suggesting such a rule. The point I was making was that the trained physicists of the time couldn't intuitively see how the laws of physics that they knew could explain Brownian motion. If they had done what you want to do with qualia, to conclude that it couldn't be explained in terms of the known laws of physics, they would have been wrong. Not seeing how a phenomenon can be explained is not a reason for thinking it can't be, most explanations are not apparent. We don't have a reason for thinking a phenomenon can't be explained in terms of the laws of physics until someone points out "the laws of physics say that this thing can't exist, and here is my reasoning...".
I am simply pointing out that the phenomenon in question, human experience, does NOT appear inexplicable in terms of the currently known laws of physics. You seem to take as a given that it is, without presenting any argument that it is, and that is what I have been objecting to this whole time.
Nobody is denying consciousness. I'm just denying that there is any serious argument for consciousness not being explainable in terms of the laws of physics that are already well known and accepted.
There is no meaningful difference between a table and a qualia here, so yes, what Chalmers is doing is exactly like that.
Is the presumption falsifiable? In principle, yes. But consider what that falsification would look like. It would look like trained physicists (at the very least, possibly many more people) being able to look at a new phenomenon and immediately intuitively see how it falls out of the laws of physics. And we know that they can't do that. One of Einstein's greatest achievements was explaining Brownian motion, which he did purely in terms of the laws of physics that were already well known and accepted at the time. It was a great achievement because none of the other great physicists of the time could see how the observed phenomenon could be explained. This sort of thing happens repeatedly in the history of physics. So yes, in principle, the presumption is falsifiable, just as the presumptions that pigs don't fly and that the moon isn't made of cheese are in principle falsifiable. For all practical purposes though, it is still correct to laugh at people who reject the presumption.
It's a game two can play at rhetorically, but only one side of this game has ever landed on the moon, improved the length and quality of the human lifespan, etc.
The alternative is a presumption that everything we observe in the universe is explainable by the laws of physics as we know them, until someone presents a logical argument, starting from the laws of physics as we know them, not relying on intuition, and leading to the conclusion that the observed thing cannot exist. I would have thought this presumption was part of basic scientific literacy. You seem to have been against it all along, how do you not see it? If we didn't have this presumption, we would have to question whether the existence of chairs was explainable within the known laws of physics, since there is no chair term in any of the equations, and it is not intuitively apparent how you can get something as complicated and useful as a chair from such simple equations. The silliness and wasted intellectual effort of Chalmers and his ilk has no more substance to it.
I think what I see you doing is applying the argumentative norms of professional philosophers, and those norms are the reason that philosophy, as a discipline, hasn't made any real progress on anything. It keeps having the same old arguments over and over again, because it can't ever move any position into the category of things that we should laugh at rather than take seriously. But given our finite lifetimes, if we are going to make progress and understand the world around us, there has to be a point at which we stop taking a position seriously and just laugh at it. The so-called "hard" problem I think is in that category. You can't just assert that it isn't. If you think I'm wrong, that the so-called "hard" problem is not in that category, you need to give me a reason for thinking so, and so far you haven't even attempted to do that. Again, either put up or shut up.
You want to tell me what arguments you are referring to? Cause you haven't mentioned any here.
Maybe here's where we aren't connecting. You seem to be working on the implicit assumption that when somebody organizes words into sentences and paragraphs and publishes them, that we as critical thinkers should necessarily treat that as an argument and engage with it. I don't think that that is the case. I think that when their words boil down to a pure appeal to intuition, we should not engage with it, we should laugh at it. I don't think there is an actual argument in the literature, and you have not pointed to one. You have simply repeatedly asserted the existence of one, and that is just getting annoying. Either put up or shut up.
No, they don't have them. All they have is their unjustified intuition that there is somehow something special about qualia that places it outside the laws of physics. They have no actual argument for it.
I don't know of anyone doing rigorous studies on this. I don't know why anyone would, there is no profit motive for it.
That said, to the extent that the vaccines are the same, it shouldn't matter. To the extent that they are different, we should want to combine the effects, since we know they all work.
For myself, my first two doses were Pfizer/BioNTech. My third dose was Moderna. In terms of side effects, the third dose felt pretty much like the second.
I'm not trying to make progress on solving the hard problem! As I have said, there is no hard problem. There is nothing there to explain. Unless you can point at something needing explaining, you aren't contributing anything.
If I interpret your questions as trying to point at something in need of an explanation, I still just don't see it. When I introspect, I don't perceive anything to be going on inside my head besides the computation. With regard to (3), so long as a computation is occurring (and empirically computations generally do occur in human heads), there couldn't not be an "inside". With regard to (1), it also wouldn't make sense for a given computation to feel a different way. Can happiness feel like sadness? It's just an incoherent question. With regard to (2), how could a computation not feel some way from the inside? Again, what you are proposing is just incoherent.
Your basic assumption seems to be that the mapping between computations and experiences is somehow arbitrary, or at least could have been different. And I don't think that that is the case. Why would it be? That presupposes that the experience is something different from the computation, and I don't think it is. I think the experience is just a different way of describing the same computation. Taking the English language as fixed, I feel like you are repeatedly asking "why should the integer between two and four be three?"
Isn't the burden always on the person who says that something is incompatible? Like, there just isn't any reason to think the two things would be incompatible. If you tell me that any two things are incompatible, the burden is on you to tell me why, and if you can't, then I am right to laugh you out of the room.
And all Chalmers has to suggest incompatibility with physicalism is intuition, and that is no argument at all. It should be laughed out of the conversation.
I think the so-called "hard problem" is really an imaginary problem. I don't see any reason to think that experience is anything more than what certain sorts of computation feel like from the inside.
If you want to have public goods funded by the users, why not ask them explicitly before you build the public good? This is usually called "crowdfunding". It works pretty well on relatively small scale projects already, and should really be scaled up.
If a thing says "you will win" and this causes you to bet on the Red Sox and lose, then this thing, whatever it is, is simply not an oracle. It has failed the defining property of an oracle, which is to make only true statements. It is true that there may be cases where an oracle cannot say anything at all, because any statement it makes will change reality in such a way as to make the statement false. But all this means is that sometimes an oracle will be silent. It does not mean that an oracle's statements are somehow implicitly conditioned on a particular world.
Put another way, your assumption that "an Oracle tells you whether you’re going to win your next bet" is not a valid way to constrain an oracle. An actual Oracle could just as easily say "The Red Sox will win" or "The Yankees will win" or whatever.
If a supposed-oracle claims that you will win your bet, and this causes you to bet on the Red Sox and lose, then the actually existing world is the one where you bet on the Red Sox and lost. The world where you didn't hear the prediction, bet on the Yankees, and won, that is the hypothetical world. So saying that an oracle's predictions need only be true in the actual world doesn't resolve your paradox. To resolve it, the oracle's predictions would have to be true only in the hypothetical world where you did not hear the prediction.