Posts

The 'Bitter Lesson' is Wrong 2022-08-20T16:15:04.340Z

Comments

Comment by deepthoughtlife on MakoYass's Shortform · 2024-10-12T16:40:29.147Z · LW · GW

To the best of my ability to recall, I never recognize which is which except by context, which makes it needlessly difficult sometimes. Personally I would go for 'subconscious' vs 'conscious' or 'associative' vs 'deliberative' (the latter pair due to how I think the subconscious works), but 'intuition' vs 'reason' makes sense too. In general, I believe far too many things are given unhelpful names.

Comment by deepthoughtlife on Any Trump Supporters Want to Dialogue? · 2024-10-12T16:31:05.032Z · LW · GW

I get it. I like to poke at things too. I think it did help me figure out a few things about why I think what I do about the subject; I just lose energy for this kind of thing easily, and I have. Honestly, I wasn't going to answer more questions. I think understanding in politics is good, even though people rarely change positions due to the arguments, so I'm glad it was helpful.

I do agree that many Trump supporters have weird beliefs (I think they're endemic in politics, on all sides, which includes centrists). I don't like what politics does to people's thought processes (and often makes enemies of those who would otherwise get along). I'm sure I have some pretty weird beliefs too, they just don't come up in discussion with other people all the time.

The fact that I am more of a centrist in politics is kind of strange actually since it doesn't fit my personality in some ways and it doesn't really feel natural, though I would feel less at home elsewhere. I think I'm not part of a party mostly to lessen (unfortunately not eliminate) the way politics twists my thoughts (I hate the feeling of my thoughts twisting, but it is good I can sometimes tell).

Comment by deepthoughtlife on Any Trump Supporters Want to Dialogue? · 2024-10-12T02:05:09.305Z · LW · GW

Your interpretation of Trump's words and actions implies he is in favor of circumventing the system of laws and constitution, while another interpretation (that I and many others hold) is that his words and actions mean he thinks the system was not followed, though it should have been.

Separately a significant fraction of the American populace also believes it really was not properly followed. (I believe this, though not to the extent that I think it changed the outcome.) Many who believe that are Trump supporters of course, but it is not such a strange interpretation that someone must be a Trump supporter to believe the interpretation reasonable.

Many who interpret it this way, including myself, are in fact huge fans of the American Constitution (despite the fact that it does have many flaws), and if we actually believed the same interpretation as you, we would condemn him just as much. The people on my side in this believe that he just doesn't mean that.

The way I would put it at first thought to summarize how I interpret his words: "The election must be, but was not held properly. Our laws and constitution don't really tell us what to do about a failed election, but the normal order already can't be followed so we have to try to make things work. We could either try to fix the ways in which it is improper which would get me elected, or we can rehold the election so that everything is done properly."

I think Trump was saying that in a very emotive and nonanalytical way meant to fire up his base and not as a plan to do anything against the constitution.

I obviously don't know why you were downvoted (since I didn't do it), but if you mouse over the symbols on your post, you only got two votes on overall karma and one on agreement (I'd presume all three were negative). The system doesn't actually go by ones; I think it depends on how much karma the people voting on you have (and how strongly they downvoted). For the overall karma vote, I would suspect people found the comment not quite responsive to what they believed my points to be.

My memory could be (is often) faulty, but I remember thinking the dismissals were highly questionable. Unfortunately, at this point I have forgotten which cases seemed to be adjudicated incorrectly in that manner, so I can't really say which ones you should look at. Honestly, I tire of reading about the whole thing, so I stopped doing so quite a while ago. (I have of course read your links to the best of my ability when you provide them.)

I don't usually comment about politics (or much of anything else) here, so I don't really know what I should write in these comments, but I think this is more about people wanting to know what Trump supporters are thinking than about determining what they are and aren't right about. If I were trying to prove whether or not my interpretation is correct, I suppose I would do this differently.

Comment by deepthoughtlife on Demis Hassabis and Geoffrey Hinton Awarded Nobel Prizes · 2024-10-11T20:17:07.639Z · LW · GW

I don't pay attention to what gets people the Nobel Prize in physics, but this seems obviously illegitimate.  AI and physics are pretty unrelated, and they aren't getting it for an AI that has done anything to solve physics. I'm pretty sure they didn't get it for merit, but because AI is hyped. The AI chemistry one makes some sense, as it is actually making attempts to solve a chemistry issue, but I doubt its importance since they also felt the need to award AI in a way that makes no sense with the other award.

Comment by deepthoughtlife on Any Trump Supporters Want to Dialogue? · 2024-10-11T19:57:07.214Z · LW · GW

We seem to be retreading ground.

"It doesn't matter if the election was stolen if it can't be shown to be true through our justice system". That is an absurd standard for whether or not someone should 'try' to use the legal system (which is what Trump did). You are trying to disqualify someone regardless of the truth of the matter based on what the legal system decided to do later. And Trump DID just take the loss (after exhausting the legal avenues), and is now going through the election system as normal in an attempt to win a new election.

I also find your claim that it somehow doesn't matter why someone has done something a terrible one when we are supposed to be deciding based on what will happen in the future, where motives matter a lot.

I read the legal reasons the cases were thrown out and there was literally nothing about merits in them, which means they simply didn't want to decide. The courts refusing to do things on the merits of the claim is bad for the credibility of the courts.

I told you I don't care about Giuliani, and that the article is very bad. Those are separate things. Whether or not he is guilty of lying (which was not what the stipulations actually mean), I already didn't take his word for anything. The BBC on the other hand, has shown that it won't report in a fair manner on these things and people shouldn't trust them on it.

You linked to a cnbc article of bare assertions (not quotes) that were not supported by the statements of the witnesses in the video also included! I talked at length about the video and how the meaning of the testimonies appears to contradict the article.

We already discussed your claim about the meaning of Trump's words. And you once again left out:

     "Our great “Founders” did not want, and would not condone, False & Fraudulent Elections!"

He was saying the election did not actually get held properly and that changes things.

Comment by deepthoughtlife on LLM Generality is a Timeline Crux · 2024-10-11T19:21:56.359Z · LW · GW

Interpolation vs extrapolation is obviously very simple in theory: are you going in between points it has trained on, or extending outside the training set? To just use math as an example (ODE solvers, which are often relevant in AI but are not themselves AI): x_next = x_curr + 0.5·dt·(dx_curr + dx_next) is interpolation (Adams-Moulton with two points per interval), and x_next = x_curr + dt·(1.5·dx_curr − 0.5·dx_prev) is extrapolation (Adams-Bashforth with two points per interval). The former is much better and the latter much worse (but cheaper and simpler to set up).
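The two schemes above can be compared numerically on the test problem dx/dt = −x. This is a minimal sketch of that comparison (all function names are my own, not from any library); for this particular f, the implicit Adams-Moulton update can be rearranged into closed form, so no iterative solve is needed:

```python
import math

def f(x):
    return -x  # test ODE: dx/dt = -x, exact solution x(t) = e^(-t)

def adams_bashforth2(x0, dt, steps):
    # Explicit/extrapolating: x_next = x_curr + dt*(1.5*f(x_curr) - 0.5*f(x_prev))
    # Bootstrap the first step with forward Euler, since AB2 needs two past points.
    x_prev, x_curr = x0, x0 + dt * f(x0)
    for _ in range(steps - 1):
        x_prev, x_curr = x_curr, x_curr + dt * (1.5 * f(x_curr) - 0.5 * f(x_prev))
    return x_curr

def adams_moulton2(x0, dt, steps):
    # Implicit/interpolating: x_next = x_curr + 0.5*dt*(f(x_curr) + f(x_next)).
    # For f(x) = -x this rearranges to x_next = x_curr * (1 - dt/2) / (1 + dt/2).
    x = x0
    for _ in range(steps):
        x = x * (1 - 0.5 * dt) / (1 + 0.5 * dt)
    return x

dt, steps = 0.1, 10
exact = math.exp(-dt * steps)
print(abs(adams_moulton2(1.0, dt, steps) - exact))   # smaller error
print(abs(adams_bashforth2(1.0, dt, steps) - exact))
```

Run over ten steps at dt = 0.1, the interpolating (Adams-Moulton) update ends up closer to e^(−1) than the extrapolating (Adams-Bashforth) one, matching the claim that the former is more accurate at the cost of being implicit.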

In practice, I agree that it is more than a bit fuzzy when evaluating complicated things like modern AI. My position is that it is amazing at interpolation and has difficulties with extrapolation (though obviously people are very keen on getting it to do the latter without issues / hallucinations, since we find extrapolation somewhat annoyingly difficult in many cases).

The proposed experiment should be somewhat a test of this, though hardly definitive (not that we as a society are at the stage to do definitive tests). It also seems pretty relevant to what people want that kind of AI to be able to do that it currently struggles at. It seems important to keep in mind that we should probably build things like this from the end to beginning, which is mentioned, so that we know exactly what the correct answer is before we ask, rather than assuming.

Perhaps one idea would be to do three varieties of question for each type of question:

1. Non-obfuscated but not in training data (we do less of this than sometimes thought)

2. Obfuscated directly from known training data

3. Obfuscated and not in training data

To see how each variation changes ability. (We also do have to keep in mind how the difficulty goes for humans, obviously since we are the comparison.)

As to your disagreement where you say scale has always decreased error rate: this may be true when the scale increase is truly massive, but I have seen scale not help on numerous things in image generation AI (which I find more interesting personally, since I have found LLMs rarely useful while I lack the skills to do art, especially photorealistic art), and larger models are often worse at a number of specific tasks, even ones that are clearly within the training sets.

I have found image generation AI progress very slow, though others think it fast. I feel the same way about LLMs, but errors matter more in usefulness for the latter.

For instance, Flux1 is generally well liked, and is very large compared to many other models, but when it comes to pictures of humans, the skin is often very plasticky and unrealistic compared to much smaller, earlier models, and the pictures are often very similar across prompts that should be very different. Despite Flux1 also using a much larger text encoder than previous models (adding an LLM known as T5XXL, which I gather isn't an impressive one, to what was previously used, making it several times larger), prompt understanding often seems quite limited in specific areas (this is probably related to the lack of diversity in the output pictures, as it ignores what it doesn't understand). Flux1 itself also comes in multiple varieties with different tradeoffs, all at the same scale, that lead to very different results despite being trained on largely the same data so far as we know. Small choices in setup seem more important than pure scale for what the capabilities are.

To be specific, image generation uses far fewer parameters than LLMs but requires far more processing per parameter, so the numbers look a lot smaller than LLMs'. SD1 through SD1.5 is 0.9B parameters, SDXL is 4B, SD3 comes in a variety of sizes but the smallest in use is 2B (the only one freely available to the public), and Flux1 is 12B parameters. The Flux text encoder T5XXL alone (5B parameters; it also uses clip-l) is larger than SDXL plus its text encoders (clip-l and clip-g), and SDXL still often outperforms it in understanding. The 2B SD3 (referred to as SD3 Medium) is a mess that is far worse than SD1.5 (whose text encoders are also tiny) at a large number of things (SD3 uses T5XXL and clip-l like Flux, plus clip-g), including lacking the understanding of certain classes of prompts, which makes it borderline unusable despite dramatically higher image quality than the larger SDXL when the stars align. Scale is often useless for fixing specific problems of understanding. SD3 and Flux (different companies, but many of the same personnel and similar in approach) are internally closer to being LLMs themselves than previous image generation models, and the switch has caused a lot of problems that scale certainly isn't fixing. (SD3 actually has higher image quality, when things work out well, than the two variants of Flux I have used.) I've largely gone back to SDXL because I'm sick of Flux1's flaws in realistic pictures (and SD3 remains almost unusable).

Comment by deepthoughtlife on Any Trump Supporters Want to Dialogue? · 2024-10-10T23:57:32.909Z · LW · GW

I'm a centrist (and also agnostic in regards to religion) in part because I believe there are a lot of things I/we don't/can't know and shouldn't be overconfident in, ideologically, factually, and interpretationally. I don't know what Trump actually thinks, and neither do you, but we seem to disagree strongly on it anyway. I don't want to try to read your mind, but that part is at least very obvious. (I do have some very confident ideological, factual, and interpretational beliefs, but believe they shouldn't be overly relied upon in these matters.)

I also tend to keep in mind that I can be wrong (I am also keeping that in mind here), which is how I ended up believing Trump was worth voting for in the first place. (As stated originally, I actually had an extreme personal distaste for him for decades that sounds much like how people accuse him of being, but I paid attention and changed my mind later. Obviously, I could have been right originally and be wrong now; that definitely happens.)

To me, you seem overconfident about what happened in the election, and your sources seem highly partisan (not that I know of any sources on this matter that I think aren't partisan). Neither of which actually means you are wrong. I do think it is very important that there is genuine doubt, because then you can't simply assume those on the other side are operating in an objectionable way because they are opposing you. (Of course I would think you overconfident, since I am hedging so much of what I am saying here.) I generally find it hard to know how accurate specific political reporting is, but it seems generally very low. Politics brings out the worst in a lot of people, and heightens reactions in most of us (me included, though I try to damp it down).

There is always vote fraud and cheating, but what is the definition of 'large scale' such that you know for sure it didn't happen? States have in fact found cheating in small but significant amounts (as well as large scale rule changes outside the regular order), but what percentage of it would be actually discoverable without extensive investigation, and how much even with? William Barr saying that he had “not seen fraud on a scale that could have effected a different outcome in the election.” hardly means they found everything, and crucially, is very different than saying that it wasn't large scale by at least one reasonable definition.

Hypothetical: If someone was counted to have won a state that decided the election by 100,000 votes, and Barr could have proven that 40,000 were switched (meaning current count is winning by 20k), what would that mean for your statement? What would that mean for Barr's? I think that would prove you wrong (about no large scale cheating), but Barr's statement would still be true (I would still consider it lying though).

Additional hypothetical: Suppose that three states too small to change the election were given to Biden due to cheating, but it could be proven that Trump had won all of them? That would be large scale cheating that changed nothing about the result, and again, Barr's words would not technically be untrue. Suppose then that there was a fourth small state that would tip it where there was enough cheating to actually change the overall result, but this can't be proven?

Note that neither of those hypotheticals is meant to be close to accurate, they are meant to show the uncertainty. For clarity, I believe that there is not strong enough evidence to say there was sufficient cheating to change results, and we should default to assuming there wasn't, but that is only because our elections usually aren't fixed, and in certain other places and times the opposite should be assumed. I believe that Biden most likely won in an election close enough to fair, just that people coming to the opposite conclusion are usually operating in good faith including the subject of this matter. (I do not assume good faith in general, it just seems that way here.)

I can find places claiming that there is literally nothing, and others claiming to have found dozens of issues that could possibly count as 'large scale' cheating depending on the definition (not all involving the Democrats, and many not involving directly changing the votes). Neither side actually seems especially credible to me, but instead obvious partisanship is deciding things.

I probably should have clarified the split as establishment vs populist. I think I can speak of Democrats without implying an exact Republican/Democrat split when referencing something known to be a minority viewpoint, but I really should have been clearer that I was speaking of a portion of the Republicans rather than all of them (a very large portion, I think, but more the base and the populist/Trump-camp politicians than the established politicians). I was saying that I think the politicians among the Democrats (enough of them to control Dem policy, far from all of them) had specific motives for their actions in and reactions to the issue (their being unsure whether or not their local members cheated), reactions that consisted largely of vehement denial of the possibility regardless of the evidence (and wanting to prosecute people for talking about it or taking the legal steps that are required for disputing an election!). Many Republicans, meanwhile, over-index on very questionable evidence to say with equal certainty that it did happen.

People often would like to cover up things when they don't know exactly what happened but it looks bad (or even if it just might cause a scandal, even if it could be proven to be fine), and I can't possibly agree with anyone who thinks that Democrats in general didn't at least consider that when deciding on how they would react to the evidence.

The split between establishment and populist Republicans has been very large in recent years, starting noticeably before 2016 and largely leading to Trump's faction even existing, which then moved the party further toward populism. The establishment and the populists did not share the same reaction to the evidence in 2020, among other things, and I should have been more careful.

You assume that the US government didn't find much, but a number of people would just as confidently state that much of the government just chose not to look, and others that things were in fact found. Many of the investigations would have been state and local governments if they'd happened, and in many of the places the local leaders were Democrats who had every incentive not to look (which is part of why people were suspicious of how things turned out there, and this is true even if the local Democrat leaders were perfectly upright and moral people who would never act badly as long as we don't know that). Trump and company brought many court cases after the election but before inauguration that were thrown out due to things that had nothing to do with the merits of the evidence (things like standing, or timeliness); there may have been some decided on actual evidence, but I am unaware of them.

Many populists have seen the DOJ and FBI as enemies due to past discrimination against populist causes (though that is generalizing too much), and the agencies also don't seem very competent when performing duties they don't care for (which is normal enough). It is hard for many people to know whether or not the FBI and DOJ can be relied on for such things. (See also the recent failures of previously-considered-competent federal agencies like the Secret Service.)

You are interpreting things the way that most benefits your point, but without regard to other reasonable interpretations. The meaning of pretty much everything Trump camp did is debatable, and simply believing one side's talking points is hardly fair. I genuinely think that both sides believe they were on the side of righteousness and standing up against the dastardly deeds of the other side.

Also, the cnbc article is largely bare assertions, by a known hyper partisan outfit (according to me, which is the only authority I have to go on). Why should I believe them? (Yes, this can make it difficult to know what happened, which was my claim.) The other side has just as many bare assertions in the opposite direction.

Hypothetical where no one thinks they are doing anything even the slightest bit questionable (which I think is reasonably likely):
    Trump camp viewpoint: This bastard repeatedly refuses to do his job* and protect the election from rampant cheating. I should force him to do his job. (This is a reasonable viewpoint.) People keep telling me to remove them for refusing to do their jobs. I don't think I should (because this is not the only important thing). How about I ask them more about what's going on and try to get through to them?

    Other viewpoint: Here I am doing my job, and someone keeps telling me to say <specific thing>. They're obviously wrong, and I have said so many times*, so they must be telling me to lie. (And their statement that he was asking them to lie would not seem false to them, but in this scenario is completely false.)

*This is the same event.

In fact, the video testimony included with the article itself sounds exactly like my innocent hypotheticals rather than other parts, while the cnbc commentary in the video is obviously making up and extremely overconfident partisan interpretations (which makes their article's bare assertions even less believable since they really are hyper partisan).

I don't really care about Giuliani, but the Giuliani article is extremely bad form, since the BBC's summary of his 'concession' is not a good summary. It says only that he will not fight specific points in court ('nolo contendere'); it is not an admission to lying.

You mention that I value Law and Order, which is true. It isn't actually natural to me, but over the years I have come to believe it is very important to the proper functioning of society, and as bad as some parts of society look, our society actually works quite well overall so we do need to support our current law. This includes parts that aren't used often, but I have much less confidence in them. The parts Trump wanted to use do seem irregular, but he did seem to be trying to use actual laws (sometimes incorrectly).

To bring up something I think you mentioned earlier, I would be pretty unhappy about an election decided by 'contingent election' for instance, but still think it appropriate to use one under the right circumstances (which I don't think these are even under the Trumpian interpretation of cheating in the elections). I would also be unhappy about an election where the state legislatures changed who the electors are in a meaningful way, even though I believe they clearly have the right to. There would have to be vastly more evidence for me to think that the legislatures should act in such a manner, but I don't think it is necessarily disqualifying as a candidate for Trump to think they should.

Would it be good if he stole the election legally? Obviously not. Would it be good if he protected the election legally? Obviously yes. Would it be good if he protected the election illegally? Eh, I don't know? It depends?

Does Trump value law and order? I think he values law (with moderate confidence). Whenever I hear the words of people directly involved, it sounds to me like he told them to do everything in the legal way, and that the process he advocated was something he believed legal. He doesn't seem interested in breaking laws, but in using them for his purposes. Does he believe in order? I'm not convinced either way. He certainly doesn't believe in many parts of how things tend to be done (to an extent that it is odd he is the leader of the conservative party), but is his plan for the world following some sort of consistent internal logic worthy of being considered orderly? I don't know.

Obviously if I had similar interpretations as you do about all of these things I would be much more concerned. I just don't. Of course, I very much want you to be wrong, so that there is little to worry about.

I don't know if it is confirmation bias or legitimate, but the more carefully I go through things, the more it seems like Team Trump really was just trying to do what they were supposed to do (and his opponents believe the same of themselves). Even the words of people that are very much against him usually seem to indicate Trump's genuine belief in everything he said about the election being true.

Comment by deepthoughtlife on What makes one a "rationalist"? · 2024-10-09T20:19:19.717Z · LW · GW

I'm not sure why you are talking about 'anti-dialectical' in regards to willingness to change your mind when presented with evidence (but then, I had to look up what the word even means). It only counts as evidence to the degree it is reliable, times how strong the evidence would be if it were perfectly reliable, and if words in a dialogue aren't reliable for whatever reason, then obviously that is a different thing. That it is being presented rather than found naturally is also evidence of something, and changes the apparent strength of the evidence, but doesn't necessarily mean that it can be ignored entirely. If nothing else, it is evidence that someone thinks the topics are connected.

Interest doesn't mean that you are going to; it means that you are interested. If you could just press a button and be better at reasoning and decision making (by your standards) with no other consequences, would you? Since that isn't the case, and there will be sacrifices, you might not do it, but that doesn't mean a lack of interest. A lack of interest is more 'why should I care about how well I reason and make decisions?' than 'I don't currently have the ability to pursue this at the same time as more important goals'.

While duplicity certainly could be in your 'rational' self-interest if you have a certain set of desires and circumstances, it pollutes your own mind with falsehood. It is highly likely to make you less rational unless extreme measures are taken to remind yourself what is actually true. (I also hate liars but that is a separate thing than this.) Plus, intellectual honesty is (usually) required for the feedback you get from the world to actually apply to your thoughts and beliefs, so you will not be corrected properly, and will persist in being wrong when you are wrong.

Comment by deepthoughtlife on What makes one a "rationalist"? · 2024-10-08T21:36:50.042Z · LW · GW

I've lurked here for a long time, but I am not a rationalist by the standards of this place. What makes someone a 'rationalist' is obviously quite debatable, especially when you mean 'Rationalist' as in the particular philosophical subculture often espoused here. To what degree do you need to agree?

I generally dislike people using Claude or other LLMs for definitions, as it should be thought of as a very biased poll rather than a real definition, but it can be useful for brainstorming like this as long as you keep in mind how often it is wrong. Only five of the 20 seem to be related to being rational, and the others are clearly all related to the very particular subculture that originated in the Bay Area and is represented here. (That's not bad, it was all on topic in some way.) I do object to conflating the different meanings of the term.

The two you bolded are in fact rather important to being able to think well, but both are clearly not especially related to the subculture.
The ones I think are important to thinking well (which is what I think is important):
    Curiosity about discovering truth*
    Willingness to change mind when presented with evidence
    Interest in improving reasoning and decision-making skills
    Openness to unconventional ideas if well-argued*
    Commitment to intellectual honesty

*are the ones you bolded

These 5 all fit what I aspire to, and I think I've nailed 4 of them (willingness to change my mind when presented with evidence is hard, since it is really hard to know how much evidence you've been presented, and it is easy to pretend it favors what I already believe), but only five of the other 15 apply to me, and those are largely just the ones about familiarity. So I am a 'rationalist' with an interest in being 'rational' but not a subculture 'Rationalist', even though I've lurked here for a very long time.

I do think that the two you selected lead to the others in a way. Openness to unconventional ideas will obviously lead to people taking seriously unusual ideas (often very wrong, sometimes very right) often enough that they are bound to diverge from others. The subculture provides many of these unusual ideas, and then those who find themselves agreeing often join the subculture. Curiosity of course often involves looking in unfamiliar places, like a random Bay Area community that likes to think about thinking.

Comment by deepthoughtlife on Any Trump Supporters Want to Dialogue? · 2024-10-08T20:44:45.932Z · LW · GW

I did have to research a few things for this, but they don't seem to change matters at first glance. The situation is murky, but it was always murky and both sides clearly are certain they are right.

I personally don't know whether or not it was a fair election with a fair result and unimportant anomalies, or if the anomalies really were cheating. They certainly looked suspicious enough in some cases that they should have been investigated. I think that many of the actions by the Democrats were done in the way they were because they didn't know whether or not there was rampant cheating causing these anomalies and didn't want to know, and that Republicans were far too sure about what they think happened based off weird and unclear evidence.

You could argue for the contingent elections thing being relevant (and it is a scenario in the memos you mentioned), but it is far from settled when that would trigger in a sequence of events. It would not, in fact, be a change of procedure if it did. (As your link notes, it is in the constitution.) Uncommon parts of a procedure are still parts of it (though all the examples are so far back it would be quite strange to see it used, I agree on that).

Several recent presidential candidates have used legal tactics to dispute how elections are done and counted. There are already multiple this year (mostly about who is and is not on the ballot so far.) There is nothing inherently wrong with that since that is literally how the laws and constitutional matters are intended to be handled when there is a dispute with them.

I'm obviously not a constitutional scholar, but I think the contingent election stuff wouldn't really have come up at that point, since it reads more like something you do based on the electors choosing in a way that doesn't lead to a majority, not over the process of electors selection being disputed.

You could easily argue that the disputed electors might not count if that simply means electors were not fully selected, and you need a majority of the number of electors that should be, but I would assume otherwise. (In part because there appears to be no wiggle room on the Constitution saying that the states will select electors. It isn't "up to the number specified", but the specific number specified.)

I believe that if there is doubt as to who the states selected for the electors, you would simply have the legislatures of the states themselves confirm who the correct electors are by their own rules (and that their decisions are controlling on the matter), and I believe this is what Trump would have intended (though I don't like mindreading and don't think it should be relied on when unnecessary).

It appears that you believe the states already selected their electors fully, and so do I, so this plan wouldn't work if I am right about how to interpret how this works, but Trump obviously claimed/claims otherwise, and there is no indication of duplicity in that particular belief.

Everyone agrees that Pence did not think Trump's interpretation of the matter was correct, but Pence's personal beliefs about whether the electors were correctly selected do not, in fact, make people who believe otherwise some sort of enemy of the republic. Trump wasn't asking Pence to fake anything in any reasonable source I've seen, but to 'correctly' (in Trump's mind) indicate that the electors were not chosen because the selection was done in an invalid way. Pence interpreted this request negatively seemingly because he didn't agree on the factual matters of elector selection and on his interpretation of the constitution, not because anything Trump wanted was a strike against how things are meant to be done.

I didn't really remember the Eastman memos, but when I looked them up, reporting seems clear that Trump and company really were pushing to convince Pence to send it back to the states for confirmation of selection, which is not an attempt at either a contingent election or at simply selecting Trump, and is not out of bounds. That those other scenarios existed as possibilities in a memo but were not selected in no way implicates Trump in any wrongdoing.

I read the two-page memo and it seems strange. To me it reads like a few scenarios rather than a finished decision-making process or recommendation. I am sympathetic to the argument it makes that the Electoral Count Act is unconstitutional (and I think the change made to it afterward also seems, at first blush, clearly unconstitutional). If it is unconstitutional, then Pence 'must not' follow it where it conflicts with the constitution (so advising that it be ignored is not necessarily suspect). I also had a hard time interpreting the act itself, which seems poorly written. The memo does rub me the wrong way, though.

The six-page memo seems pretty normal to me (from reading how judges write their decisions in court cases, the style seems similar), and lays out the reasoning better, though I definitely disagree with some of the scenarios as being within the vice president's authority (which is itself somewhat murky because of ambiguity in the constitution and relevant amendments). The VP has no authority to simply declare that fewer electors were appointed than really were (though even this is not actually clear in the text), but some other scenarios are plausible readings of the laws at first glance, depending on the actual facts at hand (about which the memo's authors have a clear opinion).

An analysis I read included a lot of fearmongering about letting the states decide if their electors had been properly selected, but that is clearly something they are allowed to determine. (The fact that it is popular vote is a choice of the legislatures, and up to the laws and constitution of the individual states.)

This was not a plan to 'steal' an election, but to prevent one from being stolen. Obviously each side claims there was an attempted theft by the other that they were just trying to prevent (the right's motto was literally about stopping stolen elections). Both sides are still very angry about it, but that doesn't actually make Trump disqualified in any way.

Comment by deepthoughtlife on LLM Generality is a Timeline Crux · 2024-10-07T22:50:30.201Z · LW · GW

I would definitely agree that if scale was the only thing needed, that could drastically shorten the timeline as compared to having to invent a completely new paradigm or AI, but even then that wouldn't necessarily make it fast. Pure scale could still be centuries, or even millennia away assuming it would even work.

We have enough scaling to see how it works (massively exponential resources for linear gains), and given that extreme errors in reasoning (errors obvious to experts and laypeople alike) are only lightly abated by massive amounts of scaling, it really does seem that reasoning isn't dependent on scale alone, or that the required scale is absurdly large compared to what our society can afford (so either way, progress is probably slow if it happens at all).
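The "exponential resources for linear gains" claim can be illustrated with a toy power-law scaling curve. The exponent and constant below are made up for illustration, not fitted to any real model:

```python
# Toy illustration of the "exponential resources for linear gains" claim,
# assuming a Chinchilla-style power law: loss falls as compute^(-alpha).
# Both constants are hypothetical, chosen only to make the math visible.
ALPHA = 0.05   # hypothetical scaling exponent
A = 10.0       # hypothetical loss at 1 unit of compute

def loss(compute: float) -> float:
    """Loss under the assumed power law L(C) = A * C^(-ALPHA)."""
    return A * compute ** -ALPHA

def compute_for_loss(target: float) -> float:
    """Invert the power law: compute needed to reach a given loss."""
    return (A / target) ** (1 / ALPHA)

# Each further 10% cut in loss multiplies the required compute by the
# same constant factor, so modest gains cost multiplicatively more.
factor = compute_for_loss(9.0) / compute_for_loss(10.0)
print(round(factor, 1))  # → 8.2
```

Under this hypothetical curve, every fixed proportional improvement in loss multiplies the compute bill by the same constant (about 8x here), which is the sense in which steady gains demand explosively growing resources.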

Personally, progress in the 'intelligence' part of artificial intelligence seems glacially slow to me. (Though it is sort of amazing that they can make models that can do these things and yet still can't manage to extend the concepts much past what was directly trained on.) Current AI is good at interpolation (which is easy for humans too) and terrible at extrapolation (which is hard for humans too, but to completely different orders of magnitude). Current AI is possibly actually better at many kinds of interpolation than humans, but not in ways that much enhance its intelligence, because intelligence is much more related to extrapolation.

I think you dismiss the points of people like Gary Marcus in much too facile a manner. They aren't saying 'this exact problem will never be solved,' but that such problems are only solved on a case-by-case basis (which seems to be largely true). You actually mention the failures on obfuscated examples, which are a large part of how they (Gary Marcus and company) know this. Obfuscated versions are ones the models weren't trained on, and thus rely on whatever poor reasoning abilities the models actually have.

Also, there is no reason to believe further scaling will always decrease the error rate per step, since this has often not been true. So many things besides scale contribute to the error rate, and scale's contribution will likely stop changing much at some point. Asymptotes are, after all, a thing.

Also, GPT-5 is only not already a thing because OpenAI couldn't keep improving performance meaningfully enough to justify the name. Most likely they have realized that scale is not enough to meet their goals.

(At this point the new big thing o1 is out and it doesn't seem impressive from the examples I've seen. That is a massive increase in inference time scale, which doesn't help as much as you'd think if scale really worked.)

Comment by deepthoughtlife on An argument that consequentialism is incomplete · 2024-10-07T22:23:52.156Z · LW · GW

Broadly, consequentialism requires us to ignore many of the consequences of choosing consequentialism. And since that is what matters in consequentialism it is to that exact degree self-refuting. Other ethical systems like Deontology and Virtue Ethics are not self-refuting and thus should be preferred to the degree we can't prove similar fatal weaknesses. (Virtue Ethics is the most flexible system to consider, as you can simply include other systems as virtues! Considering the consequences is virtuous, just not the only virtue! Coming up with broadly applicable rules that you follow even when they aren't what you most prefer is a combination of honor and duty, both virtues.)

Comment by deepthoughtlife on Any Trump Supporters Want to Dialogue? · 2024-10-07T22:09:52.217Z · LW · GW

I skimmed the transcript at the link you provided, and it seems like a standard political speech (in Trump's style).

Obviously Trump thought there was easily enough proof to know that he won, or at least enough to prove the election invalid if more was required. He would expect to genuinely win any real redo of the election. And the whole scheme was to send it back to the states, which have the authority to choose the electors. His claim is that the states wanted that (which I don't know of any evidence for, but the scheme only does anything if the states actually do want to).

The quote you provided literally sounds like normal political posturing about how important the people on your side are, and a claim that they should take political actions. This seems like a thing to say in regards to protests, rallies, or even just doing things like voicing support for him. I've heard countless politicians say this sort of thing before? It seems pretty anodyne?

Saying that they should cheer for the legislature but not all of the legislature will be getting many cheers seems like an obvious joke with no deeper meaning than disapproval of some members. It's obvious what behaviors will get cheers and which won't, but it isn't threatening.

All talk of fighting involved metaphorical fighting by duly elected representatives (and replacing them through primary elections if they didn't) and the people cheering them on. This is normal politics. And as far as it goes, he is literally saying that they should demand the law be followed as the point of this.

Comment by deepthoughtlife on Any Trump Supporters Want to Dialogue? · 2024-10-07T04:05:17.666Z · LW · GW

That got very long (over 19,100 characters), so feel free to ignore parts of it. TL;DR: Trump has a lot of faults, but I should reiterate that I really do think Trump was a good president by my standards, and I think there is a very high chance that he would be again, though it is far from certain. My reason really is just that I think he was a dramatically better president than I expected he would be when I begrudgingly voted for him.

Sometimes I have to correct for my tendencies to go the opposite way as people are trying to push me, but overall it seems like a useful way to be if I want to come up with what my actual personal beliefs are.
Does Trump say things that are blatantly untrue sometimes? Yeah, and I really wish there were candidates I could select that just didn't do that. I actually hate lying and liars so much. Give me an honest person that goes against what I want and believe and I will at least grudgingly respect it (if I can determine that is true, of course.) I don't think we've had an honest candidate since George W Bush (and maybe not even then since I wasn't paying attention during his initial election), though in some cases I have only determined that I believe they were dishonest after the election.

My personal definition of lying might be relevant to the discussion. 'Lying is attempting to trick people into believing things that are either known to be false to the speaker or to which there is no genuine effort to correspond to reality.'

It is the first part being missing that makes me think he isn't as much a liar as many other politicians. Trump isn't trying to trick people in general, while his opponents are, so I consider his opponents to be lying and him to simply be a poor source of truth. That said, I do despise his lack of care, and sometimes consider it egregious (he has definitely often fallen under the 'no genuine effort to correspond to reality' part, which I would count as 'negligent lying' if he were trying to trick people; this is obviously very bad even when it isn't lying).

I do believe he is dishonest, and wish that weren't the case. I think that the reason the word 'trick' for lying is important is that I wouldn't consider something like a fictional story or a song or whatever to be lying in and of itself even though it is obviously false. He believes what he says, it just often isn't true because he didn't bother to check.

I believe that one of the reasons people claim Trump is more dishonest is that he uses fewer qualifiers that make things arguable, while something similar to what he said is often enough actually true, but other politicians are more legalistic liars, carefully not actually saying anything that can be easily checked. In other words, Trump says more things that are false while his opponents say things that are (intentionally) far more misleading and pernicious. This is probably where people came up with the 'seriously but not literally' claim about how you should interpret Trump. (Though don't count on me to know what is going on inside other people's minds. Mind reading is rude and we aren't good at it. Yes, this makes it harder to know when other people are lying.)
Obviously saying false things is not a good qualification for president, but I have to grade on a curve to an extent.

"I look at human history and see so much suffering, and we get to enjoy a peaceful life where I walk past strangers every day with no fear." is a good way of describing what society is for, though I would go further. We want the correct response to seeing said stranger to be 'Yay! Another person!' rather than 'How do I protect myself?' or even 'ugh! How annoying.'. Obviously we aren't at 'Yay!' in general, but it is a nice goal, and we are so much closer than societies used to be to it. Early on, society focused more on just reducing the danger level, and now we are at a point when we can carefully try to improve beyond just safety. Some people, mostly Democrats but an increasing portion of dissatisfied Republicans (who are often Trump supporters) don't seem to realize that it needs to be careful. Trump is not the most careful person in his personal life, but he isn't trying to upend things politically due to his personal moderation on many political topics.

Lack of fear in general might be part of why people focus so much hostility on politics, which is one of the few places left where there are very large genuine conflicts that aren't personal (to the average citizen). Politics is actually sort of a safe outlet for many people, which has many unfortunate effects, but at least means they feel like it is okay to do so and it won't lead to them being harmed. In many times and places, having a discussion like this about a controversial political figure who was once the leader, and may be again, would be a very bad idea, but for most of us it is very safe.

His signature desires like immigration reduction seem largely aimed at patching the safeness of our society, though I would actually like to see immigration increased, just more carefully chosen for the good of people already present in our society.

His fandom of tariffs seems the same way. I don't like them, but I get why they seem like a good idea to some people, and I find them less harmful than many other attempted patches.

If you look at the actual changes Trump has made, they have been very limited. Small reductions in immigration, small reductions in regulatory burden, a reduction in new wars, small increases in economic efficiency, small reductions in tax burden, and a new segment of society feeling heard, lowering the likelihood of going outside the system.

His judge appointments have made flashy changes, but these decisions mostly just revert things to letting the states decide and moving closer to following the actual laws and constitution of the country. Luckily, those aren't actually big changes, and the fact that people think they are is just one of the signs that our society is currently functional.

Also, you seem to be coming at this from an angle of 'How big are the changes to government?' whereas I am thinking more 'How big are the changes to society?' The former is larger under Trump, and the latter is smaller.

He seems to see large portions of the government as intractably opposed to their duty in serving the country, and unfortunately I agree with that, so I am more okay with large changes involving the government if I think they are narrowly tailored against those elements and won't have much spillover into society. (I do wish they were more carefully targeted.) If you want the internal workings of the government to stay as they are, I do agree that Trump is not the right selection for that.

That said, it seems pretty clear to me that Trump doesn't actually want to reduce government as much as your average Republican, (or increase it to the same extent as your average Democrat,) which sort of paints him as moderate? Drain the swamp was ostensibly about removing corruption, and I actually believe that was the real meaning (though corruption claims are often used to consolidate power around the one claiming it in other countries, so we should be careful about that).

Also, to mention the judges again, I think the judges Trump has appointed are more than willing to rule against him on anything that is hard to support based on the way the system is supposed to work. Trump has not created a cadre of loyal flunkies that will just go along with his big changes. A large number of people think he will try harder on that this time, but he does only have one term if he wins and I'm not convinced that he will try to do so.

I can't agree that Trump wanting the votes to be correctly counted (which is clearly how he conceives of it) should lead to him being jailed. I don't think he actually did anything genuinely illegal (though a number of people are trying to claim he did). I think he did just let go of power, after going through all the genuinely legal approaches he had available. That doesn't stop him from constantly complaining about it of course, but I believe that even obsessive interest in the issue isn't criminal.

On January 6th, there was no attempted coup (just a riot which is bad, but not the same kind of thing), congress was not in real danger, and Trump did not support it. (I responded a bit more in depth to another comment about that issue as well.) It is highly unlikely anyone will be persuaded about those matters of course since everyone has heard a lot about it.

Trump's successes (Keep in mind that I expect presidents to largely fail):
Lower taxes (letting people spend their money on what they want, not what the government wants)
Improved economy/improved buying power
A measured approach to foreign policy that keeps in mind our actual goals
A foreign policy that allows other countries to take responsibility for themselves
No new wars for us and reduction in old ones
Few new wars in the world
A lessening of border issues (aside from the grandstanding by both sides)
Minor increases in freedom (from reduced regulatory burdens)
Minor counterfactual increases in freedom (from not increasing regulatory burdens like his opponents want)
A reduction in the rate of large changes in the law (though this was not always his preferred outcome)
Returning some authority to the states or people by legislative means
Returning some authority to the states or people by judicial decisions involving his appointed judges

Not repealing Obamacare is a failure on one of his campaign goals. It was by the singular vote of a guy with a personal animosity toward Trump, and who had a brain tumor that killed him, and thus had nothing to lose. I personally believe that McCain's act was malfeasance since he actually campaigned himself on eliminating it (unless I am conflating him with other Republicans), but McCain saw himself as a defender of the old order, a not uncommon thing among conservatives. Trump did eliminate the mandate to buy insurance (which is the truly offensive part to me).

Not getting political buy in for the wall is also a failure of one of Trump's campaign goals, but he did notably improve border security during his presidency. (Which promptly got worse again after Biden won. The wall would have been helpful.)

I admit that I don't think of Covid as being a significant determiner in how I should think of Trump's presidency (nor Biden's). I honestly stopped paying attention to it a long time ago and never was super worried about it. (Perhaps this is partially due to personal experience: When I got Covid it sucked, but not any more than many other illnesses I have gotten in my life. It was perhaps slightly above average discomfort, but far less than things like the flu were when I was younger. I actually got the flu early in 2020 too and that was much worse. The rest of the family seemed to think the flu was worse too.)

As I recall, death rates from Covid were highly dependent on things like age, demographics, and preexisting health issues, which Trump could hardly have been expected to change on his own. The presidency is powerful, but not that powerful, and the US death rate was about what you would expect given those preexisting conditions. The things he did, like closing the border, were reasonable, though a little too late to actually be helpful.

His 'Operation Warp Speed' did genuinely help the world get vaccines very quickly compared to normal by paring back regulatory burdens, though it should have gone further. (Things like human challenge trials, to immediately know whether the vaccines worked once developed, could have cut several more months off the time, since the actual development took only a few weeks, and safety testing could have been rolled into initial deployments as the factories started producing.) We could have had vaccines before there were many deaths at all.

Death rates were overreported while I was paying attention (the famous, 'with covid or because of covid' thing is a big difference in the reported death rates between not just countries, but even states and counties in the US).

Also, I don't believe statistics from places like China (which was clearly faking them) or India (which is pretty third world in a lot of places, ranking 125th to 136th in GDP per capita according to Wikipedia, which leads me to think they barely have real statistics, though that could just be my prejudice). Those are the only countries with more population than the US. The other countries with large numbers of people are generally pretty young (and thus not susceptible to such death rates), even if I did believe their numbers. If I am not wrong, you have to get down to Japan, which is vastly less populous than the US, to find the next country with a comparably elderly population.

I honestly think that most of the damage from Covid was overreaction (like shutting down 'nonessential' businesses and screwing up supply lines in a way that lasted for years). Covid just isn't a super deadly disease, and we changed the way society worked for years to a massive degree because of it. I believe that hurting the economy both directly and indirectly increases death rate substantially, just not in ways we know how to count.

Covid overreaches were mostly state level, though the CDC behaved like clowns. Trump was clearly not an expert on infectious diseases, and didn't pretend to be. Unfortunately, the experts were themselves to blame for much that went wrong. (Unfortunately, the elderly often die from other Coronaviruses too, and when I heard what the general death rate from Coronaviruses in the elderly was, it was kind of shocking, though I don't really remember what it was now. Coronaviruses that you haven't encountered before don't get antibodies quickly enough in the very old, and covid was genuinely novel to immune systems.)

A bias I probably should have thought to mention in my initial take on his presidency is that my life and the life of most people directly around me got better during Trump's presidency, while during Obama's and Biden's they got clearly worse. In late 2019 and 2020 (despite Covid stuff) my life was so dramatically better than it had been before. Note: I don't think this had anything to do with Trump but subconsciously I obviously would think 'how do the years when he was president compare to other years?' And I actually think that is a good idea since people can hardly know the actual effects of the changes over the period. This is likely a strong effect.

I always disliked Hillary, but I think Kamala Harris is much worse, for some reason that is hard to pin down. Honestly, I wouldn't trust anything Harris would say after her stint as Attorney General of California, and I thought she was dramatically worse and more corrupt as a candidate for Senator than the other Democrat (whom I did vote for, because despite not agreeing on policy, she seemed like a decent person and a fairly moderate Democrat). I honestly don't remember the original cause of my antipathy toward Kamala, but I trust it for some reason. Obviously that isn't convincing for other people (and rightfully so). Most California attorneys general and senators are people whose policies I disapprove of, but I think of them far less negatively.

She's not my least favorite Californian politician ("Hi, Newsom.") but she is probably second or third. I can't actually bring to mind much of what she did representing California. (I do tend to especially dislike San Francisco Democrats. In my part of California, which is purple, the Democrats are nothing like San Francisco ones, at least when they are campaigning, though statewide politics tends more toward the San Francisco style.)

As for Walz, he seems to be an extreme liar (though I can't necessarily trust that judgment since I haven't researched him much), which is sadly par for the course these days in candidates. Even though he's par for the course, I still hate it. Also, and this is a very untrustworthy judgment based off very limited information, he seems like a deeply angry and sanctimonious person. One of the other answers mentioned that Democrats are very sanctimonious, and I think that captures part of what I don't like about high-profile national Democrats. I should probably research Walz more, but I doubt I will? Hopefully he doesn't stay nationally relevant and I don't have to think about him again? I don't honestly have an opinion on JD Vance aside from thinking that his wife speaks well, so I can't really compare the two.

I do think I should put more effort into determining whether or not JD Vance is a decent backup president (which is one of his main jobs and the only one where he isn't mostly a figurehead), but I honestly tend to put off a lot of my research on things until late in the cycle, and I already know I won't vote for Kamala under any reasonably foreseeable circumstances, so I am paying a bit less attention to that. If it were to turn out Vance was bad enough, that would be a good reason not to vote for Trump, but would not be a reason to vote for Kamala.

Remember that I think the Dems are much more powerful on the national scene in almost all parts of society other than politics, and I've seen what happens when they get too powerful (in my state). I wouldn't be shocked if the Dems managed to nearly maintain their current level of power even if they are soundly defeated at the national political level. In part, that is a bit of why the Republicans need to win, because the Dems will crush them if they don't. (That could be biased by the fact that I am Californian, which means I watch the Dems routinely crush the Republicans. Perhaps I am overestimating the strength of the Dems and underestimating their foes.) That the Dems are stronger than ever is both an indictment against Trump, and a reason the Republicans need to win; the Dems will crush them if the Republicans lose many more times and we might be in for 20 years of pure Dem victories.

Honestly, I prefer the old Republicans too. By a lot. They were much more my style. I miss that era of the GOP.

I get why it changed though. Twenty years ago (also 40), the Republicans were genuinely trying to improve the world in a usually conservative way (which I am very much up for in many cases), but a decade ago, Republicans were using a 'death with dignity' strategy rather than fighting like Trump does, and people got tired of electing Republicans who were too concerned with their dignity to act, ceding cultural victory to the Democrats.

Losing slowly is not a popular strategy with voters most of the time. Trump basically did a hostile takeover of the Republican party, and did improve its chances of succeeding at Republican goals, though it also tossed some Republican goals aside and increased the chance that the Democrats would win enduring major victories quickly. I do think that the changes to the party aren't necessarily permanent whether or not the Republicans win, but that would be because it could always change again.

If Trump wins, I think that could lead to a lot of reforms in the Republican party that would marry parts of their old style with a willingness to fight, but that probably doesn't happen if Trump loses and his faction of the party gets repudiated.

That isn't entirely bad, I don't like a lot of the ideological changes and want a careful approach, but I am more worried about the power of the other team (and some of the ideological changes are good). Once your opponents know you won't fight, you are doomed. Even worse would be if his faction gets more extreme after losing and it is the rest of the Republicans that get booted.

Also, I very strongly think that every country should be led by leaders that want to make the country great. 'Make Liechtenstein Great' should be the slogan of the leaders of Liechtenstein. Whatever that means to the people of Liechtenstein. This holds even for countries like China that I think very poorly of.

Comment by deepthoughtlife on Any Trump Supporters Want to Dialogue? · 2024-10-07T00:08:56.950Z · LW · GW

Obviously I won't prove anything in this statement, but I just don't think there is a genuine case to be made that the common interpretation Trump's foes like to use is valid.

I completely disagree about the idea that Trump supported any sort of coup, and that the riot was anything more than a riot.

I agree that the statement linked from Trump sounds bad, but the interpretation seems like a misreading of Trump to me. It sounds like vociferous complaining, in his trademark sloppy style, that we don't know the outcome of the election due to fraud, and that he believes he would win if the votes were counted fairly. (Could it mean what you think it means? Perhaps, but that isn't the most likely reading to me.) Redoing an election that wasn't held would be required despite that not being in the constitution (because the electors must be selected), and you could make the argument that an election whose result cannot be known is in the same position. I assume (obviously mindreading is often faulty) that is what Trump would have been talking about if he were a more analytical speaker rather than an emotive one.

Article II, Section 1, Clause 2:

Each State shall appoint, in such Manner as the Legislature thereof may direct, a Number of Electors, equal to the whole Number of Senators and Representatives to which the State may be entitled in the Congress: but no Senator or Representative, or Person holding an Office of Trust or Profit under the United States, shall be appointed an Elector.

That is the relevant part of the constitution for selecting electors (aside from a later clause about Congress setting the time of selection), so as far as selecting electors goes, the state legislatures are in charge, and asking them to intervene doesn't seem like going out of bounds. Depending on the actual laws and constitution of the state, there may not be anything the legislature is allowed to do either, but it doesn't seem like something that is obviously out of bounds. (I do not want this sort of behavior, of course; the current election system does its best to ignore that the state legislature is in charge of who the electors are, for a reason.) States are even free to directly control who the electors vote for (so-called 'faithless elector' laws have been ruled constitutional by the Supreme Court). And as said before, this is hardly the first time someone has tried, after an election, to determine who the electors would be and have a slate prepared in case the result was overturned.

Also, the term 'fake electors' is obviously crap; the correct term is what the Trump side called them, 'contingent electors', as in contingent on the results being ruled to have really been the way he believed they were. Historically, the courts require such things to be done on time, and they sort it out later if it comes up.

I don't find Mike Pence credible, nor do I find what is often referred to as 'lawfare' indictments to be of any significant probative value. To be straightforward, I think Pence is just lying in an attempt to not be associated with Trump. Pence thinks of himself as a respectable person, and Trump is not the kind of person Pence thinks is seen as respectable. He was also directly trying his hand at running for president at the time (and politicians like to denigrate their opponents while casting themselves as heroic). Of course, Pence could even believe what he is saying exactly and that wouldn't make his interpretation correct (we can certainly convince ourselves that what we want to say is true, and I am sure some would say the same of me).

Every time I have looked into a specific claim relating to this, it hasn't turned out to be anything truly abnormal or there has been no credible (to me) confirmation. (Assuming that Pence is not credible.) I have not looked into all the claims, and honestly I am highly unlikely to do so at this point.

Comment by deepthoughtlife on Any Trump Supporters Want to Dialogue? · 2024-10-06T09:01:49.169Z · LW · GW

I don't think people believe that asking the legal system to rule on whether the laws were properly followed is somehow disqualifying, so unless I am mistaken about what they are claiming, it didn't happen in any meaningful way.

The media has intentionally misrepresented this. He believed there was cheating from the other side, and said so. He used the normal methods to complain about that, and the normal lawsuits about it to get the court to rule on the matter. It's all very normal. When the courts decided to not consider the matter (which was itself improper since they generally did that without considering the actual merits of the cases, generally claiming that it was somehow moot because the election was already over) he did nothing and just let his opponent become president (while continuing to vociferously complain).

Both Al Gore and Hillary Clinton made roughly the same level of complaint about the result as Trump. (Since I think both of them are terrible, that is a negative comparison, and I dislike that Trump matched them, but it isn't disqualifying.) You could actually argue it was his job to make these lawsuits (to see that the federal election was properly executed). Coming up with who the electors would be if the lawsuit changed the results is normal (and not at all new). There was no attempt to go outside the legal system.

In all likelihood there was a nonzero amount of cheating (but we don't actually know if it was favoring Biden, Republicans can cheat too) but I doubt there was any major conspiracy of it. I expect there is always some cheating by both sides, and we should try to reduce it, though I have no opinion on exactly how much or little there is. There were enough anomalies that investigating it would have made sense, if only to prevent them in the future.

The J6 riots were just normal riots, on a small scale, that Trump didn't support at all and were nothing approaching a coup at all. Congress was not in any real danger, and there were no plans to take over the government. While I strongly disapprove of riots, this has been dramatically overplayed for purely partisan purposes by people who want to tar him with it. The people who support actual riots for political purposes are his opponents, not him.

Comment by deepthoughtlife on Any Trump Supporters Want to Dialogue? · 2024-10-06T08:08:23.729Z · LW · GW

I'm a lifelong independent centrist who leans clearly Republican in voting despite my nature. I actually mostly read Dem or Dem leaning sources for the majority of my political reading (though it isn't super skewed and I do make sure to seek out both sides). I would definitely vote for a Democrat that seemed like a good candidate with good policies (and have at the state level). I believe it is my duty as an American citizen to know a lot more about politics than I would prefer. (I kind of hate politics.)
I really wished there was a valid third option in 2016, but unfortunately I couldn't even find a third party candidate that seemed better than Trump even then. Hillary was a truly abysmal candidate. That isn't actually enough to get me to vote for someone, and I would rather cast a protest vote than vote for someone who would be a bad choice. In the end, I only decided to vote for Trump two weeks before the election instead of casting a protest vote.
Due to preexisting animus, I probably would have ignored Trump's actual words and deeds on the 2016 campaign trail if the media hadn't constantly lied about them, but the things he said were both much truer than claimed, and actually made a lot of sense. The media has never stopped lying about Trump since then, but they just shredded their credibility with me and many others. I don't recall the sources, but unless I am remembering incorrectly (which is always a possibility), this lying campaign against Trump was directly suggested through op-eds in major newspapers, and then implemented. You can probably make good points against Trump, but no one seems to actually complain about things that are both true and actually bad? (I am very pedantic about truth, and find that I'm not interested in listening to people who twist things and then pretend they are true.)
I deeply want to change and improve things, and chafe at the idea of being restricted to some old formula for life but I've been forced to realize that conservatism is necessary. I am very much a centrist in terms of ideology, but in the current state of America, that means being very open to conservative ways and values. Understanding why what you are changing is the way it is, and being careful with the changes are both very necessary; since most things are actually pretty well tuned, incautious changes usually make things much worse. 
The extremely obvious reason why people support Trump is because he was a good and effective president (in comparison to other presidents, which is a low bar, I know). President is a very difficult job where people mostly screw up, and much of it is ceremonial, but Trump had a large number of successes compared to what I expected going in. The state of the country was clearly improved by his actions, and would be again. The country and world situation got much worse under his successor, Joe Biden (in a way that also mirrored the failures of Trump's predecessor).
I've had a strong personal distaste for Trump for decades, but he was either the best president in my lifetime or very near it. I'm loyal to America, so I've been forced to upgrade my opinion of him dramatically. I still wouldn't like to hang out with him, and wouldn't encourage others to do so, but that isn't the point of selecting a president. It's for the good of the country.
He's actually a centrist; there is a reason he was comfortable as a Democrat before, and as a Republican now. None of his personal positions are extreme. He's willing to work with people more extreme than him, and it tends to be to the right, but the only reason he worked primarily with Republicans is that the Democrats were busy trying to score political points instead of advancing their policies. Since Trump actually wants things, people can work with him if they choose to, and they don't have to do anything extreme to do so.
His opponents are pretty unimpressive at best. Kamala Harris was a terrible state politician (I'm Californian and saw what she did in my state); either corrupt as hell or incompetent as hell (likely both), though I don't have particular evidence at hand of it. She was a completely ineffectual VP. Her VP choice is deeply unimpressive. She obviously helped cover up Biden's decline into clearly being an unfit president. Her ideas are either stolen from Trump's campaign or incredibly harmful. Almost no one actually expects her to be a good president? Just a few months ago even the Democrats thought she was completely incompetent, and she was only selected for contingent reasons that had nothing to do with her quality as a candidate.
Additionally, the Democrats have gone too far, and the Republicans need to be given another turn. As an independent, I wouldn't want a particular party to grow too powerful, and the Democrats are in a much stronger overall position in terms of controlling the country outside of politics. If Trump wins, the Democrats will still be in a strong position and have a lot of control over the next four years, while if Harris wins, the Democrats might get away with going completely overboard like they have been trying to.
Some short points: He's not the most accurate speaker, and really doesn't care to be (which rubs me the wrong way), but he means what he says, and actually tries to follow through. I actually think he lies less than the standard politician, which is, I admit, mostly an indictment of his fellow politicians. He's the only president I know of to reduce regulation. He appointed very capable judges whose legal reasoning seems pretty good (even when I disagree with them). He was willing to work with the opposition, but had genuine goals for the good of the country rather than just being political. I know what I'm getting with him at this point (Trump is who he is). Trump will be term limited, so the next election would be an even fight between the Dems and Reps.
If you want to respond to my post, I'm open to pushback, though I don't know how much I would say since I am a long term lurker who rarely comments (mostly in bursts). I would prefer responding to things much shorter than my post here I admit. I'm not happy with how long this is, but I tend to be long winded on each point in a discussion and have to consciously dial it back. Since each part of my reply would likely be long, it would be difficult to respond to something this length personally. I would prefer to talk in general, rather get bogged down in details that are not actually important to how people actually view the situation. That said, important details are obviously important to talk about.

Comment by deepthoughtlife on The 'Bitter Lesson' is Wrong · 2022-09-17T01:50:46.702Z · LW · GW

No. That's a foolish interpretation of domain insight. We have a massive number of highly general strategies that nonetheless work better for some things than others. A domain insight is simply some kind of understanding involving the domain being put to use. Something as simple as whether to use a linked list or an array can involve a minor domain insight. Whether to use a Monte Carlo search or a depth-limited search, and so on, are definitely insights. Most advances in AI to this point have in fact been based on domain insights, and only a small amount on scaling within an approach (though more so recently). Even the 'bitter lesson' is an attempted insight into the domain (one that is wrong due to being a severe overreaction to previous failure).

Also, most domain insights are in fact an understanding of constraints. 'This path will never have a reward' is both an insight and a constraint. 'Dying doesn't allow me to get the reward later' is both a constraint and a domain insight. So is 'the lists I sort will never have numbers outside the range 143 to 987' (which is useful for an O(n) type of sort). We are, in fact, trying to automate the process of getting domain insights via machine with this whole enterprise in AI, especially in whatever we have trained them for.
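As a minimal sketch of that last constraint (the function name and test values are mine, not from the comment), knowing the inputs fall in a fixed range enables counting sort, which runs in O(n + k) rather than O(n log n):

```python
def counting_sort_bounded(xs, lo=143, hi=987):
    """Sort integers known to lie in [lo, hi].

    The range bound is the 'domain insight': it lets us tally
    occurrences in O(n) and emit them in order, instead of
    using a general comparison sort.
    """
    counts = [0] * (hi - lo + 1)
    for x in xs:
        counts[x - lo] += 1
    out = []
    for offset, c in enumerate(counts):
        out.extend([lo + offset] * c)
    return out

print(counting_sort_bounded([987, 143, 500, 500, 200]))
# → [143, 200, 500, 500, 987]
```

A general-purpose sort can't exploit that bound; the constraint is what buys the asymptotic improvement.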

Even, 'should we scale via parameters or data' is a domain insight. They recently found out they had gotten that wrong (Chinchilla) too because they focused too much on just scaling.

Alphazero was given some minor domain insights (how to search and how to play the game), years later, and ended up slightly beating a much earlier approach, because they were trying to do that specifically. I specifically said that sort of thing happens. It's just not as good as it could have been (probably).

Comment by deepthoughtlife on Thoughts on AGI consciousness / sentience · 2022-09-17T01:34:52.357Z · LW · GW

I do agree with your rephrasing. That is exactly what I mean (though with a different emphasis).

Comment by deepthoughtlife on Why Do People Think Humans Are Stupid? · 2022-09-15T01:35:17.931Z · LW · GW

I agree with you. The biggest leap was going to human generality level for intelligence. Humanity already is a number of superintelligences working in cooperation and conflict with each other; that's what a culture is. See also corporations and governments. Science too. This is a subculture of science worrying that it is superintelligent enough to create a 'God' superintelligence.

To be slightly uncharitable, the reason to assume otherwise is fear -either their own or to play on that of others. Throughout history people have looked for reasons why civilization would be destroyed, and this is just the latest. Ancient prophesiers of doom were exactly the same as modern ones. People haven't changed that much.

That doesn't mean we can't be destroyed, of course. A small but nontrivial percentage of doomsayers were right about the complete destruction of their civilization. They just happened to be right by chance most of the time.

I also agree that quantitative differences could possibly end up being very large, since we already have immense proof of that in one direction given that we have superintelligences massively larger than we are already, and computers have already made them immensely faster than they used to be.

I even agree that it is likely that the key quantitative advantages would be in supra-polynomial arenas that would be hard to improve too quickly even for a massive superintelligence. See the exponential resources we are already pouring into chip design for continued smooth but decreasing progress, and the even higher exponential resources being poured into dumb tool AIs for noticeable but not game-changing increases. While I am extremely impressed by some of them, like Stable Diffusion (an image generation AI that has been my recent obsession), there is such a long way to go that resources will be a huge problem before we even get to human level, much less superhuman.

Comment by deepthoughtlife on Thoughts on AGI consciousness / sentience · 2022-09-15T01:11:49.780Z · LW · GW

Honestly Illusionism is just really hard to take seriously. Whatever consciousness is, I have better evidence it exists than anything else since it is the only thing I actually experience directly. I should pretend it isn't real...why exactly? Am I talking to slightly defective P-zombies?


'If the computer emitted it for the same reasons...' is a clear example of a begging-the-question fallacy. If a computer claimed to be conscious because it was conscious, then it logically has to be conscious, but that is the possible dispute in the first place. If you claim consciousness isn't real, then obviously computers can't be conscious. Note that you aren't talking about real Illusionism if you don't think we are p-zombies. Only the first of the two possibilities you mentioned is Illusionism, if I recall correctly.


You seem like one of the many people trying to systematize things they don't really understand. It's an understandable impulse, but it leads to an illusion of understanding, which is the only thing that could lead to a systematization like Illusionism; it seems like frustrated people claiming there is nothing to see here.
If you want a systemization of consciousness that doesn't claim things it doesn't know, then assume consciousness is the self-reflective and experiential part of the mind that controls and directs large parts of the overall mind. There is no need to state what causes it.


If a machine fails to be self-reflective or experiential then it clearly isn't conscious. It seems pretty clear that modern AI is neither. It probably fails the test of even being a mind in any way, but that's debatable.

Is it possible for a machine to be conscious? Who knows. I'm not going to bet against it, but current techniques seem incredibly unlikely to do it.

Comment by deepthoughtlife on Can someone explain to me why most researchers think alignment is probably something that is humanly tractable? · 2022-09-03T13:46:17.563Z · LW · GW

As individuals, humans routinely do things much too hard for them to fully understand successfully. This is partly due to innately hardcoded stuff (mostly for things we think are simple, like vision and controlling our bodies' automatic systems), somewhat due to innate personality, but mostly due to the training process our culture puts us through (for everything else).

For its part, a culture can take the inputs of millions to hundreds of millions of people (or even more when stealing from other cultures), and distill them into both insights and practices that absolutely no one would have ever come up with on their own. The cultures themselves are, in fact, massively superintelligent compared to us, and people are effectively putting their faith either in AI being no big deal because it is too limited, or in the fact that we can literally ask a superintelligence for help in designing things much stupider than culture to not turn on us too much.

AI is currently a small sub-culture within the greater cultures, and struggling a bit with the task, but as AI grows more impressive, much more of culture will be about how to align and improve AI for our purposes. If the full might of even a midsized culture ever sees this as important enough, alignment will probably become quite rapid, not because it is an easy question, but because cultures are terrifyingly capable.

At a guess, alignment researchers have seen countless impossible tasks fall to the midsized 'Science' culture of which they are a part, and many think this is much the same. 'Human achievable' means anything a human-based culture could ever do. This is just about anything that doesn't violate the substrates it is based on too much (and you could even see AI as a way around that). Can human cultures tame a new substrate? It seems quite likely.

Comment by deepthoughtlife on What's the Most Impressive Thing That GPT-4 Could Plausibly Do? · 2022-08-31T19:44:08.680Z · LW · GW

I'm hardly missing the point. It isn't impressive to have it be exactly 75%, not more or less, so the fact that it can't always be that is irrelevant. His point isn't that that particular exact number matters, it's that the number eventually becomes very small.  But since the number being very small compared to what it should be does not prevent it from being made smaller by the same ratio, his point is meaningless. It isn't impressive to fulfill an obvious bias toward updating in a certain direction.

Comment by deepthoughtlife on Any Utilitarianism Makes Sense As Policy · 2022-08-31T15:51:45.361Z · LW · GW

It doesn't take many people to cause these effects. If we make them 'the way', following them doesn't take an extremist, just someone trying to make the world better, or some maximizer. Both these types are plenty common, and don't have to make it fanatical at all. The maximizer could just be a small band of petty bureaucrats who happen to have power over the area in question. Each one of them just does their role, with a knowledge that it is to prevent overall suffering. These aren't even the kind of bureaucrats we usually dislike! They are also monsters, because the system has terrible (and knowable) side effects.

Comment by deepthoughtlife on A gentle primer on caring, including in strange senses, with applications · 2022-08-30T17:13:38.036Z · LW · GW

I don't have much time, so:

While footnote 17 can be read as applying, it isn't very specific.

For all that you are doing math, this isn't mathematics, so base needs to be specified.

I am convinced that people really do give occasional others a negative weight.

And here are some notes I wrote while finishing the piece (that I would have edited and tightened up a lot)(it's a bit all over the place):

This model obviously assumes utilitarianism.
Honestly, their math does seem reasonable to account for people caring about other people (as long as they care about themselves at all on the same scale, which could even be negative, just not exactly 0).
They do add an extraneous claim that the numbers for the weight of a person can't be negative (because they don't understand actual hate? At least officially.) If someone hates themselves, then you can't do the numbers under these constraints, nor if they hate anyone else. But this constraint seems completely unnecessary, since you can sum negatives with positives easily enough.
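To illustrate the point that nothing in the math breaks (a hedged sketch; the function, names, and numbers are mine, not the post's), a weighted sum of utilities handles a negative weight exactly like a positive one:

```python
def overall_utility(weights, base_utilities):
    """One person's overall utility as a weighted sum of everyone's
    base utility. A negative weight (hate) simply counts that
    person's welfare against the total; no constraint is needed."""
    return sum(w * u for w, u in zip(weights, base_utilities))

# self weight 1.0, a friend 0.5, someone hated -0.25
print(overall_utility([1.0, 0.5, -0.25], [8.0, 8.0, 8.0]))
# → 10.0
```

The only genuinely awkward case is the one noted above: a self-weight of exactly 0, which makes rescaling everything relative to how much you care about yourself undefined.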
I can't see the point of using an adjacency matrix (of a weighted directed graph).
Being completely altruistic doesn't seem like everyone gets a 1, but that everyone gets at least that much.
I don't see a reason to privilege mental similarity to myself, since there are people unlike me that should be valued more highly. (Reaction to footnote 13) Why should I care about similarities to pCEV when valuing people?

Thus, they care less about taking richer people's money. Why is the first example explaining why someone could support taking money from people you value less to give to other people, while not supporting doing so with your own money? It's obviously true under utilitarianism (which I don't subscribe to), but it also obscures things by framing 'caring' as 'taking things from others by force'.

In 'Pareto improvements and total welfare' should a social planner care about the sum of U, or the sum of X? I don't see how it is clear that it should be X. Why shouldn't they value the sum of U, which seems more obvious?

'But it's okay for different things to spark joy'. Yes, if I care about someone I want their preferences fulfilled, not just mine, but I would like to point out that I want them to get what they want, not just for them to be happy.
Talking about caring about yourself though, if you care about yourself at different times, then you will care about what your current self does, past self did, and future self will, want. I'm not sure that my current preferences need to take into account those things though.
Thus I see two different categories of thing mattering as regards preferences. Contingent or instrumental preferences are changeable in accounting, while you should evaluate things as if your terminal preferences are unchanging.
Even though humans can have them change, such as when they have a child. Even if you already love your child automatically when you have one, you don't necessarily care who that child turns out to be, but you care quite a bit afterwards. See any time travel scenario: the parent will care very much that Sally no longer exists even though they now have Sammy. They will likely now also terminally value Sammy. Take into account that you will love your child, but not who they are, unless you will have an effect on it (such as learning how to care for them in advance making them a more trusting child).

In practice, subsidies and taxes end up not being about externalities at all, or to a very small degree. Often, one kind of externality (often positive) will be ignored even when it is larger than the other (often negative) externality.
This is especially true in modern countries where people ignore the positive externalities of people's preferences being satisfied making them a better and more useful person in society, while they are obsessed with the idea of the negatives of any exchange.
I have an intuition that the maximum people would pay to avoid an externality is not really that close to its actual effects, and that people would generally lie if you asked them even if they knew.

In the real world, most people (though far from all) seem to have the intuition that the government uses the money they get from a tax less well than the individuals they take it from do.
Command economies are known to be much less efficient than free markets, so the best thing the government could do with a new tax is to lower less efficient taxes, but taxes only rarely go down, so this encourages wasted resources. Even when they do lower taxes, it isn't by eliminating the worst taxes. When they put it out in subsidies, they aren't well targeted subsidies either, but rather, distortionary.
Even a well targeted tax on negative externalities would thus have to handle the fact that it is, in itself, something with significant negative externalities even beyond the administrative cost (of making inefficient use of resources).

It's weird to bring up having kids vs. abortion and then not take a position on the latter. (Of course, people will be pissed at you for taking a position too.)

There are definitely future versions of myself whose utility are much more or less valuable to me than others despite being equally distant.
If in ten years I am a good man, who has started a nice family, that I take good care of, then my current self cares a lot more about their utility than an equally (morally) good version of myself that just takes care of my mother's cats, and has no wife or children (and this is separate from the fact that I would care about the effects my future self would have on that wife and children or that I care about them coming to exist).

Democracy might be less short-sighted on average because future people are more similar to average other people that currently exist than you happen to be right now. But then, they might be much more short-sighted because you plan for the future, while democracy plans for right now (and getting votes.) I would posit that sometimes one will dominate, and sometimes the other.
As to your framing, the difference between you-now and you-future is mathematically bigger than the difference between others-now and others-future if you use a ratio for the number of links to get to them.
Suppose people change half as much in a year as your sibling is different from you, and you care about similarity for what value you place on someone. Thus, two years equals one link.
After 4 years, you are now two links away from yourself-now and your sibling is 3 from you now. They are 50% more different than future you (assuming no convergence). After eight years, you are 4 links away, while they are only 5, which makes them 25% more different to you than you are.
Alternately, between years four and eight their distance from you-now has grown by 67% (from 3 links to 5), while yours has grown by 100% (from 2 links to 4).
It thus seems like they have changed far less than you have, and are more similar to who they were, so why should you treat them as having the same rate?
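The arithmetic above can be replayed mechanically (a sketch under the stated assumptions: a sibling starts 1 link away, and everyone drifts half a sibling-gap per year, i.e. 1 link per 2 years):

```python
def links_after(years, start_links=0, drift_per_year=0.5):
    """Distance in 'links' from me-now after some years of drift."""
    return start_links + drift_per_year * years

for years in (4, 8):
    me = links_after(years)                   # future-me from me-now
    sib = links_after(years, start_links=1)   # future sibling from me-now
    print(years, me, sib, (sib - me) / me)
# year 4: me=2.0, sibling=3.0 → sibling 50% more distant
# year 8: me=4.0, sibling=5.0 → sibling 25% more distant
```

The relative gap shrinks over time because the sibling's one-link head start is fixed while both distances grow at the same rate.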

Comment by deepthoughtlife on A gentle primer on caring, including in strange senses, with applications · 2022-08-30T15:27:59.382Z · LW · GW

I'm only a bit of the way in, and it is interesting so far, but it already shows signs of needing serious editing, and there are other ways it is clearly wrong too.

In 'The inequivalence of society-level and individual charity' they list the scenarios as 1, 1, and 2 instead of A, B, C, as they later use. Later, it refers incorrectly to preferring C to A with different necessary weights when the second reference is to preferring C to B.

The claim that money becomes utility as the log of the amount of money isn't true, but is probably close enough for this kind of use. You should add a note to that effect. (The effects of money are discrete at the very least.)

The claim that the derivative of the log of y is 1/y is also incorrect as written. In general, log means either log base 10, or something specific to the area of study. If written generally, you must specify the base. (For instance, in computer science it is base 2, but I would have to explain that if I were doing external math with it.) The derivative of the natural log of y is 1/y, but that isn't true of any other log. You should fix that statement by specifying that you are using ln instead of log (or just prepending the word natural).
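For reference, the general rule (my notation, stated here to make the correction concrete): only the natural log differentiates to 1/y, and any other base picks up a constant factor via change of base:

```latex
\frac{d}{dy}\ln y = \frac{1}{y},
\qquad
\frac{d}{dy}\log_b y = \frac{d}{dy}\,\frac{\ln y}{\ln b} = \frac{1}{y \ln b}
```

So for base 10 the derivative is 1/(y ln 10), roughly 0.434/y, not 1/y.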

Some of it is just plain wrong in my opinion: for instance, claiming that a weight can't be negative assumes away the existence of hate, but people do hate either themselves or others on occasion in non-instrumental ways, wanting them to suffer, which renders this claim invalid (unless they hate literally everyone).

I also don't see how being perfectly altruistic necessitates valuing everyone else exactly the same as you. I could still value others different amounts without being any less altruistic, especially if the difference is between a lower value for me and a higher one for others. Relatedly, it is possible to not care about yourself at all, but this math can't handle that.

I'll leave aside other comments because I've only read a little.

Comment by deepthoughtlife on Any Utilitarianism Makes Sense As Policy · 2022-08-30T14:22:35.558Z · LW · GW

I strongly disagree. It would be very easy for a non-omnipotent, unpopular government that has limited knowledge of the future and will be overthrown in twenty years to do a hell of a lot of damage with negative utilitarianism, or any other imperfect utilitarianism. On a smaller scale, even individuals could do it alone.

A negative utilitarian could easily judge that something that had the side effect of making people infertile would cause far less suffering than not doing it, causing immense real world suffering amongst the people who wanted to have kids, and ending civilizations. If they were competent enough, or the problem slightly easier than expected, they could use a disease that did that without obvious symptoms, and end humanity.

Alternately, a utilitarian that valued the far future too much might continually cause the life of those around them to be hell for the sake of imaginary effects on said far future. They might even know those effects are incredibly unlikely, and that they are more likely to be wrong than right due to the distance, but it's what the math says, so...they cause a civil war. The government equivalent would be to conquer Africa (success not necessary for the negative effects, of course), or something like that, because your country is obviously better at ruling, and that would make the future brighter. (This could also be something done by a negative utilitarian to alleviate the long-term suffering of Africans).

Being in a limited situation does not automatically make Utilitarianism safe. (Nor any other general framework.) The specifics are always important.

Comment by deepthoughtlife on [deleted post] 2022-08-30T13:56:51.492Z

A lot of this depends on your definition of doomsday/apocalypse. I took it to mean the end of humanity, and a state of the world we consider worse than our continued existence. If we valued the actual end state of the world more than continuing to exist, it would be easy to argue it was a good thing, and not a doom at all. (I don't think the second condition is likely to come up for a very long time as a reason for something to not be doomsday.) For instance, if each person created a sapient race of progeny that weren't human, but they valued as their own children, and who had good lives/civilizations, then the fact humanity ceased to exist due to a simple lack of biological children would not be that bad. This could in some cases be caused by AGI, but wouldn't be a problem. (It would also be in the far future.)

AI doomsday never (though it is far from impossible). Not doomsday never, it's just unlikely to be AGI. I believe we both aren't that close, and that 'takeoff' would be best described as glacial, and we'll have plenty of time to get it right. I am unsure of the risk level of unaligned moderately superhuman AI, but I believe (very confidently) that tech level for minimal AGI is much lower than the tech level for doomsday AGI. If I was wrong about that, I would obviously change my mind about the likelihood of AGI doomsday.  (I think I put something like 1 in 10 million in the next fifty years. [Though in percentages.] Everything else was 0, though in the case of 25 years, I just didn't know how many 0s to give it.)

'Tragic AGI disasters' are fairly likely though. For example, an AGI that alters traffic light timing to make crashes occur, or intentionally sabotages things it is supposed to repair. Or even an AGI that is well aligned to the wrong people or moral framework doing things like refusing to allow necessary medical procedures due to expense, even when people are willing to use their own money to pay (since it thinks the person is worth less than the cost of the procedure, and thus has negative utility, perhaps). Alternately, it could predict that the people wanting the procedure were being incoherent, and actually would value their kids getting the money more, but feel like they have to try. Whether this is correct or not, it would still be AGI killing people.

I would actually rate the risk of Tool AI as higher, because humans will be using those to try to defeat other humans, and those could very well be strong enough to notably enhance the things humans are bad at. (And most of the things moderately superhuman AGI could do would be doable sooner with tool AI and an unaligned human.) An AI could help humans design a better virus that is like 'Simian Hemorrhagic Fever', but that affects humans, and doesn't apply to people with certain genetic markers (that denote the ethnicity or other traits of the people making it). Humans would then test, manufacture, distribute, and use it to destroy their enemies. Then oops, it mutates, and hits everyone. This is still a very unlikely doom though.

Comment by deepthoughtlife on An Introduction to Current Theories of Consciousness · 2022-08-30T13:24:14.661Z · LW · GW

Interactionism would simply require an extension of physics to include the interaction between the two, which would not defy physics any more than adding the strong nuclear force did. You can hold against it that we do not know how it works, but that's a weak point because there are many things where we still don't know how they work.

Epiphenomenalism seems irrelevant to me since it is simply a way you could posit things to be. A normal dualist ignores the idea because there is no reason to posit it. We can obviously see how consciousness has effects on the body, so there simply isn't a reason to believe it only goes the other way. Additionally, to me, Epiphenomenalism seems clearly false. Dualism as a whole has never said the body can't have effects on consciousness either.

Causal closure seems unrelated to the actuality of physics. It is simply a statement of philosophical belief. It is one dualists obviously disagree with in the strong version, but that is hardly incompatibility with actual physics. Causal closure is not used to any real effect, and is hard to reconcile with how things seem to actually be. You could even argue that causal closure denies that things like the idea of math, or the idea of physics, can meaningfully affect behavior.

Comment by deepthoughtlife on An Introduction to Current Theories of Consciousness · 2022-08-30T13:04:22.979Z · LW · GW

If they didn't accept physical stuff as being (at least potentially) equal to consciousness they actually wouldn't be a dualist. Both are considered real things, and though many have less confidence in the physical world, they still believe in it as a separate thing. (Cartesian dualists do have the least faith in the real world, but even they believe you can make real statements about it as a separate thing.) Otherwise, they would be a 'monist'. The 'dual' is in the name for a reason. 

Comment by deepthoughtlife on An Introduction to Current Theories of Consciousness · 2022-08-29T16:58:23.857Z · LW · GW

This is clearly correct. We know the world through our observations, which clearly occur within our consciousness, and are thus at least equally proving our consciousness. When something is being observed, you can assume that the something else doing the observations must exist. If my consciousness observes the world, my consciousness exists. If my consciousness observes itself, my consciousness exists. If my consciousness is viewing only hallucinations, it still exists for that reason. I disagree with Descartes, but 'I think therefore I am' is true of logical necessity.

I do not like immaterialism personally, but it is more logically defensible than illusionism.

Comment by deepthoughtlife on An Introduction to Current Theories of Consciousness · 2022-08-29T16:49:00.387Z · LW · GW

The description and rejection given of dualism are both very weak. Also, dualism is a much broader group of models than is admitted here.


The fact is, we only have direct evidence of the mind, and everything else is just an attempt to explain certain regularities. An inability to imagine that the mind could be all that exists is clearly just a willful denial, and not evidence, but notably, dualism does not require nor even suggest that the mind is all there is, just that it is all we have proof of (even in the cartesian variant). Thus, dualism.

Your personal refusal to imagine that physicalism is false and dualism is true seems completely irrelevant to whether or not dualism is true. Also, dualism hardly 'defies' physics. In dualism, physics is simply 'under' a meta-physics that includes consciousness as another category, without even changing physics. (If it did defy physics, that would be strong proof against physics since it is literally all of the evidence we actually have, but there is no incompatibility at all.)


Description wise, there are forms of dualism for which you give an incorrect analysis of the 'teletransporter' paradox. Obviously, the consciousness interacts with reality in some way, and there is no proof nor reason in dualism to assume that the consciousness could not simply follow the created version in order to keep interacting with the world.


Mind-body wise, the consciousness certainly attaches to the body through the brain to alter the world, assuming the brain and body are real (which the vast majority of dualists believe). Consciousness would certainly alter brain states if brain states are a real thing.


We also don't know that a consciousness would not attach itself to a 'Chinese Room'.


Your attempts at reasoning have led you astray in other areas too, but I'm more familiar with the ways in which these critiques of dualism are wrong. You seem extremely confident of this incorrect reasoning as well. This seems more like a motivated defense of illusionism than actually laying out the theories correctly.

Comment by deepthoughtlife on What would you expect a massive multimodal online federated learner to be capable of? · 2022-08-29T14:01:01.049Z · LW · GW

I was replying to someone asking why it isn't 2-5 years. I wasn't making an actual timeline. In another post elsewhere on the site, I mention that they could give memory to a system now and it would be able to write a novel.

Without doing so, we obviously can't tell how much planning they would be capable of if we did, but current models don't make choices, and thus can only be scary for whatever people use them for, and their capabilities are quite limited.

I do believe that there is nothing inherently stopping the capabilities researchers from switching over to more agentic approaches with memory and the ability to plan, but it would be much harder than the current plan of just throwing money at the problem (increasing compute and data).

It will require paradigm shifts (I do have some ideas as to ones that might work) to get to particularly capable and/or worrisome levels, and those are hard to predict in advance, but they tend to take a while. Thus, I am a short term skeptic of AI capabilities and danger.

Comment by deepthoughtlife on What would you expect a massive multimodal online federated learner to be capable of? · 2022-08-28T00:31:44.711Z · LW · GW

You're assuming that it would make sense to have a globally learning model, one constantly still training, when that drastically increases the cost of running the model over present approaches. Cost is already prohibitive, and to reach that many parameters any time soon would be exorbitant (but that will probably happen eventually). Plus, the sheer amount of data necessary for such a large one is crazy, and you aren't getting much data per interaction. Note that Chinchilla recently showed that lack of data is a much bigger issue right now for models than lack of parameters, so they probably won't focus on parameter counts for a while.

Additionally, there are many fundamental issues we haven't yet solved for DL-based AI. Even if it was a huge advancement over present models, which I don't believe it would be at that size, it would still have massive weaknesses around remembering and planning, and would largely lack any agency. That's not scary. It could be used for ill purposes, but not at human (or above) levels.

I'm skeptical of AI in the near term because we are not close. (And the results of scaling are sublinear in many ways. I believe that mathematically, it's a log, though how that transfers to actual results can be hard to guess in advance.)

Comment by deepthoughtlife on What's the Most Impressive Thing That GPT-4 Could Plausibly Do? · 2022-08-28T00:15:22.137Z · LW · GW

You're assuming that the updates are mathematical and unbiased, which is the opposite of how people actually work. If your updates are highly biased, it is very easy to just make large updates in that direction any time new evidence shows up. As you get more sure of yourself, these updates start getting larger and larger rather than smaller as they should.

Comment by deepthoughtlife on Adversarial epistemology · 2022-08-28T00:11:58.018Z · LW · GW

That sort of strategy only works if you can get everyone to coordinate around it, and if you can do that, you could probably just get them to coordinate on doing the right things. I don't know if HR would listen to you if you brought your concerns directly to them, but persuading them on that sort of thing probably isn't harder than convincing the rest of your fellows to defy HR. (Which is just a guess.) In cases where you can't get others to coordinate on it, you are just defecting against the group, to your own personal loss. This doesn't seem like a good strategy.

In more limited settings, you might be able to convince your friends to debate things in your preferred style, though this depends on them in particular. As a boss, you might be able to set up a culture where people are expected to make strong arguments in formal settings. Beyond these, I don't really think it is practical. (They don't generalize; for instance, as a parent, your child will be incapable of making strong arguments for an extremely long time.)

Comment by deepthoughtlife on Help Understanding Preferences And Evil · 2022-08-28T00:00:48.672Z · LW · GW

That does sound problematic for his views if he actually holds these positions. I am not really familiar with him, even though he did write the textbook for my class on AI (third edition) back when I was in college. At that point, there wasn't much on the now current techniques and I don't remember him talking about this sort of thing (though we might simply have skipped such a section).

You could consider it that we have preferences over our preferences too. It's a bit too self-referential, but that's actually a key part of being a person. You could determine the things that we consider to be 'right' directly from how we act when knowingly pursuing those objectives, though this requires much more insight.

You're right, the debate will keep going on in philosophical style, but whether or not it works as an approach for something different from humans could change that.

Comment by deepthoughtlife on Help Understanding Preferences And Evil · 2022-08-27T14:10:36.794Z · LW · GW

If something is capable of fulfilling human preferences in its actions, and you can convince it to do so, you're already most of the way to getting it to do things humans will judge as positive. Then you only need to specify which preferences are to be considered good in an equally compelling manner. This is obviously a matter of much debate, but it's an arena we know a lot about operating in. We teach children these things all the time.

Comment by deepthoughtlife on What's the Most Impressive Thing That GPT-4 Could Plausibly Do? · 2022-08-26T19:41:34.313Z · LW · GW

It matches his pattern of behavior to freak out about AI every time there is an advance, and I'm basically accusing him of being susceptible to confirmation bias, perhaps the most common human failing even when trying to be rational.

He claims to think AI is bound to destroy us, and literally wrote about how everyone should just give up. (Which I originally thought was for April Fool's Day, but turned out to not be.) He can't be expected to carefully scrutinize the evidence to only give it the weight it deserves, or even necessarily the right sign. Consider the reverse: a massive skeptic who thought there was no point even caring for the next fifty years. You wouldn't need him to have previously quadrupled his timeline in order to be unimpressed when he lengthened it again the next time AI failed to live up to what people claimed.

Comment by deepthoughtlife on What's the Most Impressive Thing That GPT-4 Could Plausibly Do? · 2022-08-26T16:28:30.660Z · LW · GW

If they chose to design it with effective long-term memory, and a focus on novels (especially prompting via summary), maybe it could write some? They wouldn't be human level, but people would be interested enough in novels written on a whim to match some exact scenario that it could be valuable. It would also be good evidence of advancement, since losing track of things is a huge current weakness.

Comment by deepthoughtlife on What's the Most Impressive Thing That GPT-4 Could Plausibly Do? · 2022-08-26T16:21:04.354Z · LW · GW

But wouldn't that be easy? He seems to take every little advancement as a big deal.

Comment by deepthoughtlife on The Shard Theory Alignment Scheme · 2022-08-25T22:25:00.758Z · LW · GW

I would like to point out that what johnswentworth said about being able to turn off an internal monologue is completely true for me as well. My internal monologue turns itself on and off several (possibly many) times a day when I don't control it, and it is also quite easy to tell it which way to go on that. I don't seem to be particularly more or less capable with it on or off, except on a very limited number of tasks. Simple tasks are easier without it, while explicit reasoning and storytelling are easier with it. I think my default is off when I'm not worried (but I do an awful lot of intentional verbal daydreaming and reasoning about how I'm thinking too).

Comment by deepthoughtlife on [Review] The Problem of Political Authority by Michael Huemer · 2022-08-25T20:13:48.503Z · LW · GW

So the example given to decry a hypothetical, obviously bad situation applies even better to what they're proposing. It's every bit the same coercion as they're decrying, but with less personal benefit and choice (you get nothing out of this deal). And they admit this? This is self-refuting.

Security agencies don't have any more reason to compete on quality than countries do; actually less, because they have every bit as much force, and you don't really have any say. What, you're in the middle of a million people with company A security, and you think you can pick B and they'll be able to do anything?

Comment by deepthoughtlife on [Review] The Problem of Political Authority by Michael Huemer · 2022-08-25T17:05:33.133Z · LW · GW

Except that is clearly not real anarchy. It is a balance of power between the states. The states themselves ARE the security forces in this proposal. I'm saying that they would conquer everyone who doesn't belong to one.

Comment by deepthoughtlife on [Review] The Problem of Political Authority by Michael Huemer · 2022-08-25T15:28:05.421Z · LW · GW

Anarchists always miss the argument from logical necessity, which I won't actually make because it is too much effort, but in summary, politics abhors a vacuum. If there is not a formal power you must consent to, there will be an informal one. If there isn't an informal one, you will shortly be conquered.

In these proposals, what is to stop these security forces from simply conquering anyone and everyone that isn't under the protection of one? Nothing. Security forces have no reason to fight each other to protect your right not to belong to one. And they will conquer, since the ones that don't, won't grow to keep pace. It is thus the same as the example given of a job offer you can't refuse, except that here the deal offered likely is terrible (since they have no reason to give you a good one).

Why give up a modern, functional government, where you have an actual say, for an ad-hoc, notably violent one where you have no say? I have a lot of problems with the way governments operate, but this isn't better. You can always just support getting rid of or reforming nonfunctional and bad governments, and not be an anarchist.

Comment by deepthoughtlife on Adversarial epistemology · 2022-08-25T13:56:21.421Z · LW · GW

The examples used don't really seem to fit with that though. Blind signatures are things many/most people haven't heard of, and not how things are done; I freely admit I had never heard of them before the example. Your HR department probably shouldn't be expected to be aware of all the various things they could do, as they are ordinary people. Even if they knew what blind signatures were, that doesn't mean it is obvious they should use them, or how to do so even if they thought they should (which you admit). After reading the Wikipedia article, that doesn't seem like an ordinary level of precaution for surveys. (Maybe it should be, but then you need to make that argument, so it isn't a good example for this purpose, in my opinion.)

I also don't blame you for not just trusting the word of the HR department that it is anonymous. But fundamentally speaking, wouldn't you (probably) only have their word that they were using Chaumian blind signatures anyway? You probably wouldn't be implementing the solution personally, so you'd have to trust someone on that score. Even if you did, then the others would probably just have to trust you then. The HR department could be much sneakier about connecting your session to your identity (which they would obviously claim is necessary to prevent multiple voting), but would that be better? It wouldn't make their claim that you will be kept anonymous any more trustworthy.

Technically, you have to trust that the math is as people say it is even if you do it yourself. And the operating system. And the compiler. Even with physical processes, you have to count on things like them not having strategically placed cameras (and that they won't violate the physical integrity of the process).
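(As an aside on what 'the math' being trusted here actually is: a Chaumian blind signature over textbook RSA fits in a few lines. This is only a toy sketch with tiny, deliberately insecure parameters chosen for illustration, not anything resembling a real implementation.)

```python
# Toy Chaumian blind signature using textbook RSA.
# All parameters are tiny and insecure -- for illustration only.
p, q = 61, 53
n = p * q                      # RSA modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (modular inverse, Python 3.8+)

m = 42                         # message (in practice, a hash of the ballot)
r = 19                         # blinding factor, coprime to n

blinded = (m * pow(r, e, n)) % n       # voter blinds the message
blind_sig = pow(blinded, d, n)         # signer signs without ever seeing m
sig = (blind_sig * pow(r, -1, n)) % n  # voter removes the blinding

# The unblinded value is a valid ordinary RSA signature on m,
# yet the signer cannot link it back to the blinded message it signed.
assert pow(sig, e, n) == m
```

The point being: even after reading code this short, you are still trusting the interpreter, the machine, and whoever actually deploys it.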

Math is not a true replacement for trust. There is no method to avoid having to trust people (that's even vaguely worth considering). You just have to hope to pick well, and hopefully sway things to be a bit more trustworthy.

Interestingly, you admit that in your reply, but it doesn't seem to have the effect it seems like it should.

A better example to match your points could be fans of a sports team. They pay a lot of attention to their team, and should be experts in a sense. When asked how good their team is, they will usually say the best (or the worst). When asked why, they usually have arguments that technically should be considered noticeably significant evidence in that direction, but are vastly weaker than what they should be able to come up with if it were true. Which is obvious, since there are far more teams said to be the best (or worst) than could actually be the case. In that circumstance, you should be fairly demanding of the evidence.

In other situations though, it seems like a standard that is really easy to apply much more stringently to positions you don't like than to ones you do, and you likely wouldn't even notice. It is hard to hold arguments you disdain to the same standards as ones you like, even if you are putting in a lot of effort to do so, though in some people the direction is actually reversed, as they worry too much.

Comment by deepthoughtlife on Adversarial epistemology · 2022-08-25T13:25:42.559Z · LW · GW

I am aware of the excuses used to define it as not hearsay, even though it is clearly the same as all other cases of such. Society simply believes it is a valuable enough scenario that it should be included, even though it is still weak evidence.

Comment by deepthoughtlife on The 'Bitter Lesson' is Wrong · 2022-08-25T13:22:10.119Z · LW · GW

I was pretty explicit that scale improves things and eventually surpasses any particular level that you get to earlier with the help of domain knowledge...my point is that you can keep helping it, and it will still be better than it would be with just scale. MuZero is just evidence that scale eventually gets you to the place you already were, because they were trying very hard to get there and it eventually worked.

AlphaZero did use domain insights. Just like AlphaGo. It wasn't self-directed. It was told the rules. It was given a direct way to play games, and told to. It was told how to search. Domain insights in the real world are often simply being told which general strategies will work best. Domain insights aren't just things like 'a knight is worth this many points' in chess, or whatever the human-score equivalent is in Go (which I haven't played). Humans tweaked and altered things until they got the results they wanted from training. If they understood that they were doing so, and accepted it, they could get better results sooner, and much more cheaply.

Also, state of the art isn't the best that can be done.

Comment by deepthoughtlife on Adversarial epistemology · 2022-08-24T21:51:57.405Z · LW · GW

The thing is, no one ever presents the actual strongest version of an argument. Their actions are never the best possible, except briefly, accidentally, and in extremely limited circumstances. I can probably remember how to play an ideal version of the tic-tac-toe strategy that's the reason only children play it, but any game more complicated than that and my play will be subpar. Games are much smaller and simpler things than arguments. Simply noticing that an argument isn't the best it could be is a you thing, because it is always true. Basically no one is a specialist in whatever the perfect argument turns out to be (and people who are will often be wrong). Saying that a correct argument that significantly changes likelihoods isn't real evidence because it could be stronger allows you to always stick with your current beliefs.

Also, if I was a juror, I would like to hear that the accused was overheard telling his buddy that he was out of town the night before, having taken a trip to the city where the murder he is accused of happened. Even though just being one person in that city is an incredibly weak piece of evidence, and it is even weaker for being someone uninvolved in the conversation saying it, it is still valuable to include. (And indeed, such admissions are not considered hearsay in court, even though they clearly are.) There are often cases where the strongest argument alone is not enough, but the weight of all arguments clearly is.

Comment by deepthoughtlife on Nate Soares' Life Advice · 2022-08-23T15:11:05.690Z · LW · GW

Even if they were somehow extremely beneficial normally (which is fairly unlikely), any significant risk of going insane seems much too high. I would posit they have such a risk for exactly the same reason: when using them, you are deliberately routing around very fundamental safety features of your mind.