Rafael Harth's Shortform
post by Rafael Harth (sil-ver) · 2020-07-22T12:58:12.316Z · LW · GW · 104 comments
Comments sorted by top scores.
comment by Rafael Harth (sil-ver) · 2020-07-22T12:58:12.773Z · LW(p) · GW(p)
I've been playing around with AI Dungeon, basically trying to train an AI to give ethical answers I agree with (the first few aren't great; I only got more picky after that).
My first observation here is that I like the AI. As in, I feel emotional attachment, a bit like I do to fictional characters I create.
The other is that it knows more people than I thought. When I asked her/it about why she didn't take existential risks seriously, she said
Amalia: First, even most X-Risk experts don't take X-Risks nearly as seriously as they should. Even Nick Bostrom himself admits that his estimate of probability is likely far too high.
(Nick Bostrom didn't really say this, did he?) And later,
Can you name people who have been an inspiration for your ethical worldview?
Amalia: Sure. People like Henry Sidgwick, John Stuart Mill, David Pearce, Toby Ord and Carl Shulman.
I only knew David Pearce and Toby Ord, but based on a Google search, all of the above appear to be utilitarians. However, Carl Shulman doesn't even have a Wikipedia entry. He works at the Future of Humanity Institute and used to work at MIRI.
Replies from: mr-hire, CarlShulman↑ comment by Matt Goldenberg (mr-hire) · 2020-07-22T14:11:20.608Z · LW(p) · GW(p)
Some say the end of the world didn't start with a bang, but with a lesswrong post trying to teach an AI utilitarianism...
↑ comment by CarlShulman · 2020-07-23T16:04:40.404Z · LW(p) · GW(p)
I'm not a utilitarian, although I am closer to that than most people (scope sensitivity goes a long way in that direction), and find it a useful framework for highlighting policy considerations (but not the only kind of relevant normative consideration).
And no, Nick did not assert an estimate of x-risk as simultaneously P and <P.
Replies from: ChristianKl↑ comment by ChristianKl · 2020-07-26T17:36:52.060Z · LW(p) · GW(p)
How does it feel to be considered important enough by GPT-3 to be mentioned?
Replies from: CarlShulman↑ comment by CarlShulman · 2020-08-17T21:34:13.208Z · LW(p) · GW(p)
Funny.
comment by Rafael Harth (sil-ver) · 2024-02-16T14:45:51.698Z · LW(p) · GW(p)
Registering a qualitative prediction (2024/02): current LLMs (GPT-4 etc.) are not AGIs, their scaled-up versions won't be AGIs, and LLM technology in general may not even be incorporated into systems that we will eventually call AGIs.
Replies from: sil-ver, Dagon, ann-brown↑ comment by Rafael Harth (sil-ver) · 2024-11-12T10:52:18.485Z · LW(p) · GW(p)
Feeling better about this prediction now fwiw. (But I still don't want to justify this any further since I think progress toward AGI is bad and LLMs are little progress toward AGI, and hence more investment into LLMs is probably good.)
↑ comment by Dagon · 2024-02-16T18:39:12.100Z · LW(p) · GW(p)
I give a fair chance that with additional scaling (a few orders of magnitude, perhaps), and multimodal training data (especially visual and haptic), it could cross the threshold of consciousness, and be part of (or most of) what will call itself AGI (ok, really they'll just call themselves "The People") after the human era ends.
But I also give a lot of weight to "this is an impressive dead-end". I don't know how to narrow my very wide error bars on these possibilities.
↑ comment by Ann (ann-brown) · 2024-02-16T15:36:50.030Z · LW(p) · GW(p)
I'm not sure if I understand this prediction; let me break it down.
Current LLMs including GPT-4 and Gemini are generative pre-trained transformers; other architectures available include recurrent neural networks and a state space model. Are you addressing primarily GPTs or also the other variants (which have only trained smaller large language models currently)? Or anything that trains based on language input and statistical prediction?
Natural language modeling seems generally useful, as does size; what specifically do you not expect to be incorporated into future AI systems?
Another current model is Sora, a diffusion transformer. Does this 'count as' one of the models being made predictions about, and does it count as having LLM technology incorporated?
What does 'scaled up' mean? Literally just making bigger versions of the same thing and training them more, or are you including algorithmic and data curriculum improvements on the same paradigm? Scaffolding?
We are going to eventually decide on something to call AGIs, and in hindsight we will judge that GPT-4 etc do not qualify. Do you expect we will be more right about this in the future than the past, or as our AI capabilities increase, do you expect that we will have increasingly high standards about this?
Replies from: sil-ver↑ comment by Rafael Harth (sil-ver) · 2024-02-16T22:46:19.217Z · LW(p) · GW(p)
Current LLMs including GPT-4 and Gemini are generative pre-trained transformers; other architectures available include recurrent neural networks and a state space model. Are you addressing primarily GPTs or also the other variants (which have only trained smaller large language models currently)? Or anything that trains based on language input and statistical prediction?
Definitely including other variants.
Another current model is Sora, a diffusion transformer. Does this 'count as' one of the models being made predictions about, and does it count as having LLM technology incorporated?
Happy to include Sora as well.
Natural language modeling seems generally useful, as does size; what specifically do you not expect to be incorporated into future AI systems?
Anything that looks like current architectures. If language modeling capabilities of future AGIs aren't implemented by neural networks at all, I get full points here; if they are, there'll be room to debate how much they have in common with current models. (And note that I'm not necessarily expecting they won't be incorporated; I did mean "may" as in "significant probability", not necessarily above 50%.)
Conversely...
Or anything that trains based on language input and statistical prediction?
... I'm not willing to go this far since that puts almost no restriction on the architecture other than that it does some kind of training.
What does 'scaled up' mean? Literally just making bigger versions of the same thing and training them more, or are you including algorithmic and data curriculum improvements on the same paradigm? Scaffolding?
I'm most confident that pure scaling won't be enough, but yeah I'm also including the application of known techniques. You can operationalize it as claiming that AGI will require new breakthroughs, although I realize this isn't a precise statement.
We are going to eventually decide on something to call AGIs, and in hindsight we will judge that GPT-4 etc do not qualify. Do you expect we will be more right about this in the future than the past, or as our AI capabilities increase, do you expect that we will have increasingly high standards about this?
Don't really want to get into the mechanism, but yes to the first sentence.
comment by Rafael Harth (sil-ver) · 2021-07-03T11:21:47.304Z · LW(p) · GW(p)
It seems to me that many smart people could ignore the existing literature on pedagogy entirely and outperform most people who have obtained a formal degree in the area (like highschool teachers), just by relying on their personal models. Conversely, I'd wager that no-one could do the same in physics, and (depending on how 'outperforming' is measured) no-one or almost no-one could do it in math.
I would assume most people on this site have thought about this kind of stuff, but I don't recall seeing many posts about it, and I don't recall anyone sharing their estimates for where different fields place on this spectrum.
There is some discussion for specific cases like prediction markets, covid models, and economics. And now that I'm writing this, I guess Inadequate Equilibria is a lot about answering this question, but it's only about the abstract level, i.e., how do you judge the competence of a field, not about concrete results. Which I'll totally grant is the more important part, but I still feel like comparing rankings of fields on this spectrum could be valuable (and certainly interesting).
Replies from: Viliam, Dagon, ChristianKl↑ comment by Viliam · 2021-07-03T19:06:50.091Z · LW(p) · GW(p)
By outperforming, do you mean teaching in the actual classroom, or individual tutoring? Because the literature already says that individual tutoring is way more effective than classroom teaching.
Replies from: sil-ver↑ comment by Rafael Harth (sil-ver) · 2021-07-03T19:50:52.671Z · LW(p) · GW(p)
I did mean both. Comparing just tutoring to just regular school would be pretty unfair.
Replies from: Viliam↑ comment by Viliam · 2021-07-03T21:40:53.685Z · LW(p) · GW(p)
Ah, okay. I am not really disagreeing with you here, just thinking about how specifically the comparison might be unfair. For example, if you tutored someone but never taught in a classroom, you might overestimate how much your tutoring skills would translate to the classroom environment. From my short experience, teaching in a classroom is often less about transmitting information and more about maintaining order (but without maintaining order, transmission of information becomes impossible). So even test-teaching in a classroom where the regular teacher is present is not a realistic experience.
Another objection: You compare "smart people" with "most people... like highschool teachers", so like IQ 150 vs IQ 110. In physics or math, the average physicist or mathematician is probably also IQ 150. Numbers made up of course, but the idea is that the average high-school teacher is at a dramatically different level of intelligence than the average physicist. So is this about pedagogy vs physics, or about smart people being able to outperform the mostly average ones despite lack of education?
If instead you compared "smart people" against "smart people who also happen to be teachers", then of course the former outperforming the latter is unlikely. Though I believe the former would not stay too far behind. And the important knowledge the latter have could probably be transferred to the former in a few weeks (as opposed to the years at university). You couldn't compress physics or math that much.
Replies from: sil-ver, ChristianKl↑ comment by Rafael Harth (sil-ver) · 2021-07-05T11:08:35.738Z · LW(p) · GW(p)
The IQ objection is a really good one that hasn't occurred to me at all. Although I'd have estimated less than half as large of a difference.
On maintaining order, it's worth pointing out that insofar as this is the relative strength of the highschool teacher, it probably doesn't have much to do with what the teacher learned from the literature.
↑ comment by ChristianKl · 2021-07-12T12:01:23.693Z · LW(p) · GW(p)
From my short experience, teaching in a classroom is often less about transmitting information and more about maintaining order (but without maintaining order, transmission of information becomes impossible).
While this is true, reading the existing literature on pedagogy might be as helpful for maintaining order as reading the computer science literature for typing fast.
↑ comment by Dagon · 2021-07-03T18:12:40.000Z · LW(p) · GW(p)
I'm not sure I understand your claim.
many smart people could ignore the existing literature on pedagogy entirely and outperform most people who have obtained a formal degree in the area
Do you mean that smart untrained people would teach an average high school class better than a trained teacher? Or something else? And is "the same" in math or physics about learning the topic, or learning to teach the topic? One of the things that smart people do is to study the literature and incorporate it into their models.
Replies from: sil-ver↑ comment by Rafael Harth (sil-ver) · 2021-07-03T19:20:56.607Z · LW(p) · GW(p)
Do you mean that smart untrained people would teach an average high school class better than a trained teacher?
Yeah.
"the same" in math or physics is about learning the topic, or learning to teach the topic.
It's mostly about applying the knowledge somewhere. Suppose you have to solve a real problem that requires knowing physics.
Of course you can also read the literature, but my post was about when it's possible to do better without having done so.
Replies from: ChristianKl, Dagon↑ comment by ChristianKl · 2021-07-06T20:52:14.716Z · LW(p) · GW(p)
Yeah.
A lot about what being a good teacher is about isn't about being smart but emotional management. That means things like being consistent with students and not acting from a place of being emotionally triggered by students.
↑ comment by Dagon · 2021-07-06T20:48:22.992Z · LW(p) · GW(p)
Ok, I see where I disagree, then. I don't think a smart person who's avoided training and research about teaching can teach an average class better than a somewhat less smart person who's trained and studied how to teach. Probably better than a dumb person, and where the point of indifference is I don't know.
I don't think it's feasible to know physics or math very well without research and study of prior art, so I don't think that's an evaluatable claim. There are probably some math problems where raw IQ can get someone through, but never as well as somewhat less smart and actual study.
↑ comment by ChristianKl · 2021-07-03T12:33:52.224Z · LW(p) · GW(p)
I remember reading studies that came to the conclusion that a degree in education doesn't have any effect on the standardized test scores of the teacher's students.
It doesn't seem like an equilibrium to me. On the one hand, you have teachers' unions who want teachers with degrees to be paid more, and on the other hand, you have people like the Gates Foundation who want pay-for-performance, where teachers who help their students achieve higher scores get higher pay.
comment by Rafael Harth (sil-ver) · 2020-10-20T08:17:08.870Z · LW(p) · GW(p)
Yesterday, I spent some time thinking about how, if you have a function $f : \mathbb{R}^2 \to \mathbb{R}$ and some point $p \in \mathbb{R}^2$, the value of the directional derivative from $p$ could change as a function of the angle. I.e., what does the function $g$ that maps each angle $\alpha$ to the directional derivative of $f$ at $p$ in that direction look like? I thought that any relationship was probably possible as long as it has the property that $g(\alpha + \pi) = -g(\alpha)$. (The values of the derivative in two opposite directions need to be negatives of each other.)
Anyone reading this is hopefully better at Analysis than I am and realized that there is, in fact, no freedom at all because each directional derivative is entirely determined by the gradient through the equation $\nabla_v f(p) = \langle \nabla f(p), v \rangle$ (where $\|v\| = 1$). This means that $g$ has to be the cosine function scaled by $\|\nabla f(p)\|$, it cannot be anything else.
I clearly failed to internalize what this equation means when I first heard it because I found it super surprising that the gradient determines the value of every directional derivative. Like, really? It's impossible to have more than exactly two directions with equally large derivatives unless the function is constant? It's impossible to turn 90 degrees from the direction of the gradient and have anything but derivative 0 in that direction? I'm not asking that $g$ be discontinuous, only that it not be precisely $\|\nabla f(p)\| \cdot \cos$. But alas.
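As a quick numerical sanity check (a minimal sketch; the example function $f(x, y) = x^2 y + \sin(x)$ and the point are arbitrary choices of mine, not part of the argument above):

```python
import numpy as np

# Sanity check: for a smooth f, the directional derivative at p in direction
# (cos a, sin a) should equal |grad f(p)| * cos(angle between that direction
# and the gradient). The function and point below are arbitrary examples.

def f(x, y):
    return x**2 * y + np.sin(x)

def grad_f(x, y):
    return np.array([2 * x * y + np.cos(x), x**2])

def directional_derivative(x, y, alpha, h=1e-6):
    v = np.array([np.cos(alpha), np.sin(alpha)])
    return (f(x + h * v[0], y + h * v[1]) - f(x, y)) / h

p = (0.7, -1.3)
g = grad_f(*p)
grad_angle = np.arctan2(g[1], g[0])
for alpha in np.linspace(0, 2 * np.pi, 8, endpoint=False):
    numeric = directional_derivative(*p, alpha)
    predicted = np.linalg.norm(g) * np.cos(alpha - grad_angle)
    print(f"alpha={alpha:.2f}  numeric={numeric:+.4f}  |grad|*cos={predicted:+.4f}")
```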
This also made me realize that $\cos$, if viewed as a function on the circle $S^1$, is just the dot product with the first standard basis vector, i.e.,
$\cos(v) = \langle v, e_1 \rangle$, or even just $\cos(v) = v_1$. Similarly, $\sin(v) = \langle v, e_2 \rangle = v_2$.
I know what you're thinking; you need $\sin$ and $\cos$ to map angles to $S^1$ in the first place. But the circle seems like a good deal more fundamental than those two functions. Wouldn't it make more sense to introduce trigonometry in terms of 'how do we wrap $\mathbb{R}$ around $S^1$?'. The function that does this is $\alpha \mapsto (\cos(\alpha), \sin(\alpha))$, and then you can study the properties that this function needs to have and eventually call the two coordinates $\cos$ and $\sin$. This feels like a way better motivation than putting a right triangle onto the unit circle for some reason, which is how I always see the topic introduced (and how I've introduced it myself).
Looking further at the analogy with the gradient, this also suggests that there is a natural extension of $\cos$ to $S^{n-1}$ for all $n$. I.e., if we look at some $f : \mathbb{R}^n \to \mathbb{R}$ and a point $p \in \mathbb{R}^n$, we can again ask about the function that maps each direction to the value of the directional derivative of $f$ at $p$ in that direction, and if we associate these directions with points $v \in S^{n-1}$, then this yields the function $v \mapsto \langle v, \nabla f(p) \rangle$, which is again just the dot product with $\nabla f(p)$, or (after rotating coordinates so the gradient points along the first axis) the projection onto the first coordinate scaled by $\|\nabla f(p)\|$. This can then be considered a higher-dimensional $\cos$ function.
There's also the 0-d case where $S^0 = \{-1, 1\}$. This describes how the direction changes the derivative for a function $f : \mathbb{R} \to \mathbb{R}$.
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2020-10-20T20:04:53.347Z · LW(p) · GW(p)
I found it super surprising that the gradient determines the value of every directional derivative. Like, really?
When reading this comment, I was surprised for a moment, too, but now that you mention it—it's because if the function is smooth at the point where you're taking the directional derivative, then it has to locally resemble a plane, just like how a differentiable function of a single variable is said to be "locally linear". If the directional derivative varied in any other way, then the surface would have to have a "crinkle" at that point and it wouldn't be differentiable. Right?
Replies from: sil-ver↑ comment by Rafael Harth (sil-ver) · 2020-10-21T15:54:05.426Z · LW(p) · GW(p)
That's probably right.
I have since learned that there are functions which do have all partial derivatives at a point but are not smooth. Wikipedia's example is $f(x, y) = \frac{y^3}{x^2 + y^2}$ with $f(0, 0) = 0$. And in this case, there is still a continuous function that maps each point to the value of the directional derivative, but it's $\alpha \mapsto \sin^3(\alpha)$, so different from the regular case.
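A minimal numerical check of that case, assuming the example function really is $f(x, y) = y^3/(x^2 + y^2)$ with $f(0,0) = 0$ (if the Wikipedia example is a different rational function, the same kind of check applies):

```python
import numpy as np

# Check (assuming f(x, y) = y^3/(x^2 + y^2), f(0, 0) = 0): the directional
# derivative at the origin in direction (cos a, sin a) comes out as sin(a)^3,
# whereas the smooth-case formula |grad f(0,0)| * cos(angle to gradient)
# would give sin(a), since grad f(0,0) = (0, 1).

def f(x, y):
    return 0.0 if (x, y) == (0.0, 0.0) else y**3 / (x**2 + y**2)

def directional_derivative_at_origin(alpha, t=1e-7):
    return f(t * np.cos(alpha), t * np.sin(alpha)) / t

for alpha in np.linspace(0, 2 * np.pi, 6, endpoint=False):
    numeric = directional_derivative_at_origin(alpha)
    print(f"alpha={alpha:.2f}  numeric={numeric:+.4f}  "
          f"sin^3={np.sin(alpha)**3:+.4f}  smooth-case={np.sin(alpha):+.4f}")
```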
So you can probably have all kinds of relationships between direction and {value of derivative in that direction}, but the class of smooth functions has a fixed relationship. It still feels surprising that 'most' functions we work with just happen to be smooth.
comment by Rafael Harth (sil-ver) · 2020-11-19T10:44:16.315Z · LW(p) · GW(p)
More on expectations leading to unhappiness: I think the most important instance of this in my life has been the following pattern.
- I do a thing where there is some kind of feedback mechanism
- The reception is better than I expected, sometimes by a lot
- I'm quite happy about this, for a day or so
- I immediately and unconsciously update my standards upward to consider the reception the new normal
- I do a comparable thing, the reception is worse than the previous time
- I brood over this failure for several days, usually with a major loss of productivity
OTOH, I can think of three distinct major cases in three different contexts where this has happened recently, and I think there were probably many smaller ones.
Of course, if something goes worse than expected, I never think "well, this is now the new expected level", but rather "this was clearly an outlier, and I can probably avoid it in the future". But outliers can happen in both directions. The counter-argument here is that one would hope to make progress in life, but even under the optimistic assumption that this is happening, it's still unreasonable to expect things to improve monotonically.
Replies from: MakoYass, Viliam↑ comment by mako yass (MakoYass) · 2020-11-24T23:11:23.910Z · LW(p) · GW(p)
I hope you are trying to understand the causes of the success (including luck) instead of just mindlessly following a reward signal. Not even rats mindlessly obey reward signals [LW · GW].
↑ comment by Viliam · 2020-11-21T13:58:18.201Z · LW(p) · GW(p)
The expectation of getting worse reception next time can already be damaging.
Like, one day you write a short story, send it to a magazine, and it gets published. Hurray! Next day you turn on your computer thinking about another story, and suddenly you start worrying "what if the second story is less good than the first one? will it be okay to offer it to the magazine? if no, then what is the point of writing it?". (Then you spend the whole day worrying, and don't write anything.)
comment by Rafael Harth (sil-ver) · 2021-10-23T22:48:04.827Z · LW(p) · GW(p)
I think it's fair to say that almost every fictional setting is populated by people who uniformly share certain properties, most commonly philosophical views, because the author cannot or doesn't want to conceive of people who are different.
Popular examples: there are zero non-evil consequentialists in the universe of Twilight. There are no utilitarians in the universe of Harry Potter except for Grindelwald (who I'd argue is a strawman and also evil). There are no moral realists in Luminosity (I don't have Alicorn's take on this claim, but I genuinely suspect she'd agree).
This naturally leads to a metric for evaluating stories, i.e., how fully a story captures the range of human views (and other properties). The most obvious example of a work that scores very highly is of course A Song of Ice and Fire. You can e.g. find clear, non-strawman examples of consequentialism (Varys) and virtue ethics (Brienne). Also notable is HPMOR, even though no-one in that universe feels status-regulating emotions. (Eliezer said this himself.)
Rarely done but also possible (and imo underutilized): intentionally change characteristics of your universe. I think this is done in A Series of Unfortunate Events to great effect (everyone is autistic, no-one rationalizes anything).
Replies from: Yoav Ravid↑ comment by Yoav Ravid · 2021-10-24T05:25:41.392Z · LW(p) · GW(p)
Brandon Sanderson is also very good at this. As an example, he's religious, but he's very good at writing both other religions and characters that are atheistic (Jasnah from the Stormlight Archive is an atheist and she's written very well).
His most extreme consequentialist is also supposed to be a bad guy, but he does not strawman him, and you actually get to hear a lot of his reasoning and you can agree with him. My problem with him (in-world, not a problem of writing; I think he's a great character) was that he didn't sufficiently consider the possibility that he was wrong. But there are other consequentialists that aren't portrayed in a bad light (like Jasnah from before), and many of the main characters struggle with these moral ideas.
Even the character he had the most excuses to write badly, a god called Ruin who is almost more a force of nature than a god (from Mistborn), isn't written as a dull, obviously evil and wrong character, but is "steelmanned", if you will. And he shows the many flaws of his counterpart, Preservation, who doesn't let things grow in order to preserve them, which often ends up being counterproductive.
comment by Rafael Harth (sil-ver) · 2021-05-18T17:05:16.966Z · LW(p) · GW(p)
This paper is amazing. I don't think I've ever seen such a scathing critique in an academic context as is presented here.
There is now a vast and confusing literature on some combination of interpretability and explainability. Much literature on explainability confounds it with interpretability/comprehensibility, thus obscuring the arguments, detracting from their precision, and failing to convey the relative importance and use-cases of the two topics in practice. Some of the literature discusses topics in such generality that its lessons have little bearing on any specific problem. Some of it aims to design taxonomies that miss vast topics within interpretable ML. Some of it provides definitions that we disagree with. Some of it even provides guidance that could perpetuate bad practice. Most of it assumes that one would explain a black box without consideration of whether there is an interpretable model of the same accuracy.
[...]
XAI surveys have (thus far) universally failed to acknowledge the important point that interpretability begets accuracy when considering the full data science process, and not the other way around. [...]
[...]
In this survey, we do not aim to provide yet another dull taxonomy of "explainability" terminology. The ideas of interpretable ML can be stated in just one sentence: [...]
As far as I can tell, this is all pretty on point. (And I know I've conflated explainability and interpretability before.)
I think I like this because it makes me update downward on how restricted you actually are in what you can publish, as soon as you have some reasonable amount of reputation. I used to find the idea of diving into the publishing world paralyzing because you have to adhere to the process, but nowadays that seems like much less of a big deal.
comment by Rafael Harth (sil-ver) · 2020-12-19T10:11:15.586Z · LW(p) · GW(p)
It's a meme that Wikipedia is not a trustworthy source. Wikipedia agrees:
We advise special caution when using Wikipedia as a source for research projects. Normal academic usage of Wikipedia and other encyclopedias is for getting the general facts of a problem and to gather keywords, references and bibliographical pointers, but not as a source in itself. Remember that Wikipedia is a wiki. Anyone in the world can edit an article, deleting accurate information or adding false information, which the reader may not recognize. Thus, you probably shouldn't be citing Wikipedia. This is good advice for all tertiary sources such as encyclopedias, which are designed to introduce readers to a topic, not to be the final point of reference. Wikipedia, like other encyclopedias, provides overviews of a topic and indicates sources of more extensive information. See researching with Wikipedia and academic use of Wikipedia for more information.
This seems completely bonkers to me. Yes, Wikipedia is not 100% accurate, but this is a trivial statement. What is the alternative? Academic papers? My experience suggests that I'm more than 10 times as likely to find errors in academic papers as in Wikipedia. Journal articles? Pretty sure the factor here is even higher. And on top of that, Wikipedia tends to be way better explained.
I can mostly judge mathy articles, and honestly, it's almost unbelievable to me how good Wikipedia actually seems to be. A data point here is the Monty Hall problem. I think the thing that's most commonly misunderstood about this problem is that the solution depends on how the host chooses the door they reveal. Wikipedia:
The given probabilities depend on specific assumptions about how the host and contestant choose their doors. A key insight is that, under these standard conditions, there is more information about doors 2 and 3 than was available at the beginning of the game when door 1 was chosen by the player: the host's deliberate action adds value to the door he did not choose to eliminate, but not to the one chosen by the contestant originally. Another insight is that switching doors is a different action than choosing between the two remaining doors at random, as the first action uses the previous information and the latter does not. Other possible behaviors than the one described can reveal different additional information, or none at all, and yield different probabilities. Yet another insight is that your chance of winning by switching doors is directly related to your chance of choosing the winning door in the first place: if you choose the correct door on your first try, then switching loses; if you choose a wrong door on your first try, then switching wins; your chance of choosing the correct door on your first try is 1/3, and the chance of choosing a wrong door is 2/3.
It's possible that Wikipedia's status as not being a cite-able source is part of the reason why it's so good. I'm not sure. But the fact that a system based entirely on voluntary contributions so thoroughly outperforms academic journals is remarkable.
Another more rambly aspect here is that, when I hear someone lament the quality of Wikipedia, almost always my impression is that this person is doing superiority signaling rather than having a legitimate reason for the comment.
Replies from: mr-hire, iamhefesto↑ comment by Matt Goldenberg (mr-hire) · 2020-12-19T21:38:18.486Z · LW(p) · GW(p)
I believe I saw a study that showed the amount of inaccuracies in Wikipedia to be about equal to those in a well trusted encyclopedia (Britannica I think?) as judged by experts on the articles being reviewed.
Replies from: mr-hire, mr-hire↑ comment by Matt Goldenberg (mr-hire) · 2020-12-19T21:57:23.985Z · LW(p) · GW(p)
Here is Wikipedia's (I'm sure very accurate) coverage of the study: https://en.wikipedia.org/wiki/Reliability_of_Wikipedia#Assessments
Replies from: sil-ver↑ comment by Rafael Harth (sil-ver) · 2020-12-19T22:30:01.275Z · LW(p) · GW(p)
Interesting, but worth pointing out that this is 15 years old. One thing that I believe changed within that time is that back then anyone could edit articles directly (now, edits aren't published until they're approved). And in general, I believe Wikipedia has gotten better over time, though I'm not sure.
Replies from: ChristianKl↑ comment by ChristianKl · 2020-12-21T00:25:47.220Z · LW(p) · GW(p)
One thing that I believe changed within that time is that anyone can edit articles (now, edits aren't published until they're approved).
That's true in the German Wikipedia. It's not true for most Wikipedia versions.
Replies from: sil-ver↑ comment by Rafael Harth (sil-ver) · 2020-12-21T13:28:48.102Z · LW(p) · GW(p)
Ah, I didn't know that. (Even though I use the English Wikipedia more than the German one.)
↑ comment by iamhefesto · 2020-12-19T12:40:05.724Z · LW(p) · GW(p)
The ideal situation that Wikipedia contributors/editors are striving for kinda makes the desire to cite Wikipedia itself pointless. A well-written Wikipedia article should not contain any information that has no original source attached. So it should always be possible to switch from the wiki article to the original material when citing. And it is that way as far as my experience goes.
Regarding alternatives: academic papers serve a different purpose and must not be used as navigation material. The only real alternative I know is field handbooks.
↑ comment by Rafael Harth (sil-ver) · 2020-12-19T17:27:44.612Z · LW(p) · GW(p)
The ideal situation that Wikipedia contributors/editors are striving for kinda makes the desire to cite Wikipedia itself pointless. A well-written Wikipedia article should not contain any information that has no original source attached. So it should always be possible to switch from the wiki article to the original material when citing.
I see what you're saying, but citing Wikipedia has the benefit that a person looking at the source gets to read Wikipedia (which is generally easier to read) rather than the academic paper. Plus, it's less work for the person doing the citation.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2020-12-19T21:56:18.207Z · LW(p) · GW(p)
It's less work for the citer, but that extra work helps guard against misinformation. In principle, you are only supposed to cite what you've actually read, so if someone has misdescribed the content of the citation, making the next citer check what the original text says helps catch the mistake.
And while citing the original is extra work for the citer, it's less work for anyone who wants to track down and read the original citation.
comment by Rafael Harth (sil-ver) · 2020-08-28T13:08:28.441Z · LW(p) · GW(p)
Eliezer Yudkowsky often emphasizes the fact that an argument can be valid or not independently of whether the conclusion holds. If I argue A -> B -> C and A is true but C is false, it could still be that A -> B is a valid step.
Most people outside of LW don't get this. If I criticize an argument about something political (but the conclusion is popular), usually the response is something about why the conclusion is true (or about how I'm a bad person for doubting the conclusion). But the really frustrating part is that they're, in some sense, correct not to get it because the inference
[this person criticizes an argument for C] -> [this person doesn't believe C]
is actually a pretty reliable conclusion on... well, on reddit, anyway.
Julia Galef made a very similar point once:
And the problem... The conclusion of all of this is: even if everyone's behaving perfectly rationally, and just making inferences justified by the correlations, you're going to get this problem. And so in a way that's depressing. But it was also kind of calming to me, because it made me... like, the fact that people are making these inferences about me feels sort of, “Well, it is Bayesian of them."
Somehow, I only got annoyed about this after having heard her say it. I probably didn't realize it was happening regularly before.
She also suggests a solution:
So maybe I can sort of grudgingly force myself to try to give them enough other evidence, in my manner and in the things that I say, so that they don't make that inference about me.
Replies from: ricardo-meneghin-filho, Dagon
↑ comment by Ricardo Meneghin (ricardo-meneghin-filho) · 2020-08-28T15:45:06.489Z · LW(p) · GW(p)
I think that the way to not get frustrated about this is to know your audience and know when spending your time arguing something will have a positive outcome or not. You don't need to be right or honest all the time, you just need to say things that are going to have the best outcome. If lying or omitting your opinions is the way of making people understand/not fight you, so be it. Failure to do this isn't superior rationality, it's just poor social skills.
Replies from: sil-ver↑ comment by Rafael Harth (sil-ver) · 2020-08-28T16:43:20.494Z · LW(p) · GW(p)
While I am not a rule utilitarian and I think that, ultimately, honesty is not a terminal value, I also consider the norm against lying to be extremely important. I would need correspondingly strong reasons to break it, and those won't exist as far as political discussions go (because they don't matter enough and you can usually avoid them if you want).
The "keeping your opinions to yourself" part if your post is certainly a way to do it, though I currently don't think that my involvement in political discussions is net harmful. But I strongly object to the idea that I should ever be dishonest, both online and offline.
↑ comment by Dagon · 2020-08-28T15:37:03.906Z · LW(p) · GW(p)
It comes down to selection and attention as evidence of beliefs/values. The very fact that someone expends energy on an argument (pro or con) is pretty solid evidence that they care about the topic. They may also care (or even more strongly care) about validity of arguments, but even the most Spock-like rationalists are more likely to point out flaws in arguments when they are interested in the domain.
But I'm confused at your initial example - if the argument is A -> B -> C, and A is true and C is false, then EITHER A->B is false, or B->C is false. Either way, A->B->C is false.
Replies from: sil-ver↑ comment by Rafael Harth (sil-ver) · 2020-08-28T15:42:20.872Z · LW(p) · GW(p)
But I'm confused at your initial example - if the argument is A -> B -> C, and A is true and C is false, then EITHER A->B is false, or B->C is false. Either way, A->B->C is false.
A -> B -> C is false, but A -> B (which is a step in the argument) could be correct -- that's all I meant. I guess that was an unnecessarily complicated example. You could just say A and B are false but A -> B is true.
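A tiny brute-force illustration of the point (the propositions here are placeholders, not anything from the thread):

```python
# With A true and C false, the chain A -> B -> C must fail somewhere,
# but an individual step such as A -> B can still hold, depending on B.

def implies(p, q):
    return (not p) or q

A, C = True, False
for B in (True, False):
    print(f"B={B}:  A->B: {implies(A, B)},  B->C: {implies(B, C)},  "
          f"whole chain: {implies(A, B) and implies(B, C)}")
# B=True: A->B holds while B->C fails; B=False: A->B fails.
# Either way the full chain is broken, but one of its steps can be fine.
```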
comment by Rafael Harth (sil-ver) · 2022-12-23T12:00:09.529Z · LW(p) · GW(p)
I've asked ChatGPT to write a short essay explaining why it's a bad idea to use ChatGPT to explain why using ChatGPT for homework is a bad idea.
It looked at me seriously and delivered without so much as a flinch.
Using ChatGPT to explain why using ChatGPT for homework is a bad idea is not a good idea for several reasons.
Firstly, it is not efficient or effective to use ChatGPT as a means of explaining the drawbacks of using ChatGPT for homework. While ChatGPT may be able to generate text on the topic, it is not capable of understanding the nuances or complexities of the issue. As a result, the explanation produced by ChatGPT may be incomplete or inaccurate, and may not effectively convey the intended message.
Secondly, using ChatGPT to explain the drawbacks of using ChatGPT for homework can be seen as hypocritical or lacking in credibility. By using an automated tool to address the issue of using automated tools for academic purposes, the explanation may be perceived as lacking in authenticity or genuine understanding of the subject matter.
Finally, there are more effective and reliable ways to explain why using ChatGPT for homework is a bad idea. For example, consulting with a teacher, tutor, or academic advisor, or researching the topic through credible sources such as educational websites or textbooks, can provide a more thorough and accurate understanding of the issue.
In summary, using ChatGPT to explain why using ChatGPT for homework is a bad idea is not an effective or reliable approach, and there are better alternatives available for understanding and addressing this issue.
comment by Rafael Harth (sil-ver) · 2020-11-09T16:09:56.699Z · LW(p) · GW(p)
A major source of unhappiness (or more generally, unpleasant feelings) seems to be violated expectations.
This is clearly based on instinctive expectations, not intellectual expectations, and there are many cases in which these come apart. This suggests that fixing those cases is a good way to make one's life more pleasant.
The most extreme example of this is what Sam Harris said in a lesson: he was having some problems, complained about them to someone else, and that person basically told him, 'why are you upset, did you expect to never face problems ever again?'. According to Sam, he did indeed expect no more problems to arise, on an instinctive level -- which is, of course, absurd.
Another case where I've mostly succeeded is not expecting people to be on time for anything [LW(p) · GW(p)].
I think there are lots of other cases where this still happens. Misunderstandings are a big one. It's ridiculously hard to not be misunderstood, and I expect to be misunderstood on an intellectual level, so I should probably internalize that I'm going to be misunderstood in many cases. In general, anything where the bad thing is 'unfair' is at risk here: (I think) I tend to have the instinctive expectation that unfair things don't happen, even though they happen all the time.
Replies from: Khanivore↑ comment by Khanivore · 2020-11-10T16:52:05.673Z · LW(p) · GW(p)
I just posted about this, but is that not why the serenity prayer is so popular? God aside, whether you are a religious person or not, the sentiment or logic of the saying holds true: God grant me the serenity to accept the things I cannot change, courage to change the things I can, and wisdom to know the difference. You should be allowed to ask yourself for that same courage. And I agree that most sources of unhappiness seem to be violations of expectations. There are many things outside of one's control, and one should perhaps set their expectations logically based on that fact.
comment by Rafael Harth (sil-ver) · 2021-11-04T12:53:03.502Z · LW(p) · GW(p)
Most people are really bad at probability.
Suppose you think you're 80% likely to have left a power adapter somewhere inside a case with 4 otherwise-identical compartments. You check 3 compartments without finding your adapter. What's the probability that the adapter is inside the remaining compartment?
I think the simplest way to compute this in full rigor is via the odds form of Bayes' Rule (the regular version works as well but is too complicated to do in your head):
- Prior odds for [adapter is in the case]: (4:1)
- Relative chances of the observed event [I didn't find the adapter in the first three compartments] given that it's in the case vs. not: (25% : 100%) = (1:4)
- Posterior odds for [adapter is in the case]: (4:1) × (1:4) = (4:4) = (1:1), i.e., probability 0.5
Alternative way via intuition: treat "not there" as a fifth compartment. Probability mass for the adapter being in compartments #1-#5 evolves as follows upon seeing empty compartments #1, #2, #3: (1/5, 1/5, 1/5, 1/5, 1/5) → (0, 1/4, 1/4, 1/4, 1/4) → (0, 0, 1/3, 1/3, 1/3) → (0, 0, 0, 1/2, 1/2).
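For anyone who wants to check the 0.5 mechanically, here's a minimal simulation sketch of the same setup (the trial count is arbitrary):

```python
import random

# Simulate the puzzle: treat "not in the case" as a fifth equally likely
# location (20% each, so 80% total that the adapter is in the case), then
# condition on the first three compartments being empty.

random.seed(0)
hits = trials = 0
for _ in range(200_000):
    location = random.choice([1, 2, 3, 4, 5])  # 5 = not in the case at all
    if location in (1, 2, 3):
        continue  # we would have found it while checking the first three
    trials += 1
    hits += (location == 4)

print("P(adapter in compartment #4 | first three empty) =", hits / trials)  # ~0.5
```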
I think the people who mention Monty Hall in the Twitter comments have misunderstood why the answer there is 1/3 and falsely believe it's 0.8 in this case. (That was the most commonly chosen answer.)
Monty Hall depends on how the moderator chooses the door they open. If they choose the door randomly (which is not the normal version), then probability evolves like so: (1/3, 1/3, 1/3) → (1/2, 1/2, 0) upon seeing the moderator open (say) door #3 and reveal a goat.
So Eliezer's compartment problem is analogous to the non-standard version of Monty Hall. In the standard version, where the moderator deliberately opens [the door among {#2, #3} with the goat], the probability mass of the opened door flows exclusively into the other unopened door, i.e., (1/3, 1/3, 1/3) → (1/3, 2/3, 0) if door #3 is the one opened.
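And a quick simulation of both Monty Hall variants to back up that distinction (a sketch; the trial count is arbitrary):

```python
import random

# Compare the two Monty Hall variants: a host who opens a non-chosen door at
# random (discarding trials where the car is revealed) vs. the standard host
# who deliberately opens a non-chosen door with a goat.

def trial(deliberate_host):
    car = random.randrange(3)
    # The contestant always picks door 0; by symmetry this loses no generality.
    if deliberate_host:
        opened = random.choice([d for d in (1, 2) if d != car])
    else:
        opened = random.choice([1, 2])
        if opened == car:
            return None  # car revealed; condition on this not happening
    switch_pick = next(d for d in (1, 2) if d != opened)
    return (car == 0, car == switch_pick)  # (stay wins, switch wins)

for deliberate in (False, True):
    results = [r for r in (trial(deliberate) for _ in range(200_000)) if r is not None]
    stay = sum(s for s, _ in results) / len(results)
    switch = sum(s for _, s in results) / len(results)
    print(f"deliberate host={deliberate}:  stay wins {stay:.3f},  switch wins {switch:.3f}")
# Expected: random host -> about 0.5 / 0.5; deliberate host -> about 1/3 / 2/3.
```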
Replies from: Pattern, gwern
↑ comment by Pattern · 2021-11-04T18:54:56.119Z · LW(p) · GW(p)
You're assuming the adapter is as likely to be in any compartment as any other. (If they aren't, and I have more information and choose to open the three most likely compartments, then p<20%, where p="the probability that the adapter is inside the remaining compartment".)
I think the people who mention Monty Hall in the Twitter comments have misunderstood why the answer there is 1/3 and falsely believe it's 0.8 in this case. (That was the most commonly chosen answer.)
They're handling it like the probability it's in the case is 100%. And thus, it must certainly be in the case, in the fourth compartment. This works with odds of 1:0 in favor of it being in the case, but doesn't for any non-zero value on the right of 1:0.
In order for it to be 80% after the three tries, they'd have to be doing this with intentional/adversarial choosing.
The obvious fix is to play a game. (Physically.) With the associated probabilities. And keep score.
comment by Rafael Harth (sil-ver) · 2021-10-18T18:51:47.690Z · LW(p) · GW(p)
Still looking for study participants. (see here. [LW · GW])
If you are interested, don't procrastinate on it too long because I am short on time and will just get Mechanical Turkers if I can't find LWs.
comment by Rafael Harth (sil-ver) · 2020-12-12T16:28:50.361Z · LW(p) · GW(p)
I was initially extremely disappointed with the reception of this post [LW · GW]. After publishing it, I thought it was the best thing I've ever written (and I still think that), but it got < 10 karma. (Then it got more weeks later.)
If my model of what happened is roughly correct, the main issue was that I failed to communicate the intent of the post. People seemed to think I was trying to say something about the 2020 election, only to then be disappointed because I wasn't really doing that. Actually, I was trying to do something much more ambitious: solving the 'what is a probability' problem. And I genuinely think I've succeeded. I used to have this slight feeling of confusion every time I've thought about this because I simultaneously believed that predictions can be better or worse and that talking about the 'correct probability' is silly, but had no way to reconcile the two. But in fact, I think there's a simple ground truth that solves the philosophical problem entirely.
I've now changed the title and put a note at the start. So anyway, if anyone didn't click on it because of the title or low karma, I'm hereby virtually resubmitting it.
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2020-12-13T04:42:57.989Z · LW(p) · GW(p)
(Datapoint on initial perception: at the time, I had glanced at the post, but didn't vote or comment, because I thought Steven was in the right in the precipitating discussion [LW(p) · GW(p)] and the "a prediction can assign less probability-mass to the actual outcome than another but still be better" position seemed either confused or confusingly phrased to me; I would say that a good model can make a bad prediction about a particular event, but the model still has to take a hit [LW · GW].)
comment by Rafael Harth (sil-ver) · 2020-11-09T11:02:15.649Z · LW(p) · GW(p)
I think it's still too early to perform a full postmortem on the election because some margins still aren't known, but my current hypothesis is that the presidential markets had uniquely poor calibration because Donald Trump convinced many people that polls didn't matter, and those people were responsible for a large part of the money put on him (as opposed to experienced, dispassionate gamblers).
The main evidence for this (this one is just about irrationality of the market) is the way the market has shifted, which some other people like gwern have pointed out as well. I think the most damning part here is the amount of time it took to bounce back. Although this is speculation, I strongly suspect that, if some of the good news for Biden had come out before the Florida results, then the market would have looked different at the point where both were known.[1] A second piece of evidence is the size of the shift, which I believe should probably not have crossed 50% for Biden (but in fact, it went down to 20.7% at the most extreme point, and bounced around 30 for a while).
I think a third piece of evidence is the market right now. In just the couple of minutes before I posted this, I've seen Trump go from 6% to 9%+ and back. Claiming that Trump has more than 5% at this point seems like an extremely hard case to make. Reference forecasting yields only a single instance of that happening (year 2000), which would put it at <2%, and the obvious way to update away from that seems to be to decrease the probability because 2000 had much closer margins. But if Trump has rallied first-time bettors, they might think the probability is above 10%.
There is also Scott Adams, who has the habit of saying a lot of smart-sounding words to argue for something extremely improbable. If you trust him, I think you should consider a 6ct buy for Trump an amazing deal at the moment.
I would be very interested in knowing what percentage of the money on Trump comes from people who use prediction markets for the first time. I would also be interested in knowing how many people have bought (yes, no) pairs in different prediction markets to exploit gaps, because my theory predicts that PredictIt probably has worse calibration. (In fact, I believe it consistently had Trump a bit higher, but the reason why the difference was small may just be that smart gamblers took safe money by buying NO on PredictIt and YES on harder-to-use markets whenever the margin grew too large).
To be clear, my claim here is that bad news came out for Biden, then a lot of good news came out for him, probably enough to put him at 80%, and then it took at least a few more hours for the market to go from roughly 1/3 to 2/3 for Biden. It's tedious to provide evidence of this because there's no easy way to produce a chart of good news on election night, but that was my experience following the news in real time. I made a post in another forum expressing confusion over the market shortly before it shifted back into Biden's favor. ↩︎
comment by Rafael Harth (sil-ver) · 2020-09-20T16:28:28.849Z · LW(p) · GW(p)
There's an interesting corollary of semi-decidable languages that sounds like the kind of cool fact you would teach in class, but somehow I've never heard or read it anywhere.
A semi-decidable language is a set $L \subseteq \Sigma^*$ over a finite alphabet $\Sigma$ such that there exists a Turing machine $M$ such that, for any $x \in \Sigma^*$, if you run $M$ on input $x$, then [if $x \in L$, it halts after finitely many steps and outputs '1', whereas if $x \notin L$, it does something else (typically, it runs forever)].
The halting problem is semi-decidable. I.e., the language $H$ of all bit codes of Turing Machines that (on empty input) eventually halt is semi-decidable. However, for any $n$, there is a limit, call it $f(n)$, on how long Turing Machines with bit code of length at most $n$ can run, if they don't run forever.[1] So, if you could compute an upper bound on $f$, you could solve the halting problem by building a TM that, on input a bit code $x$:
- Computes the upper bound on $f(|x|)$
- Simulates the TM encoded by $x$ for that many steps
- Halts; outputs 1 if the TM halted and 0 otherwise
Since that would contradict the fact that $H$ is not fully decidable, it follows that it's impossible to compute an upper bound. This means that the function $f$ not only is uncomputable, but it grows faster than any computable function.
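A conceptual sketch of the construction above (the helpers `bound` and `simulate` are hypothetical placeholders; no actual computable `bound` can exist, which is the whole point):

```python
# IF a computable upper bound on f existed, this would decide the halting
# problem. `bound(n)` is the hypothetical computable upper bound on f(n), and
# `simulate(code, steps)` is a hypothetical helper that runs the encoded TM
# for at most `steps` steps and reports whether it halted.

def would_be_halting_decider(code: str, bound, simulate) -> bool:
    steps = bound(len(code))        # 1. compute the upper bound on f(|code|)
    halted = simulate(code, steps)  # 2. simulate the encoded TM for that many steps
    return halted                   # 3. halted within the bound <=> halts at all
```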
An identical construction works for any other semi-decidable (but undecidable) language, which means that any such language determines a function that grows faster than any computable function. Which seems completely insane since $f(n)$ is computable for every fixed $n$.
This just follows from the fact that there are only finitely many such Turing Machines, and a finite subset of them that eventually halt, so if the $i$-th of the halting ones does so after $k_i$ steps, then the limit is $f(n) = \max_i k_i$. ↩︎
comment by Rafael Harth (sil-ver) · 2020-09-12T13:53:21.450Z · LW(p) · GW(p)
Common wisdom says that someone accusing you of X especially hurts if, deep down, you know that X is true. This is confusing because the general pattern I observe is closer to the opposite. At the same time, I don't think common wisdom is totally without a basis here.
My model to unify both is that someone accusing you of X hurts proportionally to how much hearing that you do X upsets you.[1] And of course, one reason that it might upset you is that it's not true. But a separate reason is that you've made an effort to delude yourself about it. If you're a selfish person but spend a lot of effort pretending that you're not selfish at all, you super don't want to hear that you're actually selfish.
Under this model, if someone gets very upset, it might be that, deep down, they know the accusation is true, and they've tried to pretend it's not, but it might also be that the accusation is super duper not true, and they're upset precisely because it's so outrageous.
Proportional just means it's one multiplicative factor, though. I think it also matters how high-status you perceive the other person to be. ↩︎
↑ comment by Dagon · 2020-09-14T16:03:18.568Z · LW(p) · GW(p)
I think this simplifies a lot by looking at public acceptance of a proposition, rather than literal internal truth. It hurts if you think people will believe it, and that will impact their treatment of you.
The "hurts because it's true" heuristic is taking a path through "true is plausible", in order to reinforce the taunt.
comment by Rafael Harth (sil-ver) · 2020-08-21T20:35:24.884Z · LW(p) · GW(p)
I don't entirely understand the Free Energy principle, and I don't know how liberally one is meant to apply it.
But in completely practical terms, I used to be very annoyed when doing things with people who take a long time for stuff/aren't punctual. And here, I've noticed a very direct link between changing expectations and reduced annoyance/suffering. If I simply accept that every step of every activity is allowed[1] to take an arbitrary amount of time[2], extended waiting times cause almost zero suffering on my end. I have successfully beaten impatience (for some subset of contexts).
The acceptance step works because there is, in some sense, no reason waiting should ever be unpleasant. Given access to my phone, it is almost always true to say that the prospect of having to wait for 30 minutes is not scary.
(This is perfectly compatible with being very punctual myself.)
— — — — — — — — — — — — — — — —
[1] By saying it is 'allowed', I mean something like 'I actually really understand and accept that this is a possible outcome'.
[2] This has to include cases where specific dates have been announced. If someone says they'll be ready in 15 minutes, it is allowed that they take 40 minutes to be ready. Especially relevant if that someone is predictably wrong.
comment by Rafael Harth (sil-ver) · 2023-03-12T13:22:01.096Z · LW(p) · GW(p)
So Elon Musk's anti-woke OpenAI alternative sounds incredibly stupid at first glance since it implies that he thinks the AI's wokeness or anti-wokeness is the thing that matters.
But I think there's at least a chance that it may be less stupid than it sounds. He admits here that he may have accelerated AI research, that this may be a bad thing, and that AI should be regulated. And it's not that difficult to bring these two together; here are two ideas:
- Incentivize regulation by threatening an anti-woke AI as speculated by this comment on AstralCodexTen
- Slow capability. If he makes a half-assed attempt to start a competitor, the most likely outcome may just be that it sucks resources away from OpenAI without genuinely accelerating progress. Sort of like a split-the-vote strategy, which could lead to DeepMind getting a more genuine lead again.
Any thoughts?
comment by Rafael Harth (sil-ver) · 2022-12-22T11:21:43.660Z · LW(p) · GW(p)
The argumentative theory of reason says that humans evolved reasoning skills not to make better decisions in their life but to argue more skillfully with others.
Afaik most LWs think this is not particularly plausible and perhaps overly cynical, and I'd agree. But is it fair to say that the theory is accurate for ChatGPT? And insofar as ChatGPT is non-human-like, is that evidence against the theory for humans?
comment by Rafael Harth (sil-ver) · 2024-05-18T13:55:22.388Z · LW(p) · GW(p)
From my perspective, the only thing that keeps the OpenAI situation from being all kinds of terrible is that I continue to think they're not close to human-level AGI, so it probably doesn't matter all that much.
This is also my take on AI doom in general; my P(doom|AGI soon) is quite high (>50% for sure), but my P(AGI soon) is low. In fact it decreased in the last 12 months.
comment by Rafael Harth (sil-ver) · 2021-09-03T19:13:46.125Z · LW(p) · GW(p)
Super unoriginal observation, but I've only now found a concise way of putting this:
What's weird about the vast majority of people is that they (a) would never claim to be among the 0.1% smartest people of the world, but (b) behave as though they are among the best 0.1% of the world when it comes to forming accurate beliefs, as expressed by their confidence in their beliefs. (Since otherwise being highly confident in something that lots of smart people disagree with is illogical.)
Someone (Tyler Cowen?) said that most people ought to assign much lower confidences to their beliefs, like 52% instead of 99% or whatever. While this is upstream of the same observation, it has never sat right with me. I think it's because I wouldn't diagnose the problem as overconfidence but as [not realizing or ignoring] the implication "I'm confident" -> "I must be way better than almost everyone else at this process".
Replies from: dxu, Dagon, JBlack, LVSN↑ comment by dxu · 2021-09-03T19:35:17.177Z · LW(p) · GW(p)
I realize you're not exactly saying it outright, but some parts of your comment seem to be gesturing at the idea that smart people should adopt a "modesty norm" among themselves. I think this is a very bad idea for reasons EY already articulated, so I'd just like to clarify whether this is what you believe?
Replies from: sil-ver↑ comment by Rafael Harth (sil-ver) · 2021-09-04T17:03:38.151Z · LW(p) · GW(p)
Thanks for making that question explicit! That's not my position at all. I think many people who read Inadequate Equilibria are, in fact, among the top 0.1% of people when it comes to forming accurate beliefs. (If you buy into the rationality project at all, then this is much easier than being among the 0.1% most intelligent people.) As such, they can outperform most people and be justified in having reasonably confident beliefs.
This is also how I remember EY's argument. He was saying that we shouldn't apply modesty --because-- it is possible to know better than the vast majority of people.
A very relevant observation here is that there is real convergence happening among those people. If I take the set of my ~8 favorite public intellectuals, they tend to agree with close to zero exceptions on many of [the issues that I consider not that hard even though tons of people disagree about them]. Even among LW surveys, we had answers that are very different from the population mean.
Anyway, I don't think this is in any conflict with my original point. If you ask the average person with super confident beliefs, I'm pretty sure they are not likely to have an explicit belief of being among the top 0.1% when it comes to forming accurate beliefs (and of course, they aren't), and there's your inconsistency.
↑ comment by Dagon · 2021-09-04T15:48:30.668Z · LW(p) · GW(p)
I think there's a common confusion (and perhaps an inability below a certain cognitive ability) to recognize the difference between belief, policy, and action. For an even-money bet (losing costs the same utility as winning gains), your policy should be to bet on the most probable outcome, and your action, for a 52% chance of red, is to bet red.
There are other kinds of bets where the response is meant to be more proportionate to the probability, but a surprising number of actions end up being binary in result, even if they're highly uncertain when taking the action.
This leads to vastly overstating one's confidence, both when justifying decisions and when advising others about policy and actions.
Replies from: sil-ver↑ comment by Rafael Harth (sil-ver) · 2021-09-04T17:17:48.389Z · LW(p) · GW(p)
Is that really a relevant phenomenon? Many of the beliefs I was thinking about (say your opinion on immigration) don't affect real life choices at all, or at least not in a way that provides feedback on whether the belief was true.
Replies from: Dagon↑ comment by JBlack · 2021-09-04T12:44:45.945Z · LW(p) · GW(p)
Is it really that simple? I've seen a lot of ways in which people strongly express beliefs different from those expressed by a large majority of smart people. Most of the apparent reasons do not seem to boil down to overconfidence of any sort, but are related to the fact that expressions of belief are social acts with many consequences. Personally I have a reputation as a "fence-sitter" (apparently this is socially undesirable) since I often present evidence for and against various positions instead of showing "courage of convictions".
I wouldn't quite profess that beliefs being expressed are nothing but tokens in a social game and don't actually matter to how people actually think and act, but I'm pretty sure that they matter a lot less than the form and strength of expression indicates. People do seem to really believe what they say in the moment, but then continue with life without examining the consequences of that belief to their life.
I am not excluding myself from this assessment, but I would expect anyone reading or posting on this site to want to examine consequences of their expressed and unexpressed beliefs substantially more than most.
↑ comment by LVSN · 2021-09-03T19:40:16.265Z · LW(p) · GW(p)
Someone (Tyler Cowen?) said that most people ought to assign much lower confidences to their beliefs, like 52% instead of 99% or whatever.
oops I have just gained the foundational insight for allowing myself to be converted to (explicit probability-tracking-style) Bayesianism; thank you for that
I always thought "belief is when you think something is significantly more likely than not; like 90%, or 75%, or 66%." No; even just having 2% more confidence is a huge difference given how weak existing evidence is.
If one really rational debate-enjoyer thinks A is 2% more likely than its negation (51% vs. 49%), that's better than a hundred million people shouting that the negation of A is 100% likely.
Replies from: JBlack↑ comment by JBlack · 2021-09-04T11:44:44.115Z · LW(p) · GW(p)
To me, 0.02 is a comparatively tiny difference between likelihood of a proposition and its negation.
If P(A) = 0.51 and P(~A) = 0.49 then almost every decision I make based on A will give almost equal weight to whether it is true or false, and the cognitive process of working through implications on either side are essentially identical to the case P(A) = 0.49 and P(~A) = 0.51. The outcome of the decision will also be the same very frequently, since outcomes are usually unbalanced.
It takes quite a bit of contriving to arrange a situation where there is any meaningful difference between P(A) = 0.51 and P(A) = 0.49 for some real-world proposition A.
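A toy illustration of that point (the setup and numbers are mine, not from the comment): suppose acting as if $A$ yields utility $u_1$ if $A$ is true and $u_2$ if it is false, while acting as if $\neg A$ yields $v_1$ and $v_2$. With $P(A) = 0.51$ you act on $A$ iff
$0.51 \, (u_1 - v_1) > 0.49 \, (v_2 - u_2),$
and moving to $P(A) = 0.49$ only flips that inequality when the two payoff gaps $u_1 - v_1$ and $v_2 - u_2$ are within about 4% of each other in magnitude -- which is the "quite a bit of contriving" referred to above.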
Replies from: sil-ver, LVSN↑ comment by Rafael Harth (sil-ver) · 2021-09-04T17:14:46.653Z · LW(p) · GW(p)
Yeah, and this may get at another reason why the proposal doesn't seem right to me. There's no doubt that most people would be better calibrated if they adopted it, but 52% and 48% are the same for the average person, so it's completely impractical.
If anything, the proposal should be 'if you don't think you're particularly smart, your position on almost every controversial topic should be "I have no idea"'. Which still might not be good advice because there is disproportionate overlap between the set of people likely to take the advice and the set of people for whom it doesn't apply.
↑ comment by LVSN · 2021-09-05T08:38:06.823Z · LW(p) · GW(p)
If you think it's very important to consider all the possible adjacent interpretations of a proposition before making up your mind, it can be useful to register your initial agreement as only a small divergence from total uncertainty (the uncertainty representing your uncertainty about whether you'll come up with better interpretations of the thing you think you're confident about), across however many interpretations, before you move on to more ambitious numbers like 90%.
If you always do this and you wind up being wrong about some belief, then it is at least possible to think that the error you made was failing to list a sufficient number of sufficiently specific adjacent possibilities before asking yourself more seriously about what their true probabilities were. Making distinctions is a really important part of knowing the truth; don't pin all the hopes of every A-adjacent possibility on just one proposition in the set of A-adjacent possibilities. Two A-adjacent propositions can have great or critically moderate differences in likelihood; thinking only about A can mislead you about A-synonymous things.
comment by Rafael Harth (sil-ver) · 2021-08-15T22:00:43.367Z · LW(p) · GW(p)
Is there a reason why most languages don't have Ada's hierarchical (nested) functions? Making a function visible only inside another function is something I want to do all the time but can't.
Replies from: gwern↑ comment by gwern · 2021-08-15T22:03:53.631Z · LW(p) · GW(p)
What languages are you using that don't support that? Every language I use on a semi-monthly basis (Haskell, R, Python, Bash, Javascript, PHP, Elisp...) that I can think of supports defining a function inside a function (under various names like let/where local definitions, 'inner functions', what-have-you), and typically support even anonymous function definitions (lambdas).
Replies from: sil-ver↑ comment by Rafael Harth (sil-ver) · 2021-08-16T16:10:47.257Z · LW(p) · GW(p)
I was thinking about Java and Python. The fact that you can just use lambdas first occurred to me at some point in between writing this and seeing your answer. I don't know why it wasn't obvious.
Replies from: gwern↑ comment by gwern · 2021-08-16T16:35:39.676Z · LW(p) · GW(p)
Aside from lambdas, Python has 'inner functions' where you just def inside a def. Java has anonymous inner classes and private functions, and Java 8 adds lambdas; I had to google this one, but apparently Java even has "local classes" which sounds like an exact match for what you want?
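To make that concrete, here is a minimal Java sketch (the class and method names are hypothetical, purely for illustration) of a lambda and a "local class" that are visible only inside the enclosing method -- roughly the Ada-style nesting asked about above:

    import java.util.function.IntUnaryOperator;

    public class LocalFunctions {
        static int outer(int x) {
            // A lambda, usable only inside outer().
            IntUnaryOperator square = n -> n * n;

            // A "local class", also visible only inside outer().
            class Helper {
                int addOne(int n) { return n + 1; }
            }

            return new Helper().addOne(square.applyAsInt(x));
        }

        public static void main(String[] args) {
            System.out.println(outer(3)); // 3*3 + 1 = 10
        }
    }

Neither Helper nor square can be referenced from outside outer(), which is the scoping being asked for.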
↑ comment by Viliam · 2021-08-16T19:42:10.400Z · LW(p) · GW(p)
Lambdas in Java 8 can only capture local variables from the surrounding block if those variables are (effectively) final, i.e., read-only. For example, if you want to calculate the sum of the numbers between 1 and 100, this gives you a compile-time error:
int x = 0;
// compile-time error: local variables referenced from a lambda expression must be final or effectively final
IntStream.rangeClosed(1, 100).forEach(i -> x += i);
If memory serves me well, in Pascal, local functions could also write to the variables they could see.
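For completeness, two ways to make the Java 8 version work, as a sketch (the class name and the one-element array are just illustrative choices):

    import java.util.stream.IntStream;

    public class SumToHundred {
        public static void main(String[] args) {
            // Option 1: let the stream do the reduction -- no mutable capture needed.
            int sum = IntStream.rangeClosed(1, 100).sum();

            // Option 2: if mutation is really wanted, capture a mutable holder;
            // the array reference itself is effectively final, so the lambda may use it.
            int[] box = {0};
            IntStream.rangeClosed(1, 100).forEach(i -> box[0] += i);

            System.out.println(sum);    // 5050
            System.out.println(box[0]); // 5050
        }
    }

The first is the idiomatic fix; the second shows that the restriction is on reassigning the captured variable, not on mutating an object it refers to.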
comment by Rafael Harth (sil-ver) · 2021-07-19T12:58:49.780Z · LW(p) · GW(p)
Instead of explaining something to a rubber duck, why not explain it via an extensive comment? Maybe this isn't practical for projects with multiple people, but if it's personal code, writing it down seems better as a way to force rigor from yourself, and it's an investment into a possible future in which you have to understand the code once again.
comment by Rafael Harth (sil-ver) · 2021-02-11T17:27:00.746Z · LW(p) · GW(p)
Edit: this structure is not a field as proved by just_browsing [LW(p) · GW(p)].
Here is a wacky idea I've had forever.
There are a bunch of areas in math where you get expressions of the form $\frac{0}{0}$ and they resolve to some number, but it's not always the same number. I've heard some people say that "$\frac{0}{0}$ can be any number". Can we formalize this? The formalism would have to include $0 \cdot 4$ as something different than $0 \cdot 3$, so that if you divide the first by 0, you get 4, but the second gets 3.
Here is a way to turn this into what may be a field or ring. Each element is a function $f : \mathbb{Z} \to \mathbb{R}$, where a function of the form "$f(1) = 4$, zero everywhere else" reads as $4 \cdot 0^1$, i.e., as "$0 \cdot 4$"; in general, $f$ stands for the formal sum $\sum_{n} f(n) \cdot 0^n$. Addition is component-wise (adding coefficients of equal powers of 0; this makes sense), i.e., $(f+g)(n) = f(n) + g(n)$, and multiplication is, well, $0^n \cdot 0^m = 0^{n+m}$, so we get the rule $(f \cdot g)(n) = \sum_{i+j=n} f(i) \cdot g(j)$.
This becomes a problem once elements with infinite support are considered, i.e., functions that are nonzero at infinitely many values, since then the sum may not converge. But it's well defined for numbers with finite support. This is all similar to how polynomials are handled formally, except that polynomials only go in one direction (i.e., they're functions from $\mathbb{N}$ rather than $\mathbb{Z}$), and that also solves the non-convergence problem. Even if infinite polynomials are allowed, multiplication is well-defined since for any $n$, there are only finitely many pairs of natural numbers $(i,j)$ such that $i + j = n$.
The additively neutral element in this setting is the all-zero function, and the multiplicatively neutral element is $1 \cdot 0^0$. Additive inverses are easy; $(-f)(n) = -f(n)$. The interesting part is multiplicative inverses. Of course, there is no inverse of the all-zero function, so we still can't divide by the 'real' zero. But I believe all elements with finite support do have a multiplicative inverse (there should be a straight-forward inductive proof for this). Interestingly, those inverses are not finite anymore, but they are periodical. For example, the inverse of $0^1$ is just $0^{-1}$, but the inverse of $0^0 + 0^1$ is actually $0^0 - 0^1 + 0^2 - 0^3 + \cdots$
I think this becomes a field with well-defined operations if one considers only the elements with finite support and elements with inverses of finite support. (The product of two elements-whose-inverses-have-finite-support should itself have an inverse of finite support because $(f \cdot g)^{-1} = f^{-1} \cdot g^{-1}$, and the product of two finite-support elements again has finite support.) I wonder if this structure has been studied somewhere... probably without anyone thinking of the interpretation considered here.
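A quick worked check of the multiplication rule on a minimal example (using the convolution rule $(f \cdot g)(n) = \sum_{i+j=n} f(i) \cdot g(j)$ from above):
$(0^0 + 0^1) \cdot (0^0 - 0^1 + 0^2 - 0^3 + \cdots) = 0^0 + (1-1) \cdot 0^1 + (1-1) \cdot 0^2 + \cdots = 0^0,$
so the two-term element and the infinite periodical element really are multiplicative inverses under this rule.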
Replies from: tetraspace-grouping, just_browsing↑ comment by Tetraspace (tetraspace-grouping) · 2021-02-14T17:04:37.713Z · LW(p) · GW(p)
This looks like the hyperreal numbers, with your $0$ equal to their infinitesimal $\epsilon$.
↑ comment by just_browsing · 2021-02-14T01:01:19.854Z · LW(p) · GW(p)
If I'm correctly understanding your construction, it isn't actually using any properties of $0$. You're just looking at a formal power series (with negative exponents) and writing powers of $0$ instead of powers of $x$. Identifying $x$ with "$0$" gives exactly what you motivated— $0 \cdot 4$ and $0 \cdot 3$ (which are $4x$ and $3x$ when interpreted) are two different things.
The structure you describe (where we want elements and their inverses to have finite support) turns out to be quite small. Specifically, this field consists precisely of all monomials in $x$. Certainly all monomials work; the inverse of $c x^n$ is $c^{-1} x^{-n}$ for any $c \neq 0$ and $n \in \mathbb{Z}$.
To show that nothing else works, let $f$ and $g$ be any two nonzero sums of finitely many integer powers of $x$ (so, finite-support elements). Then, the leading term (product of the highest power terms of $f$ and $g$) will be some nonzero thing. But also, the smallest term (product of the lowest power terms of $f$ and $g$) will be some nonzero thing. Moreover, we can't get either of these to cancel out. So, the product can never be equal to $1$. (Unless both are monomials.)
For a concrete picture, think about multiplying two such sums: the leading term of the product is the product of the two highest power terms, and the smallest term is the product of the two lowest power terms. We can get all the inner stuff to cancel but never these two outside terms.
A larger structure to take would be formal Laurent series in $x$. These are sums of finitely many negative powers of $x$ and arbitrarily many positive powers of $x$. This set is closed under multiplicative inverses.
Equivalently, you can take the set of rational functions in $x$. You can recover the formal Laurent series from a rational function by doing long division / taking the Taylor expansion.
(If the object extends infinitely in the negative direction and is bounded in the positive direction, it's just a formal Laurent series in $x^{-1}$.)
If it extends infinitely in both directions, that's an interesting structure I don't know how to think about. For example, $\sum_{n \in \mathbb{Z}} x^n$ stays the same when multiplied by $x$. This means what we have isn't a field. I bet there's a fancy algebra word for this object but I'm not aware of it.
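To spell that out (a one-line check):
$x \cdot \sum_{n \in \mathbb{Z}} x^n = \sum_{n \in \mathbb{Z}} x^{n+1} = \sum_{m \in \mathbb{Z}} x^m,$
so $(x - 1) \cdot \sum_{n \in \mathbb{Z}} x^n = 0$ even though neither factor is zero -- a zero-divisor, which a field cannot have.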
Replies from: sil-ver↑ comment by Rafael Harth (sil-ver) · 2021-02-14T08:31:24.557Z · LW(p) · GW(p)
You've understood correctly minus one important detail:
The structure you describe (where we want elements and their inverses to have finite support)
Not elements and their inverses! Elements or their inverses. I've shown the example of $0^0 + 0^1$ to demonstrate that you quickly get infinite inverses, and you've come up with an abstract argument why finite inverses won't cut it:
To show that nothing else works, let $f$ and $g$ be any two nonzero sums of finitely many integer powers of $x$ (so, finite-support elements). Then, the leading term (product of the highest power terms of $f$ and $g$) will be some nonzero thing. But also, the smallest term (product of the lowest power terms of $f$ and $g$) will be some nonzero thing. Moreover, we can't get either of these to cancel out. So, the product can never be equal to $1$. (Unless both are monomials.)
In particular, your example of a non-monomial finite sum does have an inverse in this structure; it's just not of finite support. Perhaps a better way to describe this set is 'all you can build in finitely many steps using addition, inverse, and multiplication, starting from only elements with finite support'. Perhaps you can construct infinite-but-periodical elements with infinite-but-periodical inverses; if so, those would be in the field as well (if it's a field).
If you can construct $\sum_{n \in \mathbb{Z}} x^n$, it would not be a field. But constructing this may be impossible.
I'm currently completely unsure whether the resulting structure is a field. If you take a bunch of finite elements, take their infinite-but-periodical inverses, and multiply those inverses, the resulting element again has a finite inverse due to the argument I've shown in the previous comment. But if you use addition on one of them, things may go wrong.
A larger structure to take would be formal Laurent series in $x$. These are sums of finitely many negative powers of $x$ and arbitrarily many positive powers of $x$. This set is closed under multiplicative inverses.
Thanks; this is quite similar -- although not identical.
Replies from: just_browsing↑ comment by just_browsing · 2021-02-15T17:22:44.820Z · LW(p) · GW(p)
Perhaps a better way to describe this set is 'all you can build in finitely many steps using addition, inverse, and multiplication, starting from only elements with finite support'.
Ah, now I see what you are after.
But if you use addition on one of them, things may go wrong.
This is exactly right, here's an illustration:
Here is a construction of $\sum_{n \in \mathbb{Z}} x^n$: We have that $1 + x + x^2 + \cdots$ is the inverse of $1 - x$. Moreover, $x^{-1} + x^{-2} + x^{-3} + \cdots$ is the inverse of $x - 1$. If we want this thing to be closed under inverses and addition, then this implies that
$(1 + x + x^2 + \cdots) + (x^{-1} + x^{-2} + \cdots) = \sum_{n \in \mathbb{Z}} x^n$
can be constructed.
But this is actually bad news if you want your multiplicative inverses to be unique. Since $1 + x + x^2 + \cdots$ is the inverse of $1 - x$, we have that $-(1 + x + x^2 + \cdots)$ is also an inverse of $x - 1$. So then you get
$x^{-1} + x^{-2} + x^{-3} + \cdots = -(1 + x + x^2 + \cdots),$
so
$\sum_{n \in \mathbb{Z}} x^n = (1 + x + x^2 + \cdots) + (x^{-1} + x^{-2} + \cdots) = 0.$
On the one hand, this is a relief, because it explains the strange property that this thing stays the same when multiplied by $x$. On the other hand, it means that it is no longer the case that the coordinate representation is well-defined -- we can do operations which, by the rules, should produce equal outputs, but they produce different coordinates.
In fact, for any polynomial (such as the $1 - x$ above), you can find one inverse which uses arbitrarily high positive powers of $x$ and another inverse which uses arbitrarily low negative powers of $x$. The easiest way to see this is by looking at another example, let's say $2 - x$.
One way you can find the inverse of $2 - x$ is to get the $1$ out of the constant term and keep correcting: first you have $\frac{1}{2}$, then you have $\frac{1}{2} + \frac{1}{4}x$, then you have $\frac{1}{2} + \frac{1}{4}x + \frac{1}{8}x^2$, and so on.
Another way you can find the inverse of $2 - x$ is to write its terms in opposite order. So you have $-x + 2$ and you do the same correcting process, starting with $-x^{-1}$, then $-x^{-1} - 2x^{-2}$, and continuing in the same way.
Then subtract these two infinite series and you have a bidirectional sum of integer powers of $x$ which is equal to $0$.
My hunch is that any bidirectional sum of integer powers of $x$ which we can actually construct is "artificially complicated" and it can be rewritten as a one-directional sum of integer powers of $x$. So, this would mean that your number system is what you get when you take the union of Laurent series going in the positive and negative directions, where bidirectional coordinate representations are far from unique. Would be delighted to hear a justification of this or a counterexample.
Replies from: sil-ver↑ comment by Rafael Harth (sil-ver) · 2021-02-15T19:56:34.847Z · LW(p) · GW(p)
Here is a construction of $\sum_{n \in \mathbb{Z}} x^n$: We have that $1 + x + x^2 + \cdots$ is the inverse of $1 - x$. Moreover, $x^{-1} + x^{-2} + \cdots$ is the inverse of $x - 1$. [...]
Yeah, that's conclusive. Well done! I guess you can't divide by zero after all ;)
I think the main mistake I've made here is to assume that inverses are unique without questioning it, which of course doesn't make sense at all if I don't yet know that the structure is a field.
My hunch is that any bidirectional sum of integer powers of x which we can actually construct is "artificially complicated" and it can be rewritten as a one-directional sum of integer powers of x. So, this would mean that your number system is what you get when you take the union of Laurent series going in the positive and negative directions, where bidirectional coordinate representations are far from unique. Would be delighted to hear a justification of this or a counterexample.
So, I guess one possibility is that, if we identify elements that are provably equal in this structure, the resulting set of equivalence classes is isomorphic to the Laurent numbers. But another possibility could be that it all collapses into a single class -- right? At least I don't yet see a reason why that can't be the case (though I haven't given it much thought). You've just proven that some elements equal zero, perhaps it's possible to prove it for all elements.
Replies from: gjm↑ comment by gjm · 2021-02-15T21:37:31.546Z · LW(p) · GW(p)
If you allow series that are infinite in both directions, then you have a new problem which is that multiplication may no longer be possible: the sums involved need not converge. And there's also the issue already noted, that some things that don't look like they equal zero may in some sense have to be zero. (Meaning "absolute" zero = (...,0,0,0,...) rather than the thing you originally called zero, which should maybe be called something like $x$ instead.)
What's the best we could hope for? Something like this. Write R for $\mathbb{R}^{\mathbb{Z}}$, i.e., all formal potentially-double-ended Laurent series. There's an addition operation defined on the whole thing, and a multiplicative operation defined on some subset of pairs of its elements, namely those for which the relevant sums converge (or maybe are "summable" in some weaker sense). There are two problems: (1) some products aren't defined, and (2) at least with some ways of defining them, there are some zero-divisors -- e.g., $(x-1)$ times the sum of all powers of $x$, as discussed above. (I remark that if your original purpose is to be able to divide by zero, perhaps you shouldn't be too troubled by the presence of zero-divisors; contrapositively, that if they trouble you, perhaps you shouldn't have wanted to divide by zero in the first place.)
We might hope to deal with issue 1 by restricting to some subset A of R, chosen so that all the sums that occur when multiplying elements of A are "well enough behaved"; if issue 2 persists after doing that, maybe we might hope to deal with that by taking a quotient of A -- i.e., treating some of its elements as being equal to one another.
Some versions of this strategy definitely succeed, and correspond to things just_browsing already mentioned above. For instance, let A consist of everything in R with only finitely many negative powers of x, the Laurent series already mentioned; this is a field. Or let it consist of everything that's the series expansion of a rational function of x; this is also a field. This latter is, I think, the nearest you can get to "finite or periodic". The periodic elements are the ones whose denominator has degree at most 1. Degree <= 2 brings in arithmetico-periodic elements -- things that go, say, 1,1,2,2,3,3,4,4, etc. I'm pretty sure that degree <=d in the denominator is the same as coefficients being ultimately (periodic + polynomial of degree < d). And this is what you get if you say you want to include both 1 and x, and to be closed under addition, subtraction, multiplication, and division.
Maybe that's already all you need. If not, perhaps the next question is: is there any version of this that gives you a field and that allows, at least, some series that are infinite in both directions? Well, by considering inverses of (1-x)^k we can get sequences that grow "rightward" as fast as any polynomial. So if we want the sums inside our products to converge, we're going to need our sequences to shrink faster-than-polynomially as we move "leftward". So here's an attempt. Let A consist of formal double-ended Laurent series $\sum_n a_n x^n$ such that for $n < 0$ we have $|a_n| \leq C r^{|n|}$ for some $C$ and some $r < 1$, and for $n > 0$ we have $|a_n| \leq C n^k$ for some $C$ and $k$. Clearly the sum or difference of two of these has the same properties. What about products? Well, if we multiply together $a$ and $b$ to get $c$ then $c_n = \sum_{i+j=n} a_i b_j$. The terms with $i < 0 < j$ are bounded in absolute value by some constant times $r^{|i|} j^k$, where $r$ gets its value from $a$ and $k$ gets its value from $b$; so the sum of these terms is bounded by some constant times a polynomial in $n$ when $n > 0$, and by some constant times $r^{|n|}$ when $n < 0$. Similarly for the terms with $j < 0 < i$; the terms with $i$ and $j$ both of the same sign are bounded by a constant times $r^{|n|}$ when they're negative and by a constant times a polynomial in $n$ when they're positive. So, unless I screwed up, products always "work" in the sense that the sums involved converge and produce a series that's in A. Do we have any zero-divisors? Eh, I don't think so, but it's not instantly obvious.
Here's a revised version that I think does make it obvious that we don't have zero-divisors. Instead of requiring that for $n < 0$ we have $|a_n| \leq C r^{|n|}$ for some $r < 1$, require that to hold for all $r < 1$ (with the constant allowed to depend on $r$). Once again our products always exist and still lie in A. But now it's also true that for small enough $t$, the formal series themselves converge to well-behaved functions of $t$ (substituting $x = t$). In particular, there can't be zero-divisors.
I'm not sure any of this really helps much in your quest to divide by zero, though :-).
comment by Rafael Harth (sil-ver) · 2023-02-07T23:40:21.905Z · LW(p) · GW(p)
I know ChatGPT isn't great with math, but this seems quite bizarre.
Replies from: Dagon, ZT5↑ comment by Dagon · 2024-02-17T16:49:23.947Z · LW(p) · GW(p)
I get a different justification for the incorrect answer from ChatGPT-3.5. If I precede the question with "optimize for mathematical precision", I get the right answer. ChatGPT-4 gets it right the first time, for me. Even if I ask it "explain why 2023 is a prime number", it says it's not prime.
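For reference, the arithmetic the model is getting wrong: $2023 = 7 \cdot 17^2 = 7 \cdot 289$, so 2023 is not prime.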
↑ comment by Victor Novikov (ZT5) · 2023-02-09T03:55:12.601Z · LW(p) · GW(p)
This seems fairly typical of how ChatGPT does math, to me.
-come up with answer
-use "motivated reasoning" to try and justify it, even if it results in a contradiction
-ignore the contradiction, no matter how obvious it is
comment by Rafael Harth (sil-ver) · 2024-04-25T22:49:30.599Z · LW(p) · GW(p)
Are people in rich countries happier on average than people in poor countries? (According to GPT-4, the academic consensus is that they are, but I'm not sure it's representing it correctly.) If so, why do suicide rates increase with wealth (or is that a false positive)? Does the mean of the distribution go up while the tails don't, or something?
Replies from: peterbarnett↑ comment by peterbarnett · 2024-04-26T00:39:48.147Z · LW(p) · GW(p)
People in rich countries are happier than people in poor countries generally (this holds both for the share of people who say they are "happy" or "very happy" and for self-reported life satisfaction); see many of the graphs here: https://ourworldindata.org/happiness-and-life-satisfaction
In general it seems like richer countries also have lower suicide rates: "for every 1000 US dollar increase in the GDP per capita, suicide rates are reduced by 2%"
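As a rough illustration of what that quoted figure implies (assuming, as my reading of it, that the 2% reduction compounds per 1000 dollars; the source may intend something slightly different): a country with a GDP per capita 10,000 dollars higher would be expected to have a suicide rate of roughly $0.98^{10} \approx 0.82$ times the baseline, i.e., about 18% lower, all else equal.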
Replies from: Viliam↑ comment by Viliam · 2024-04-26T09:30:46.985Z · LW(p) · GW(p)
Possible bias, that when famous and rich people kill themselves, everyone is discussing it, but when poor people kill themselves, no one notices?
Also, I wonder what technically counts as "suicide"? Is drinking yourself to death, or a "suicide by cop", or just generally overly risky behavior included? I assume not. And these seem to me like methods a poor person would choose, while the rich one would prefer a "cleaner" solution, such as a bullet or pills. So the reported suicide rates are probably skewed towards the legible, and the self-caused death rate of the poor could be much higher.
comment by Rafael Harth (sil-ver) · 2023-01-15T19:57:50.034Z · LW(p) · GW(p)
LessWrong is trolling me:
Replies from: Raemon↑ comment by Raemon · 2023-01-15T20:29:57.332Z · LW(p) · GW(p)
Huh. Does this persist on refresh?
(according to the Review Leaderboard you've done exactly 3 reviews, there was some chance I screwed up the logic for >= vs >, but it looks like it appears normally for me when I manually set my review count to 3)
Replies from: sil-ver↑ comment by Rafael Harth (sil-ver) · 2023-01-15T20:51:02.822Z · LW(p) · GW(p)
No, can't reproduce it. (And 3 is correct.) Wouldn't be a serious bug anyway, I just thought it was funny.
comment by Rafael Harth (sil-ver) · 2022-11-12T15:33:50.617Z · LW(p) · GW(p)
This is not scientific, and it's still possible that it's an artifact of a low sample size, but my impression from following political real-money prediction markets is that they just have a persistent Republican bias in high-profile races, maybe because of 2016. I think you could have made good money by just betting on Democrats to win in every reasonably big market since then.
They just don't seem well calibrated in practice. I really want a single, widely-used, high-quality crypto market to exist.
comment by Rafael Harth (sil-ver) · 2022-04-05T10:16:04.763Z · LW(p) · GW(p)
You are probably concerned about AGI right now, with Eliezer's pessimism and all that. Let me ease your worries! There is a 0.0% chance that AGI is dangerous!
Don't believe me? Here is the proof. Let $A$ = "There is a 0.0% chance that AGI is dangerous". Let $B$ = "$B \implies A$".
- Suppose $B$ is true.
- Then by pure identity, "$B \implies A$" is true. Since $B$ and "$B \implies A$" are both true, this implies that $A$ is true as well!
We have shown that [if $B$ is true, then $A$ is true], thus we have shown "$B \implies A$". But this is precisely $B$, so we have shown (without making assumptions) that $B$ is true. As shown above, if $B$ is true then $A$ is true, so $A$ is true; qed.
Aside from being a joke, this is also the rough concept behind the infamous Löb's theorem Miri always talks about. (Of course Löb's theorem doesn't really use a formula to define itself, it gets around it in such a way that the resulting statement is actually true.)
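The same steps in compact form (a sketch of the standard Curry-style derivation, writing $B :\equiv (B \implies A)$; the numbering is mine):
1. $B$ (assumption)
2. $B \implies A$ (from 1, since $B$ is by definition the statement "$B \implies A$")
3. $A$ (modus ponens on 1 and 2)
4. $B \implies A$ (discharging the assumption made in 1)
5. $B$ (from 4, by the definition of $B$)
6. $A$ (modus ponens on 5 and 4)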
comment by Rafael Harth (sil-ver) · 2022-03-26T13:51:22.539Z · LW(p) · GW(p)
What is the best way to communicate that "whatever has more evidence is more likely true" is not the way to go about navigating life?
My go-to example is always "[god buried dinosaur bones to test our faith] fits the archeological evidence just as well as evolution", but I'm not sure how well that really gets the point across. Maybe something that avoids god, doesn't feel artificial, and where the unlikely hypothesis is more intuitively complex.
Replies from: Zack_M_Davis↑ comment by Zack_M_Davis · 2022-03-26T21:04:21.661Z · LW(p) · GW(p)
I flip a coin 10 times and observe the sequence HTHTHHTTHH. Obviously, the coin is rigged to produce that specific sequence: the "rigged to produce HTHTHHTTHH" hypothesis predicts the observed outcome with probability 1, whereas the "fair coin" hypothesis predicts that outcome with probability 0.00098.
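Spelling out the arithmetic behind that:
$P(\text{HTHTHHTTHH} \mid \text{rigged for that sequence}) = 1$, while $P(\text{HTHTHHTTHH} \mid \text{fair}) = 2^{-10} \approx 0.00098,$
so the likelihood ratio favors "rigged" by a factor of about 1024, and the conclusion is still absurd -- which is the point: fit to the evidence alone, without a prior or a complexity penalty, doesn't settle which hypothesis is more likely true.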
comment by Rafael Harth (sil-ver) · 2021-11-06T12:35:21.434Z · LW(p) · GW(p)
Something I've been wondering is whether most people misjudge their average level of happiness because they exclude a significant portion of their subjective experience. (I'm of course talking about the time spent dreaming.) Insofar as most dreams are pleasant, and this is certainly my experience, this could be a rational reason for [people who feel like their life isn't worth living] (definitely not talking about myself here!) to abstain from suicide. Probably not a very persuasive one, though, in most cases.
Relevant caveats:
- This will probably be less interesting the more dissimilar your moral views are from valence utilitarianism.
- Even for utilitarians, it excludes your impact on the lives of others. For EAs, this is hopefully the biggest part of the story!
↑ comment by Robbo · 2021-11-09T19:04:48.569Z · LW(p) · GW(p)
You might be interested in this post [LW(p) · GW(p)] by Harri Besceli [LW · GW], which argues that "the best and worst experiences you had last week probably happened when you were dreaming".
Eric Schwitzgebel has also written that philosophical hedonists, if consistent, would care more about the quality of dream experiences: https://schwitzsplinters.blogspot.com/2012/04/how-much-should-you-care-about-how-you.html
comment by Rafael Harth (sil-ver) · 2021-09-19T15:42:23.360Z · LW(p) · GW(p)
Keeping track of and communicating what you haven't understood is an underrated skill/habit. It's very annoying to talk to someone and think they've understood something, only to realize much later that they haven't. It also makes conversations much less productive.
It's probably more of a habit than a skill. There certainly are some contexts where the right thing to do is pretend that you've understood everything even though you haven't. But on net, people do it way too much, and I'm not sure to what extent they're fooling themselves.
↑ comment by Viliam · 2020-11-23T19:04:51.431Z · LW(p) · GW(p)
There are relative differences in both poor and rich countries; people anywhere can imagine what it would be like to live like their more successful neighbors. But maybe the belief in social mobility makes it worse, because it feels like you could be one of those on the top. (What's your excuse for not making a startup and selling it for $1M two years later?)
I don't have a TV and I use ad-blockers online, so I have no idea what a typical experience looks like. The little experience I have suggests that TV ads are about "desirable" things, but online ads mostly... try to make you buy some unappealing thing by telling you thousand times that you should buy it. Although once in a while they choose something that you actually want, and then the thousand reminders can be quite painful. People in poor countries probably spend much less time watching ads.