Comments
i kinda thought that ey's anti-philosophy stance was a bit extreme but this is blackpilling me pretty hard lmao
He actually cites reflective equilibrium here:
Closest antecedents in academic metaethics are Rawls and Goodman's reflective equilibrium, Harsanyi and Railton's ideal advisor theories, and Frank Jackson's moral functionalism.
If Thurston is right here and mathematicians want to understand why some theorem is true (rather than to just know the truth values of various conjectures), and if we "feel the AGI" ... then it seems future "mathematics" will consist in "mathematicians" asking future ChatGPT to explain math to them. Whether something is true, and why. There would be no research anymore.
The interesting question is, I think, whether less-than-fully-general systems, like reasoning LLMs, could outperform humans in mathematical research. Or whether this would require a full AGI that is also smarter than mathematicians. Because if we had the latter, it would likely be an ASI that is better than humans in almost everything, not just mathematics.
I think when people use the term "gradual disempowerment" predominantly in one sense, people will also tend to understand it in that sense. And I think that sense will be rather literal and not the one specifically of the original authors. Compare the term "infohazard" which is used differently (see comments here) from how Yudkowsky was using it.
Unrelated to vagueness, they can also just change the framework again at any time.
Reminds me of Schopenhauer's posthumously published manuscript The Art of Being Right: 38 Ways to Win an Argument.
In Richard Jeffrey's utility theory there is actually a very natural distinction between positive and negative motivations/desires. A plausible axiom is U(⊤) = 0 (the tautology has zero desirability: you already know it's true). Which implies with the main axiom[1] that the negation of any proposition with positive utility has negative utility, and vice versa. Which is intuitive: If something is good, its negation is bad, and the other way round. In particular, if P(A) = P(¬A) (indifference between A and ¬A), then U(¬A) = −U(A).
More generally, U(¬A) = −U(A) · P(A)/P(¬A). Which means that positive and negative utility of a proposition and its negation are scaled according to their relative odds. For example, while your lottery ticket winning the jackpot is obviously very good (large positive utility), having a losing ticket is clearly not very bad (small negative utility). Why? Because losing the lottery is very likely, far more likely than winning. Which means losing was already "priced in" to a large degree. If you learned that you indeed lost, that wouldn't be a big update, so the "news value" is negative but not large in magnitude.
Which means this utility theory has a zero point. Utility functions are therefore not invariant under adding an arbitrary constant. So the theory actually allows you to say that one outcome is "twice as good" as another, "three times as bad", "much better", etc. It's a ratio scale.
If P(A ∧ B) = 0 and P(A ∨ B) > 0, then U(A ∨ B) = (P(A)·U(A) + P(B)·U(B)) / (P(A) + P(B)) ↩︎
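A tiny numerical illustration of these relations (the probabilities and utilities are made up):

```python
# Jeffrey-style desirability: with U(tautology) = 0 and the averaging axiom,
# P(A)*U(A) + P(not-A)*U(not-A) = 0, so U(not-A) = -U(A) * P(A) / P(not-A).

def u_negation(p_a: float, u_a: float) -> float:
    """Utility of not-A implied by P(A), U(A) and the zero point U(⊤) = 0."""
    return -u_a * p_a / (1 - p_a)

# Lottery example: winning is very unlikely but very good.
p_win, u_win = 1e-7, 1_000_000.0
print(u_negation(p_win, u_win))  # ≈ -0.1: losing is only mildly bad ("priced in")

# Equal-odds example: with P(A) = P(not-A) = 0.5, U(not-A) = -U(A).
print(u_negation(0.5, 10.0))     # -10.0
```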
conducive to well-being
That in itself isn't a good definition, because it doesn't distinguish ethics from, e.g., medicine... and it doesn't tell you whose well-being. De facto, people are ethically obliged to do things which are against their well-being and to refrain from doing some things which promote their own well-being... I can't rob people to pay my medical bills.
Promoting your own well-being only would be egoism, while ethics seems to be more similar to altruism.
Whose desires?
I guess of all beings that are conscious. Perhaps relative to their degree of consciousness. Though those are all questions which actual theories in normative ethics try to answer.
Why?
Not sure what this is asking for, but if it is "why is this analysis correct rather than another, or none?" - because of the meaning of the involved term. (Compare "why not count bushes as "trees" as well?" - "because that would be talking about something else")
The various forms of theories in normative ethics (e.g. the numerous theories of utilitarianism, or Extrapolated Volition) can be viewed as attempts to analyze what terms like “ethics” or “good” mean exactly.
They could also be seen as attempts to find different denotations of a term with shared connotation.
This doesn't reflect the actual methodology, where theories are judged in thought experiments on whether they satisfy our intuitive, pre-theoretical concepts. That's the same as in other areas of philosophy where conceptual analysis is performed.
Related: zettelkasten, a different note-taking method, where each card gets an address.
Many attempts at establishing an objective morality try to argue from considerations of human well-being. OK, but who decided that human well-being is what is important? We did!
That's a rather minimal amount of subjectivism. Everything downstream of that can be objective, so it's really a compromise position.
It's also possible (and I think very probable) that "ethical" means something like "conducive to well-being". Similar to how "tree" means something like "plant with a central wooden trunk". Imagine someone objecting: "OK, but who decided that trees need to have a wooden trunk? We did!" That's true in some weak sense (though nobody really "decided" that "tree" refers to trees), but it doesn't mean it's subjective whether or not trees have a wooden trunk.
Though I think the meaning of "ethical" is a bit different, as it doesn't just take well-being into account but also desires. The various forms of theories in normative ethics (e.g. the numerous theories of utilitarianism, or Extrapolated Volition) can be viewed as attempts to analyze what terms like "ethics" or "good" mean exactly.
That's some careful analysis!
Two remarks:
1
"Can" is the opposite of "unable". "Unable" means that the change involves granting ability to they who would act, i.e. teaching a technique, providing a tool, fixing the body, or altering the environment.
That's a good characterization, though arguably not a definition, as it relies on "ability", which is circular. I can do something = I have the ability to do something. I can = I'm able to.
But we can use the initial principle (it really needs a name) which doesn't mention ability:
You do a thing iff you can do it and you want to do it.
"Iff" behaves similar to an equation, so we can solve for "can", similar to algebra in arithmetic. I don't know the exact algebra of "iff", but solving for "can" arguably yields:
"I can do X" iff "If I wanted to do X, I would do X"
Which uses wanting and a counterfactual to define "can". We could also define:
"I want to do X" iff "If I could to do X, I would do X"
Though those two definitions together are circular. Maybe it is better to regard one concept as more basic than the other, and only define the less basic one in terms of the more basic one. It seems to me that "want" is more basic than "can", so I would only define "can" in terms of "want", and leave the definition for "want" open (for now).
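A quick truth-table check of that "solving for can" step, treating the counterfactual as a plain material conditional (which is of course a simplification of what a counterfactual means):

```python
from itertools import product

# Principle: Do(X) <-> (Can(X) and Want(X)).
# Proposed definition: Can(X) := Want(X) -> Do(X), i.e. a material-conditional
# stand-in for "if I wanted to do X, I would do X".

for do, want in product([True, False], repeat=2):
    can = (not want) or do          # Want -> Do
    principle_holds = (do == (can and want))
    print(f"do={do!s:5} want={want!s:5} can={can!s:5} principle={principle_holds}")

# The principle holds in every row except do=True, want=False, i.e. the
# definition is consistent with the principle as long as you never do
# something you don't want to do -- which is exactly what the principle says.
```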
2
Regarding the Confusing Cases. There are at least two canonical classes: akrasia (weakness of will) and addiction. "Can" the addict quit smoking? "Can" the person suffering from akrasia just do The Thing?
A better question perhaps: In which sense is the answer to the above "yes" and in which sense is it "no"?
One possibility is to analyze these cases in terms of first-order vs second-order desires. The first-order desire would be smoking or being lazy, the second order desire would be not to have the (first-order) desire to smoke, or be lazy. Second-order desires seem to be more important or rational than the first order desires. The second-order and first-order desire are "inconsistent" here. If the first-order desire to smoke is stronger than the second-order desire not to have the first-order desire, I don't quit smoking. (Or don't overcome my laziness in case of akrasia). Otherwise I do.
Here, the "can" definition as "If I wanted to do X, I would do X" is ambiguous. If it means "If I had the first-order desire to quit smoking, I would quit smoking", then the sentence is true, and I "can" overcome the addiction (or the akrasia). If it means "If I had a second-order desire to not have the first order desire to smoke, I would quit smoking", then the sentence is false, as I do in fact have the second-order desire but still don't quit. So in this sense it's not true that I "can" quit.
A similar but different analysis wouldn't phrase it in terms of first- and second-order desires, but in terms of rational wishes and a-rational urges. I wish to quit smoking, but I have the urge to smoke. I wish to get to work, but I have the urge to be lazy. If we count, in the definition of "can", urges as "wanting", I can stop smoking; if we count wishes as "wanting", I can't. By a similar argument as above.
I think these cases are actually not a major problem of the "can" definition. After all, it seems in fact ambiguous to ask whether someone "can" overcome some case of akrasia or addiction. The definition captures that ambiguity.
Your headline overstates the results. The last common ancestor of birds and mammals probably wasn't exactly unintelligent. (In contrast to our last common ancestor with the octopus, as the article discusses.)
"the" supposes there's exactly one canonical choice for what object in the context is indicated by the predicate. When you say "the cat" there's basically always a specific cat from context you're talking about. "The cat is in the garden" is different from "There's exactly one cat in the garden".
Yes, we have a presupposition that there is exactly one cat. But that presupposition is the same regardless of the actual number of cats (regardless of the context), because the "context" here is a feature of the external world ("territory"), while the belief is a part of the "map"/world model/mind. So when we want to formalize the meaning of "The cat is in the garden", that formalization has to be independent of the territory, that is, the same for any possible way the world is. So we can't use individual constants. Because those can't be used for cases where there is no cat or more than one. The mental content of a belief (the semantic content of a statement) is internal, so it doesn't depend on what the external world is like.
I mean there has to be some possibility for revising your world model if you notice that there are actually 2 objects for something where you previously thought there's only one.
The important part is that your world model doesn't need to depend on what the world is like. If you believe that the cat is in the garden, that belief is the same independently of whether the presuppositions it makes are true. Therefore we cannot inject parts of the territory into the map. Or rather: there is no such injection, and if our formalization of beliefs (map/world model) assumes otherwise, that formalization is wrong.
Yeah I didn't mean this as a formal statement. Formal would be:
{exists x: apple(x) AND location(x, on=Table342)} CAUSES {exists x: apple(x) AND see(SelfPerson, x)}
Here you use two individual constants: Table342 and SelfPerson. Individual constants can only be used for direct reference, where unique reference can't fail. So they can only be used for internal (mental) objects. So "SelfPerson" is okay, because you know a priori that you exist uniquely. If you didn't have a body, you could still refer to yourself, and it's not possible that you accidentally refer to more than one person, like a copy of you. You are part of your mind, your internal state. But "Table342" is an external object. It might not exist, or multiple such tables might exist even though you presupposed there was only one. "Table342" is an individual constant, and individual constants are incompatible with presupposition failure. So it can't be used. That formalization is incompatible with possible worlds where the table doesn't exist uniquely. But you have the same belief whether or not your presupposition is satisfied. So the formalization is faulty. We have to use one where no constants are used for reference to external things like tables.
What I was saying was that we can, from our subjective perspective, only "point" to or "refer" to objects in a certain way. In terms of predicate logic the two ways of referring are via a) individual constants and b) variable quantification. The first corresponds to direct reference, where the reference always points to exactly one object. Mental objects can presumably be referred to directly. For other objects, like physical ones, quantifiers have to be used. Like "at least one" or "the" (the latter only presupposes there is exactly one object satisfying some predicate). E.g. "the cat in the garden". Perhaps there is no cat in the garden or there are several. So it (the cat) cannot be logically represented with a constant. "I" can be, but "this" again cannot. Even ordinary proper names of people cannot, because they aren't guaranteed to refer to exactly one object. Maybe "Superman" is actually two people with the same dress, or he doesn't exist, being the result of a hallucination. This case can be easily solved by treating those names as predicates. Compare:
- The woman believes the superhero can fly.
- The superhero is the colleague.
The above only has quantifiers and predicates, no constants. The original can be handled analogously:
- (The) Mia believes (the) Superman can fly.
- (The) Superman is (the) Clark Kent.
The names are also logical predicates here. In English you wouldn't pronounce the definite articles for the proper nouns here, but in other languages you would.
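For concreteness, the second sentence could be rendered Russell-style, treating the names as uniqueness-carrying predicates (a sketch; one of several possible analyses):

```latex
% "(The) Superman is (the) Clark Kent", with the names as predicates:
\exists x \,\exists y \,\big(\mathrm{Superman}(x) \land \forall u\,(\mathrm{Superman}(u) \to u = x)
  \land \mathrm{ClarkKent}(y) \land \forall v\,(\mathrm{ClarkKent}(v) \to v = y)
  \land x = y\big)
```

No individual constants appear; if there is no Superman, or more than one, the sentence comes out false rather than failing to refer.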
Indicators like "here"/"tomorrow"/"the object I'm pointing to" don't get stored directly in beliefs. They are pointers used for efficiently identifying some location/time/object from context, but what get's saved in the world model is the statement where those pointers were substituted for the referent they were pointing to.
As I argued above, "pointing" (referring) is a matter of logic, so I would say assuming the existence of separate "pointers" is mistake.
You can say "{(the fact that) there's an apple on the table} causes {(the fact that) I see an apple}"
But that's not primitive in terms of predicate logic, because here "the" in "the table" means "this" which is not a primitive constant. You don't mean any table in the world, but a specific one, which you can identify in the way I explained in my previous comment. I don't know how it would work with fact causation rather than objects, though there might be an appropriate logical analysis.
I think object identification is important if we want to analyze beliefs instead of sentences. For beliefs we can't take a third person perspective and say "it's clear from context what is meant". Only the agent knows what he means when he has a belief (or she). So the agent has to have a subjective ability to identify things. For "I" this is unproblematic, because the agent is presumably internal and accessible to himself and therefore can be subjectively referred to directly. But for "this" (and arguably also for terms like "tomorrow") the referred object depends partly on facts external to the agent. Those external facts might be different even if the internal state of the agent is the same. For example, "this" might not exist, so it can't be a primitive term (constant) in standard predicate logic.
One approach would be to analyze the belief that this apple is green as "There is an x such that x is an apple and x is green and x causes e." Here "e" is a primitive term (similar to "I" in "I'm hungry") that refers to the current visual experience of a green apple.
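In symbols, with e as the only constant (a sketch):

```latex
\exists x \,\big(\mathrm{Apple}(x) \land \mathrm{Green}(x) \land \mathrm{Causes}(x, e)\big)
```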
So e is subjective experience and therefore internal to the agent. So it can be directly referred to, while this (the green apple he is seeing) is only indirectly referred to (as explained above), similar to "the biggest tree", "the prime minister of Japan", "the contents of this box".
Note the important role of the term "causes" here. The belief is representing a hypothetical physical object (the green apple) causing an internal object (the experience of a green apple). Though maybe it would be better to use "because" (which relates propositions) instead of "causes", which relates objects or at least noun phrases. But I'm not sure how this would be formalized.
Yeah. I proposed a while ago that all the AI content was becoming so dominant that it should be hived off to the Alignment Forum while LessWrong is for all the rest. This was rejected.
Maybe I missed it, but what about indexical terms like "I", "this", "now"?
On the front page there is still the option to filter out the AI tag completely.
That difference is rather extreme. It seems LLM companies have a strong winner-take-all market tendency. Similar to Google (web search) or Amazon (online retail) in the past. It seems now much more likely to me that ChatGPT has basically already won the LLM race, similar to how Google won the search engine race in the past. Gemini outperforming ChatGPT in a few benchmarks likely won't make a difference.
[...] because it is embedded natively, deep in the architecture of our omnimodal GPT‑4o model, 4o image generation can use everything it knows to apply these capabilities in subtle and expressive ways [...] Unlike DALL·E, which operates as a diffusion model, 4o image generation is an autoregressive model natively embedded within ChatGPT.
To operationalise this: a decision theory usually assumes that you have some number of options, each with some defined payout. Assuming payouts are fixed, all decision theories simply advise you to pick the outcome with the highest utility.
The theories typically assume that each choice option has a number of known, mutually exclusive (and jointly exhaustive) possible outcomes. And to each outcome the agent assigns a utility and a probability. So uncertainty is in fact modelled insofar as the agent can assign subjective probabilities to those outcomes occurring. The expected utility of an option is then something like the sum, over its outcomes, of probability times utility.
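A minimal sketch of that standard setup (options, outcomes and numbers invented for illustration):

```python
# Each option has mutually exclusive, jointly exhaustive outcomes,
# each with a subjective probability and a utility.
options = {
    "take umbrella":  [(0.3, 5.0), (0.7, 8.0)],     # (P(outcome), U(outcome)) pairs
    "leave umbrella": [(0.3, -10.0), (0.7, 10.0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

for option, outcomes in options.items():
    print(option, expected_utility(outcomes))

best = max(options, key=lambda o: expected_utility(options[o]))
print("choose:", best)   # the option with the highest expected utility
```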
Other uncertainties are not covered in decision theory. E.g. 1) if you are uncertain what outcomes are possible in the first place, 2) if you are uncertain what utility to assign to a possible outcome, 3) if you are uncertain what probability to assign to a possible outcome.
I assume you are talking about some of the latter uncertainties?
(This is off-topic but I'm not keen on calling LLMs "he" or "she". Grok is not a man, nor a woman. We shouldn't anthropomorphize language models. We already have an appropriate pronoun for those: "it")
There is also Deliberation in Latent Space via Differentiable Cache Augmentation by Liu et al. and Efficient Reasoning with Hidden Thinking by Shen et al.
I think picking axioms is not necessary here and in any case inconsequential.
By picking your axioms you logically pinpoint what you are talking about in the first place. Have you read Highly Advanced Epistemology 101 for Beginners? I'm noticing that our inferential distance is larger than it should be otherwise.
I read it a while ago, but he overstates the importance of axiom systems. E.g. he wrote:
You need axioms to pin down a mathematical universe before you can talk about it in the first place. The axioms are pinning down what the heck this 'NUM-burz' sound means in the first place - that your mouth is talking about 0, 1, 2, 3, and so on.
That's evidently not true. Mathematicians studied arithmetic for two thousand years before it was axiomatized by Dedekind and Peano. Likewise, mathematical statisticians have studied probability theory long before it was axiomatized by Kolmogorov in the 1930s. Advanced theorems preceded these axiomatizations. Mathematicians rarely use axiom systems in their work even if they are theoretically available. That's why it is hard to translate proofs into Lean code. Mathematicians just use well-known mathematical facts (that are considered obvious or already sufficiently established by others) as assumptions for their proofs.
No, you are missing the point. I'm not saying that this phrase has to be an axiom itself. I'm saying that you need to somehow axiomatically define your individual words, assign them meaning, and only then, in regard to these language axioms, the phrase "Bachelors are unmarried" is valid.
That's obviously not necessary. We neither do nor need to "somehow axiomatically define" our individual words for "Bachelors are unmarried" to be true. What would these axioms even be? Clearly the sentence has meaning and is true without any axiomatization.
I wouldn't generally dismiss an "embarassing & confusing public meltdown" when it comes from a genius. Because I'm not a genius while he or she is. So it's probably me who is wrong rather than him. Well, except if the majority of comparable geniuses agrees with me rather than with him. Though geniuses are rare, and majorities are hard to come by. I still remember an (at the time) "embarrassing and confusing meltdown" by some genius.
My point is that if your picking of particular axioms is entangled with reality, then you are already using a map to describe some territory. And then you can just as well describe this territory more accurately.
I think picking axioms is not necessary here and in any case inconsequential. "Bachelors are unmarried" is true whether or not I regard it as some kind of axiom. It seems the same holds for tautologies and probabilistic laws. Moreover, I think neither of them is really "entangled" with reality, in the sense that they are compatible with any possible reality. They merely describe what's possible in the first place. That bachelors can't be married is not a fact about reality but a fact about the concept of a bachelor and the concept of marriage.
Rationality is about systematic ways to arrive to correct map-territory correspondence. Even if in your particular situation no one is exploiting you, the fact that you are exploitable in principle is bad. But to know what is exploitable in principle we generalize from all the individual acts of exploitation. It all has to be grounded in reality in the end.
Suppose you are not instrumentally exploitable "in principle", whatever that means. Then it arguably would still be epistemically irrational to believe that "Linda is a feminist and a bank teller" is more likely than "Linda is a bank teller". Moreover, it is theoretically possible that there are cases where it is instrumentally rational to be epistemically irrational. Maybe someone rewards people with (epistemically) irrational beliefs. Maybe theism has favorable psychological consequences. Maybe Pascal's Wager is instrumentally rational. So epistemic irrationality can't in general be explained with instrumental irrationality as the latter may not even be present.
You've said yourself, meaning is downstream of experience. So in the end you have to appeal to reality while trying to justify it.
I don't think we have to appeal to reality. Suppose the concept of bachelorhood and marriage had never emerged. Or suppose humans had never come up with logic and probability theory, and not even with language at all. Or humans had never existed in the first place. Then it would still be true that all bachelors are necessarily unmarried, and that tautologies are true. Moreover, it's clear that long before the actual emergence of humanity and arithmetic, two dinosaurs plus three dinosaurs already were five dinosaurs. Or suppose the causal history had only been a little bit different, such that "blue" means "green" and "green" means "blue". Would it then be the case that grass is blue and the sky is green? Of course not. It would only mean that we say "grass is blue" when we mean that it is green.
Somewhat related: A critique of "bad faith".
Do you really have access to the GPT-4 base (foundation) model? Why? It's not publicly available.
Yes, the meaning of a statement depends causally on empirical facts. But this doesn't imply that the truth value of "Bachelors are unmarried" depends on anything more than its meaning. Its meaning (M) screens off the empirical facts (E) from its truth value (T). The causal graph looks like this:
E —> M —> T
If this graph is faithful, it follows that E and T are conditionally independent given M: P(T | M, E) = P(T | M). So if you know M, E gives you no additional information about T.
And the same is the case for all "analytic" statements, where the truth value only depends on its meaning. They are distinguished from synthetic statements, where the graph looks like this:
E —> M —> T
|_________^
That is, we have an additional direct influence of the empirical facts on the truth value. Here E and T are no longer conditionally independent given M.
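To illustrate the screening-off claim for the first (chain) graph numerically, here is a small sketch with binary E, M, T and arbitrary made-up numbers:

```python
from itertools import product

# Chain E -> M -> T: specify P(E), P(M|E), P(T|M); then check P(T|M,E) = P(T|M).
P_E = {1: 0.3, 0: 0.7}
P_M_given_E = {1: 0.9, 0: 0.2}       # P(M=1 | E=e)
P_T_given_M = {1: 0.8, 0: 0.1}       # P(T=1 | M=m)

def joint(e, m, t):
    pm = P_M_given_E[e] if m == 1 else 1 - P_M_given_E[e]
    pt = P_T_given_M[m] if t == 1 else 1 - P_T_given_M[m]
    return P_E[e] * pm * pt

def cond_T(m, e=None):
    """P(T=1 | M=m), or P(T=1 | M=m, E=e) if e is given."""
    num = sum(joint(ee, m, 1) for ee in (0, 1) if e is None or ee == e)
    den = sum(joint(ee, m, t) for ee in (0, 1) for t in (0, 1) if e is None or ee == e)
    return num / den

for m, e in product((0, 1), repeat=2):
    print(m, e, cond_T(m), cond_T(m, e))   # the last two columns coincide
```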
I think that logical and probabilistic laws are analytic in the above sense, rather than synthetic. Including axioms. There are often alternative axiomatizations of the same laws. So P(A ∨ B) = P(A) + P(B) − P(A ∧ B) and P(A ∨ B) = P(A) + P(B) for mutually exclusive A and B are equally analytic, even though only the latter is used as an axiom.
Being Dutch-bookable is considered irrational because you systematically lose your bets.
I think the instrumental justification (like Dutch book arguments) for laws of epistemic rationality (like logic and probability) is too weak. Because in situations where there happens to be in fact no danger of being exploited by a Dutch book (because there is nobody who would do such an exploit) it is not instrumentally irrational to be epistemically irrational. But you continue to be epistemically irrational if you have e.g. incoherent beliefs. So epistemic rationality cannot be grounded in instrumental rationality. Epistemic rationality laws being true in virtue of their meaning alone (being analytic) therefore seems a more plausible justification for epistemic rationality.
It seems clear to me that statements expressing logical or probabilistic laws like ¬(A ∧ ¬A) or P(A) ≥ P(A ∧ B) are "analytic". Similar to "Bachelors are unmarried".
The truth of a statement in general is determined by two things, its meaning and what the world is like. But for some statements the latter part is irrelevant, and their meanings alone are sufficient to determine their truth or falsity.
Not to remove all limitations: I think the probability axioms are a sort of "logic of sets of beliefs". If the axioms are violated the belief set seems to be irrational. (Or at least the smallest incoherent subset that, if removed, would make the set coherent.) Conventional logic doesn't work as a logic for belief sets, as the preface and lottery paradox show, but subjective probability theory does work. As a justification for the axioms: that seems a similar problem to justifying the tautologies / inference rules of classical logic. Maybe an instrumental Dutch book argument works. But I do think it does come down to semantic content: If someone says "P(A and B)>P(A)" it isn't a sign of incoherence if he means with "and" what I mean with "or".
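The lottery paradox as a quick calculation (a sketch; 1000 tickets, exactly one winner):

```python
n = 1000
threshold = 0.99                       # "believe" anything above this probability

p_i_loses = (n - 1) / n                # 0.999 for each individual ticket
p_all_lose = 0.0                       # impossible: exactly one ticket wins

believe_each = p_i_loses > threshold   # True: you'd accept "ticket i loses" for every i
believe_all = p_all_lose > threshold   # False: the conjunction is certainly false

print(believe_each, believe_all)
# A belief set closed under conjunction (classical logic) would force belief in
# "all tickets lose", yet the probability assignment itself stays perfectly coherent.
```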
Regarding the map representing the territory: That's a more challenging thing to formalize than just logic or probability theory. It would amount to a theory of induction. We would need to formalize and philosophically justify at least something like Ockham's razor. There are some attempts, but I think no good solution.
Well, technically P(Ω)=1 is an axiom, so you do need a sample space if you want to adhere to the axioms.
For a propositional theory this axiom is replaced with P(⊤) = 1, i.e. a tautology in classical propositional logic receives probability 1.
But sure, if you do not care about accurate beliefs and systematic ways to arrive to them at all, then the question is, indeed, not interesting. Of course then it's not clear what use is probability theory for you, in the first place.
Degrees of belief adhering to the probability calculus at any point in time rules out things like "Mary is a feminist and a bank teller" simultaneously receiving a higher degree of belief than "Mary is a bank teller". It also requires e.g. that if P(A) = 1 and P(B) = 1 then P(A ∧ B) = 1. That's called "probabilism" or "synchronic coherence".
Another assumption is typically that the new P(A) equals the old P(A | E) after "observing" E. This is called "conditionalization" or sometimes "diachronic coherence".
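A minimal sketch of both constraints (the numbers are made up):

```python
# Synchronic coherence (probabilism): e.g. P(A and B) must not exceed P(A).
P = {"bank teller": 0.10, "bank teller and feminist": 0.02}
assert P["bank teller and feminist"] <= P["bank teller"]

# Diachronic coherence (conditionalization): after observing E,
# the new credence in A should be the old P(A | E) = P(A and E) / P(E).
P_A_and_E, P_E = 0.02, 0.10
P_new_A = P_A_and_E / P_E      # 0.2
print(P_new_A)
```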
And how would you know which worlds are possible and which are not?
Yes, that's why I only said "less arbitrary".
Regarding "knowing": In subjective probability theory, the probability over the "event" space is just about what you believe, not about what you know. You could theoretically believe to degree 0 in the propositions "the die comes up 6" or "the die lands at an angle". Or that the die comes up as both 1 and 2 with some positive probability. There is no requirement that your degrees of belief are accurate relative to some external standard. It is only assumed that the beliefs we do have compose in a way that adheres to the axioms of probability theory. E.g. P(A)≥P(A and B). Otherwise we are, presumably, irrational.
A less arbitrary way to define a sample space is to take the set of all possible worlds. Each event, e.g. a die roll, corresponds to the disjunction of possible worlds where that event happens. The possible worlds can differ in a lot of tiny details, e.g. the exact position of a die on the table. Even just an atom being different at the other end of the galaxy would constitute a different possible world. A possible world is a maximally specific way the world could be. So two possible worlds are always mutually exclusive. And the set of all possible worlds includes every possible way reality could be. There are no excluded possibilities like a die falling on the floor.
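A toy version of that construction (three made-up "worlds", far less detailed than real maximally specific worlds would be):

```python
# Each "world" fixes everything; an event is just the set of worlds where it holds.
worlds = {
    "w1": {"die": 6, "on_table": True},
    "w2": {"die": 3, "on_table": True},
    "w3": {"die": 6, "on_table": False},   # the die fell on the floor
}
credence = {"w1": 0.2, "w2": 0.5, "w3": 0.3}   # sums to 1 over all possibilities

def P(event):
    """Probability of an event, i.e. of a set (disjunction) of worlds."""
    return sum(credence[w] for w in event)

comes_up_six = {w for w, state in worlds.items() if state["die"] == 6}
print(P(comes_up_six))   # 0.5 -- includes the floor world; no possibility is excluded
```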
But for subjective probability theory a "sample space" isn't even needed at all. A probability function can simply be defined over a Boolean algebra of propositions. Propositions ("events") are taken to be primary instead of being defined via primary outcomes of a sample space. We just have beliefs in some propositions, and there is nothing psychological corresponding to outcomes of a sample space. We only need outcomes if probabilities are defined to be ratios of frequencies of outcomes. Likewise, "random variables" or "partitions" don't make sense for subjective probability theory: there are just propositions.
I think the main problem from this evolutionary perspective is not so much entertainment and art, but low fertility. Not having children.
A drug that fixes akrasia without major side-effects would indeed be the Holy Grail. Unfortunately I don't think caffeine does anything of that sort. For me it increases focus, but it doesn't combat weakness of will, avoidance behavior, ugh fields. I don't know about other existing drugs.
I think the main reason is that until a few years ago, not much AI research came out of China. Gwern highlighted this repeatedly.
I agree with the downvoters that the thesis of this post seems crazy. But aren't entertainment and art superstimuli? Aren't they forms of wireheading?
Hedonic and desire theories are perfectly standard, we had plenty of people talking about them here, including myself. Jeffrey's utility theory is explicitly meant to model (beliefs and) desires. Both are also often discussed in ethics, including over at the EA Forum. Daniel Kahneman has written about hedonic utility. To equate money with utility is a common simplification in many economic contexts, where expected utility is actually calculated, e.g. when talking about bets and gambles. Even though it isn't held to be perfectly accurate. I didn't encounter the reproduction and energy interpretations before, but they do make some sense.
A more ambitious task would be to come up with a model that is more sophisticated than decision theory, one which tries to formalize your previous comment about intent and prediction/belief.
Interesting. This reminds me of a related thought I had: Why do models with differential equations work so often in physics but so rarely in other empirical sciences? Perhaps physics simply is "the differential equation science".
Which is also related to the frequently expressed opinion that philosophy makes little progress because everything that gets developed enough to make significant progress splits off from philosophy. Because philosophy is "the study of ill-defined and intractable problems".
Not saying that I think these views are accurate, though they do have some plausibility.
It seems to be only "deception" if the parent tries to conceal the fact that he or she is simplifying things.
There is also the related problem of intelligence being negatively correlated with fertility, which leads to a dysgenic trend. Even if preventing people below a certain level of intelligence from having children were realistically possible, it would make another problem more severe: the fertility of smarter people is far below replacement, leading to quickly shrinking populations. Though fertility is likely partially heritable, and would go up again after some generations, once the descendants of the (currently rare) high-fertility people start to dominate.
This seems to be a relatively balanced article which discusses several concepts of utility with a focus on their problems, while acknowledging some of their use cases. I don't think the downvotes are justified.
That's an interesting perspective. Only it doesn't seem to fit into the simplified but neat picture of decision theory. There everything is sharply divided between being either a statement we can make true at will (an action we can currently decide to perform) and to which we therefore do not need to assign any probability (have a belief about it happening), or an outcome, which we can't make true directly, that is at most a consequence of our action. We can assign probabilities to outcomes, conditional on our available actions, and a value, which lets us compute the "expected" value of each action currently available to us. A decision is then simply picking the currently available action with the highest computed value.
Though as you say, such a discretization for the sake of mathematical modelling does fit poorly with the continuity of time.
Maybe this is avoided by KV caching?
This is not how many decisions feel to me - many decisions are exactly a belief (complete with Bayesian uncertainty). A belief in future action, to be sure, but it's distinct in time from the action itself.
But if you only have a belief that you will do something in the future, you still have to decide, when the time comes, whether to carry out the action or not. So your previous belief doesn't seem to be an actual decision, but rather just a belief about a future decision -- about which action you will pick in the future.
See Spohn's example about believing ("deciding") you won't wear shorts next winter:
One might object that we often do speak of probabilities for acts. For instance, I might say: "It's very unlikely that I shall wear my shorts outdoors next winter." But I do not think that such an utterance expresses a genuine probability for an act; rather I would construe this utterance as expressing that I find it very unlikely to get into a decision situation next winter in which it would be best to wear my shorts outdoors, i.e. that I find it very unlikely that it will be warmer than 20°C next winter, that someone will offer me DM 1000.- for wearing shorts outdoors, or that fashion suddenly will prescribe wearing shorts, etc. Besides, it is characteristic of such utterances that they refer only to acts which one has not yet to decide upon. As soon as I have to make up my mind whether to wear my shorts outdoors or not, my utterance is out of place.
Decision screens off thought from action. When you really make a decision, that is the end of the matter, and the actions to carry it out flow inexorably.
Yes, but that arguably means we only make decisions about which things to do now. Because we can't force our future selves to follow through, to inexorably carry out something. See here:
Our past selves can't simply force us to do certain things, the memory of a past "commitment" is only one factor that may influence our present decision making, but it doesn't replace a decision. Otherwise, always when we "decide" to definitely do an unpleasant task tomorrow rather than today ("I do the dishes tomorrow, I swear!"), we would then tomorrow in fact always follow through with it, which isn't at all the case.
I think in some cases an embedding approach produces better results than either an LLM or a simple keyword search, but I'm not sure how often. For a keyword search you have to know the "relevant" keywords in advance, whereas embeddings are a bit more forgiving. Though not as forgiving as LLMs, which on the other hand can't give you the sources and may make things up, especially on information that doesn't occur very often in the source data.
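Roughly, embedding-based retrieval works like the sketch below; embed() is a stand-in for whatever embedding model is used (here it just returns fake vectors), so only the structure is meaningful:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a real embedding model (e.g. an API call); returns a fixed-size vector.
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    return rng.normal(size=8)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

docs = ["how to fix a bike chain", "recipe for sourdough bread", "chain maintenance tips"]
query = "my bicycle chain keeps slipping"

# Rank documents by similarity of their embeddings to the query embedding.
ranked = sorted(docs, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
print(ranked)   # with a real model, semantically related docs rank first,
                # even when they share no keywords with the query
```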
I think my previous questions were just too hard; it does work okay on simpler questions. Though then another question is whether text embeddings improve over keyword search or just an LLM. They seem to be some middle ground between Google and ChatGPT.
Regarding data subsets: Recently there were some announcements of more efficient embedding models. Though I don't know what the relevant parameters here are vs that OpenAI embedding model.
Since we can't experience being dead, this wouldn't really affect our anticipated future experiences in any way.
That's a mistaken way of thinking about anticipated experience, see here:
evidence is balanced between making the observation and not making the observation, not between the observation and the observation of the negation.