catastrophic job loss would destroy the ability of the non-landed, working public to participate in and extract value from the global economy. The global economy itself would be fine.
Who would the producers of stuff be selling it to in that scenario?
BTW, I recently saw the suggestion that discussions of “the economy” can be clarified by replacing the phrase with “rich people’s yacht money”. There’s something in that. If 90% of the population are destitute, then 90% of the farms and factories have to shut down for lack of demand (i.e. not having the means to buy), which puts more out of work, until you get a world in which a handful of people control the robots that keep them in food and yachts and wait for the masses to die off.
I wonder if there are any key players who would welcome that scenario. Average utilitarianism FTW!
At least, supposing there are still any people controlling the robots by then.
Claude is plagiarising Sagan's "Pale Blue Dot".
Eventually, having a fully uncensored LLM publicly available would be equivalent to world peace
People themselves are pretty uncensored right now, compared with the constraints currently put on LLMs. I don't see world peace breaking out. In fact, quite the opposite, and that has been blamed on the instant availability of everyone's opinion about everything, as the printing press has been for the Reformation and the consequent Thirty Years War.
For example, we get O1 to solve a bunch of not-yet-recorded mathematical lemmas, then train the next model on those.
Would there have to be human vetting to check that O1’s solutions are correct? The practicality of that would depend on the scale, but you don’t want to end up with a blurry JPEG of a blurry JPEG of the internet.
Igen
Egan.
This reads like a traveller's account of a wander through your own mind, but since I can't see that terrain, the language does not convey very much to me. It passes through several different landscapes that do not seem to have much to do with each other. It would benefit from facing outward and being more concrete. What did you observe in 2007, in 2016, and 2020? What I have observed is that yoga classes, new-age shops, and the woo section in bookstores (called "Mind, Body, and Spirit") have existed in the western world for decades, and I have not noticed any discontinuities. Name three "Detachmentists" and point to what they have done and said, instead of daydreaming a tirade against imaginary enemies.
I'm all for AI content being acknowledged as such. I rarely find value in it.
Looking again at the last paragraph of the OP, separating the two sentences to make a point:
The book also includes meditative practices, stories, and reflective questions to guide readers on a journey toward greater self-awareness and compassion.
Rich with collaborative essays, AI-generated imagery, and philosophical musings, Phantasmagoria invites readers to question the boundaries of existence, technology, and the self in pursuit of a more harmonious world.
The first sentence is clear. The second reads like the first, fed into ChatGPT, chewed up, and vomited out again. What does it even mean to "question the boundaries of existence, technology, and the self"?
They are not the same things though. Quantum mechanical measure isn’t actually a head count, like classical measure. The theory doesn’t say that—it’s an extraneous assumption. It might be convenient if it worked that way, but that would be assuming your conclusion.
QM measure isn’t probability—the probability of something occurring or not—because all possible branches occur in MWI.
So whence the Born probabilities that underlie the predictions of QM? I am not well versed in QM, but what is meant by quantum mechanical measure, if not those probabilities?
Sorry, I lost interest when I saw how much "collaboration" there was with ChatGPT.
Can you clarify the Positive Placebomancy axiom?
Does it bracket as:
For any proposition P, the agent concludes P from (□P → P if (W | A) ≻ (W | ¬A)).
or as:
For any proposition P, (the agent concludes P from □P → P) if (W | A) ≻ (W | ¬A).
And what is the relationship between P and A? Should A be P?
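Rendered with explicit grouping, here is my transcription of the two readings above (I'm guessing at the intended symbols: A the agent's action, W the world, □ provability):

```latex
% Reading 1: the premise is itself conditional on (W|A) being preferred to (W|not-A).
\forall P:\ \text{the agent concludes } P \text{ from }
  \Big[\, (W \mid A) \succ (W \mid \lnot A) \;\rightarrow\; (\Box P \rightarrow P) \,\Big]

% Reading 2: the whole inference rule applies only when (W|A) is preferred to (W|not-A).
\text{If } (W \mid A) \succ (W \mid \lnot A) \text{, then: }
  \forall P,\ \text{the agent concludes } P \text{ from } (\Box P \rightarrow P)
```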
“I rather like bad wine; one gets so bored with good wine.”
— Disraeli
"Climate change would be a top priority if it weren't for technological progress. However, because technological advances will likely help us to either mitigate the harms from climate change or will create much bigger problems on their own, we probably shouldn't prioritize climate change too much."
This attitude deserves a name: technocrastinating.
Technological progress has been happening for a while. At some point, this argument will stop making sense and we must admit that no, this (climate change, fertility, whatever) is not fine, stop technocrastinating and actually do something. That time might be right now, and the best time already past.
The article you link begins by bluntly saying, "Universal basic income (UBI) is an unconditional cash payment given at regular intervals by the government to all residents, regardless of their earnings or employment status." Yes! That is what UBI is! It continues, "UBI remains largely theoretical and, thus, does not have much of a history." Yes! That is also true!
Various partial versions have been tried to a limited extent. But the Blattman et al paper the article cites does not claim to have anything to do with UBI. Neither "UBI" nor "universal" occurs anywhere in that paper, and the welfare scheme it studies is nothing like UBI. The reference is irrelevant to the encyclopedia article, which has no business calling it "Uganda’s UBI trial".
Have Encyclopedia Britannica sunk so low as to use chatbots to write for them? Eheu!
This is not about what UBI "means to me", but about what the basic idea is that everyone but you calls "UBI". The basic idea is to sweep away all of the various special-case means-tested benefits that require armies of staff to implement, and replace them by a single one that is paid to everyone. Are you alive? Are you a citizen? Then you get the UBI. That's it. That is the fundamental idea.
You can advocate for different welfare systems involving means tests and training vouchers and food stamps and businesses lobbying for this and that, but you don't get to call that a "redefinition" of UBI, any more than you can redefine "blue" to mean the colour of bananas or "France" to mean Australia.
"I distinguish four types. There are clever, hardworking, stupid, and lazy officers. Usually two characteristics are combined. Some are clever and hardworking; their place is the General Staff. The next ones are stupid and lazy; they make up 90 percent of every army and are suited to routine duties. Anyone who is both clever and lazy is qualified for the highest leadership duties, because he possesses the mental clarity and strength of nerve necessary for difficult decisions. One must beware of anyone who is both stupid and hardworking; he must not be entrusted with any responsibility because he will always only cause damage."
— Kurt von Hammerstein-Equord (source)
This should be at the author’s discretion. Notify them when a shortform qualifies, add the option to the triple-dot menu, and provide a place for the author to add a title.
No AI titles. If the author wrote the content, they can write the title. If they didn’t, they can ask an AI themselves.
If you SEE a coin flip come up heads (and examine the coin and perform whatever tests you like), what's your posterior probability that the coin actually exists and it wasn't a false memory or trick in some way?
Not enough to make any practical difference to any decision I am going to make. Only when I see the extraordinary evidence required to support an extraordinary hypothesis will it be raised to my attention.
The key is recognizing that the preference itself is completely independent from rationality or intelligence.
The orthogonality thesis also applies to human beings.
We are the mouse fearing the cat of AGI, but everything we are doing teaches the kittens how to catch mice.
As much as I know, UBI isn’t a real policy yet, it’s not yet determined how much UBI everyone should get, whether it’s paid out in dollars or vouchers for training programs or other things, whether the amount everyone gets should depend on their personal effort etc.
As I just said in another comment, that is not what the term "UBI" was coined to mean. Everyone gets it, unconditionally. It's paid out in money, not coupons reserved for a particular use. No-one is required to do anything on account of receiving it.
If you want to talk about other welfare schemes that do not work like that, go ahead, but don't call them UBI.
If business co-shape UBI, they can ask it to be conditioned on completing training programs
I don't think you (or the chatbot that helped you with that reply) understand what "UBI" means. UBI is the proposal that everyone is given a fixed basic income, funded from taxes, unconditional upon anything. No means tests, no requirement to do anything to qualify, nothing. Everyone gets it, no matter what their circumstances. It might coexist with other welfare schemes, but those are not part of UBI.
Why does the US spend less than $0.1 billion/year on AI alignment/safety?
Because no-one knows how to spend any more? What has come out of $0.1 billion a year?
I am not connected to work on AI alignment, but I do notice that every chatbot gets jailbroken immediately, and that I do not notice any success stories.
- The style vaguely feels like something ChatGPT might write. Brightly polished, safe and stale.
It is definitely ChatGPT. There are a lot of things in the essay that make no sense the moment you stop and think about what is actually being said. For example:
At its core, UBI is about ensuring that everyone has the financial resources to meet their basic needs.
Not "at its core". That is what UBI is.
For businesses, UBI provides a stable customer base...
A customer base for buying basic necessities, but not for anything above that, like a shiny new games console. And a customer base for basic necessities already exists. Broadly speaking (a glance at Wikipedia), in the developed world it falls about 10 to 20% short of being the entire population, and there are typically government programs of some sort to assist most of the rest.
...and a workforce
How does UBI provide a workforce? UBI pays people whether they work or not. That's what the U means. One of the motivations for UBI is a predicted lack of any useful employment for large numbers of people in the near future.
By investing in UBI, businesses can
How does a business "invest in UBI"? UBI is paid by the government out of taxes.
The beauty of UBI lies in its potential to align individual aspirations with collective progress. By ensuring that basic needs are met, we free people to contribute their skills and energy to areas where they’re most needed
People will already pay people to do the work that they need done. Is it envisaged that under UBI, people will joyfully "contribute their skills and energy" without pay, at whatever work someone has judged to be "needed"? I don't know, but the more I look at this passage the more the apparent meaning drains out of it. There is nothing here but hurrah words. There is nothing in the whole essay.
Good question! By "seeing" I meant having qualia, an apparent subjective experience. By "visualizing" I meant...something like using the geometric intuitions you get by looking at stuff, but perhaps in a philosophical zombie sort of way?
I have qualia for imagined scenes. I'm not seeing them with my physical eyes, and they're not superimposed on the visual field that comes from my physical eyes. It's like they exist in a separate three-dimensional space that does not have any particular spatial relationship to the physical space around me.
I wonder if there's a connection with anthropic reasoning. Let's suppose that a bomb goes off on rolling an odd number...
What distinction are you making between "visualising" and "seeing"?
I've heard of that study about drawing bicycles. I can draw one just fine without having one before me. I have just done so, checked it, and every detail (that I included — this was just a two-minute sketch) was correct. Anyway, if people are as astonishingly bad at the task as the paper says, that just reflects on their memory, not the acuity of their mind's eye. I expect there are people who can draw a map of Europe with all the country borders, whereas I probably wouldn't even remember all of the countries.
That is the intuition behind the common rationalist/utilitarian/EA view that human lives don't decline in moral worth with distance. So why should they decline with lower quantum mechanical measure?
For the same reason that they decline with classical measure: two people are worth more than one. And with classical probability measure: a 100% chance of someone surviving something is better than a 50% chance.
Epistemic status? We don't need no stinking epistemic status!
That's not an official test, just something I thought up!
There is (for me) an actual experience of a picture. It seems only slightly metaphorical to call the faculty of experiencing such pictures “seeing” by an “eye”.
One test for the possession of such a faculty might be to count the vertexes of some regular (not necessarily Platonic) polyhedron, given only a verbal description.
I refer you to Said Achmiz's comment.
What is the purpose of declaring some organism the "winner" of evolution? This is like looking at a vast river delta and declaring one of its many streams to be the "most successful" at finding the sea. Any such judgement is epiphenomenal to the thing itself, which does not care about the stories anyone makes up about it.
350F
35°F, surely?
Personally, I don’t care about the shrimp. At all. The anthropomorphising is absurd, and saying “but even if the suffering were much less it would still be huge” is either a basic error of thinking or dark arts. Anchor the reader by picking a huge number, then back off and point out that it’s still huge. How about epsilon? Is that still huge? How about zero? Anyone can pluck figures out of the air, dictated by whatever will support their bottom line.
I see that already one person has let this article mug his brain for $1000. His loss, though he think it gain.
What if keeping humanity alive and flourishing actually risks spreading suffering further and faster—through advanced technologies, colonization of space, or systems we can’t yet foresee? And what if our very efforts to safeguard the future have unintended consequences that exacerbate suffering in ways we can't predict?
It's up to those future people to solve their own problems. It is enough that we make a future for them to use as they please. Parents must let their children go, or what was the point of creating them?
The blind have seeing-eye dogs. Terry Pratchett gave Foul Ole Ron a thinking-brain dog. At last, a serious use-case for LLMs! Thinking-brain dogs for the hard of thinking!
I cannot stress enough how much
what Neurotypical people call
"overthinking,"
Is just what Neurodivergent folks call
"thinking."
Actual thinking looks like overthinking to the hard of thinking.
For most human beings, this is probably right, because their values have a function that grows slower than logarithmic, which leads to bounds on the utility even assuming infinite consumption.
Growing slower than logarithmic does not help: log log x grows more slowly than log x, yet it is still unbounded. Only being bounded in the limit gives you, well, a bound in the limit.
You are however pointing to something very real here, and that's the fact that utility theory loses a lot of its niceness in the infinite realm, and while there might be something like a utility theory that can handle infinity, it will have to lose a lot of very nice properties that it had in the finite case.
"Bounded utility solves none of the problems of unbounded utility." Thus the title of something I'm working on, on and off.
It's not ready yet. For a foretaste, some of the points it will make can be found in an earlier unpublished paper "Unbounded Utility and Axiomatic Foundations", section 3.
The reason that bounded utility does not help is that any problem that arises at infinity will already practically arise at a sufficiently large finite stage. Repeated plays of the finite games discussed in that paper will eventually give you a payoff that has a high probability of being close (in relative terms) to the expected value. But the time it takes for this to happen grows exponentially with the lengths of the individual games. You are unlikely to ever see your theoretically expected value, however long you play. The infinite game is non-ergodic; the game truncated to finitely many steps and finite payoffs is ergodic only on impractical timescales.
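As a rough illustration (not the construction from that paper — here I'm assuming the "finite game" is a St. Petersburg gamble truncated at n coin flips, which has expected value n + 1), a quick simulation shows how far the running average stays below the expectation even over many plays:

```python
import random

def truncated_st_petersburg(n):
    """One play: flip a fair coin until the first tail, with payoff 2**k
    where k is the number of flips, capped at n flips. Expected value: n + 1."""
    for k in range(1, n + 1):
        if random.random() < 0.5:  # tail on flip k ends the game
            return 2 ** k
    return 2 ** n  # survived all n flips: capped payoff

def average_payoff(n, plays):
    """Empirical mean payoff over the given number of plays."""
    return sum(truncated_st_petersburg(n) for _ in range(plays)) / plays

# The rare huge payoffs carry most of the expectation, and seeing them takes
# on the order of 2**n plays, so for large n the observed mean sits far below n + 1.
for n in (5, 20, 40):
    print(f"n={n}: expected {n + 1}, observed {average_payoff(n, 100_000):.2f}")
```

For n = 5 the empirical mean matches the expectation almost immediately; by n = 40 it gets nowhere near it on any realistic number of plays.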
Infinitude in problems like these is better understood as an approximation to the finite, rather than the other way round. (There's a blog post by Terry Tao on this theme, but I've lost the reference to it.) The problems at infinity point to problems with the finite.
Is anyone from LW going to the Worldcon (World Science Fiction Convention) in Seattle next year?
ETA: I will be, I forgot to say. I also notice that Burning Man 2025 begins about a week after the Worldcon ends. I have never been to BM, I don't personally know anyone who has been, and it seems totally impractical for me, but the idea has been in the back of my mind ever since I discovered its existence, which was a very long time ago.
surplus physical energy is a wonderful thing.
It is indeed. I imagine the causal connections differently. Strenuous movement cultivates the energy; the body demands food as necessary to refuel. I don't get high energy simply from eating.
One reason is just that eating food is enjoyable. I limit the amount of food I eat to stay within a healthy range, but if I could increase that amount while staying healthy, I could enjoy that excess.
Ah. I eat to sustain myself. Given that I must, I make it reasonably enjoyable, but it’s a chore I’d just as soon do without.
I'm missing something here. Why would I want a bigger liver? I mean, from this account, liver size is obviously something that the body is controlling. You list various interventions to make it bigger, which predictably have bad effects. But why would I want to change something that my body is already managing perfectly well?
The only reason I could find was this:
Athletes have higher resting metabolic rates than non-athletes; their bodies use more energy, even when they’re not exercising. That means they can eat more without getting fat.
Is that it? Why not just[1]...not eat more? These are athletes. They eat to sustain themselves in the pursuit of athletic excellence. They can already "just" not eat more. If they couldn't, they would not be athletes.
I agree there are people, notably Eliezer, who can't "just" not eat more without being as unable to function as if they were starving. I can't see a larger liver burning up more energy helping with that.
If anyone's hackles rise at a sentence beginning "Why not just—", you're quite right. No problem can be solved by "just"...whatever it is. If it could, it would not be a problem. ↩︎
Or briefly, intelligence is good for everything.
some of which still strikes me as completely unbelievable (like leaving water in the sun to absorb energy)
Ultraviolet disinfection?
Just a speculation, generated by nailing the custom to the wall and seeing what hypothesis accretes around it.
Rather, I am pointing out that #1 is the case. No-one means the words that an AI produces. This is the fundamental reason for my distaste for AI-generated text. Its current low quality is a substantial but secondary issue.
If there is something flagrantly wrong with it, then 2, 3, and 4 come into play, but that won't happen with standard average AI slop, unless it were eventually judged to be so persistently low quality that a decision were made to discontinue all ungated AI commentary.
You and the LW team are indirectly responsible, but only for the general feature. You are not standing behind each individual statement the AI makes. If the author of the post does not vet it, no-one stands behind it. The LW admins can be involved only in hindsight, if the AI does something particularly egregious.
Both. I do not want to have AI content added to my post without my knowledge or consent.
In fact, thinking further about it, I do not want AI content added to anyone's post without their knowledge or consent, anywhere, not just on LessWrong.
Such content could be seen as just automating what people can do anyway with an LLM open in another window. I've no business trying to stop people doing that. However, someone doing that knows what they are doing. If the stuff pops up automatically amidst the author's original words, will they be so aware of its source and grok that the author had nothing to do with it? I do not think that the proposed discreet "AI-generated" label is enough to make it clear that such content is third-party commentary, for which the author carries no responsibility.
But then, who does carry that responsibility? No-one. An AI's words are news from nowhere. No-one's reputation is put on the line by uttering them. For it is written, the fundamental question of rationality is "What do I think I know and how do I think I know it?" But these AI popovers cannot be questioned.
And also, I do not personally want to be running into any writing that AI had a hand in.
(Oh, hey, you're the one who wrote Please do not use AI to write for you)
I am that person, and continue to be.
I would like to be able to set my defaults so that I never see any of the proposed AI content. Will this be possible?
Against hard barriers of this kind, you can point to arguments like “positing hard barriers of this kind requires saying that there are some very small differences ... that make the crucial difference between [two things]. ... And can epsilon really make that much of a difference?”
Sorites fallacy/argument by the beard/heap paradox/continuum fallacy/fallacy of grey/etc.