What does "unbounded" mean?
Without bound, as in, without there existing some specific bound you will never surpass.
What does it mean to "retrocausally compress 'self'"?
Make yourself easier for your past self to index on. E.g., for an evil version: if Horde Prime wants Horde clones to work to benefit Horde Prime, Horde Prime can work to place himself in the center of the universe, which he previously programmed the other Hordes to care about.
Are you postulating that: [...]
I'm saying something closest to #3. In order to specify an individual, you have to be able to point at them in some way.
So what does "is magic real?", as a child might ask the question, correspond to?
https://www.lesswrong.com/tag/timeless-decision-theory
https://www.lesswrong.com/tag/functional-decision-theory
Veganism?
Why does evil exist?
Do these things feel logically impossible per se?
Yes, same qualia as looking at an Escher staircase IRL but feels more fundamental.
Do they feel impossible because they contradict other things that you believe are true?
No. I can't break down why they feel impossible.
Do you draw the conclusion that the impossible-seeming things genuinely cannot exist or (in the case of self-perception?) genuinely do not exist, despite appearances?
Kinda but I can't maintain that because milliseconds later I perceive the "impossible" qualia again.
Free will used to feel impossible but now that I understand free will as related to e.g. the 5-and-10 problem it's more... manipulable somehow.
standard model of PA = Schelling model of PA?
5. A second machine, designed solely to neutralize an evil super-intelligent machine will win every time, if given similar amounts of computing resources (because specialized machines always beat general ones).
This implies you have some resource you didn't fully imbue to the first AI, that you still have available to imbue to the second. What is that resource?
I feel a sense of impossibility that "anything could exist at all".
I feel a sense of impossibility when I contemplate the recursive nature of perceiving myself perceiving thoughts.
I feel a sense of impossibility about something unspeakable that comes before and is outside anything else.
How are we sure we mean the same thing by the word consciousness though? All I can tell for sure is that people think consciousness is "impossible" (because they try to invent quantum phlogiston to explain it), and something about consciousness engendering moral worth.
I don't think I'd have less moral worth if I couldn't recognize myself in a mirror. I still have moral worth when I'm not introspecting.
I get the sense of impossibility but I get it about lots of things, e.g. Greg Egan's dust from permutation city.
I feel like maybe I'm missing something other people are experiencing here, but maybe not.
spectrum of qualia of rapid muscle movement:
1. wiggle back and forth with no delay
2. effort required to make each individual movement, one at a time
some actions are only sometimes available to me in form #2 e.g. glossolalia, patterns of rapid eye movement
sometimes it seems like a matter of training e.g. learning to wiggle my ears
I've been experimenting a bit with using vimwiki: https://github.com/vimwiki/vimwiki
This topic (affordance/encoding) is one of the universal entry points to systemization of fully general agency.
Submission: low bandwidth oracle, ask:
IFF I'm going to die with P>80% in the next 10 years, while >80% (modulo the natural death rate) of the rest of humanity survives for at least 5 more years, then: was what killed me in the reference class:
- disease
- mechanical/gross-physical accident
- murder
- other
Repeat to drill down and know the most important hedges for personal survival.
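The drill-down above could be sketched as a loop over progressively finer reference classes. This is a minimal sketch under loose assumptions: `ask_oracle` is a hypothetical interface to the low-bandwidth oracle (it takes a list of candidate classes and returns the index of the most likely one, or `None` if the antecedent fails), and the sub-classes are purely illustrative, not part of the original submission.

```python
# Hypothetical sub-classes for each top-level answer; illustrative only.
SUBCLASSES = {
    "disease": ["infectious", "cancer", "cardiovascular", "other"],
    "mechanical/gross-physical accident": ["vehicle", "fall", "other"],
    "murder": ["targeted", "random", "other"],
    "other": [],
}

def drill_down(ask_oracle):
    """Repeatedly ask the low-bandwidth oracle, narrowing the
    reference class of death until no finer split is available.

    `ask_oracle(classes)` is assumed to answer the conditional
    question from the submission and return an index into
    `classes`, or None if the antecedent (early personal death
    while humanity survives) does not hold.
    """
    classes = list(SUBCLASSES)
    path = []
    while classes:
        idx = ask_oracle(classes)
        if idx is None:  # antecedent false: no early death predicted
            return path
        path.append(classes[idx])
        # Descend to the chosen class's sub-classes, if any.
        classes = SUBCLASSES.get(classes[idx], [])
    return path
```

The point of the loop is that each round costs only a few bits of oracle bandwidth (a choice among a handful of classes), while the accumulated path identifies the most important hedges for personal survival.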
The "rest of humanity survives" condition reduces the chance the question becomes entangled with the eschaton.
i.e. I'm pointing out that asking the oracle questions relevant to selfish utility functions is less dangerous, personally or humanity-existentially, in cases where concerns are forced to be local (in this case, forced-local because you died before the eschaton). However, the answers still might be dangerous to people near you.
i.e. Selfish deals with the devil might not destroy the world if they're banal in the grand scheme of things.
Decided to upload source to github now that I know arbital's license: https://github.com/emma-borhanian/arbital-scrape
Licensed under MIT and Unlicense. Updated the drive/mega links.
Thanks for hosting, added link to post.
Please do not re-download the pages from arbital.com without good reason. I've added a single line of code to disable this. This is why I'm not uploading the source code to github, but did include it in the zip file you can download.
Running the code as-is will simply regenerate the HTML using the already-downloaded raw json.
Edit: This is being downvoted. I'm happy to reevaluate this and upload to github instead of merely including the source in the zip file. Please comment if this is what you wish.
I'll wait to open source mine to see if yours is better then :)
Simulacra as free-floating Schelling points could actually be good if they represent mathematical truth about coordination between agents within a reference class, intended to create better outcomes in the world?
But if a simulacrum corresponds to truth because people conform their future behavior to its meaning in the spirit of cooperation does it still count as a simulacrum?
It feels like you're trying to implicitly import all of good intent, in its full potential, stuff it into the word "truth", and claim it's incompatible with the use of Schelling points via the distortions:
- the idea that the symbol had an original meaning and any change involving voluntary conformance to the new meaning would inherently be malicious
- using an example (job title) which is already a simulacrum, but initially used cooperatively
- assuming that people lagging in stage 1-3 would be exploited/arbitraged by people in stage 4
- cooperative simulacra (e.g. maps) are less contentious and so are not salient examples of the word
In other words I think you're assuming:
good intent = truth = in-principle CDT-verifiable truth (fair)
Do you ever get the feeling that you're unsure what was true until the moment you said it? Like on the inside you're this highly contextual malleable thing but when you act it resolves and then you become consistent with something for a time?
Do you ever feel like you're writing checks you can't quite cash, running ahead, saying as true what you plan to *make* true, what becomes true in the saying it. Do you ever experience imposter syndrome?
Do you ever feel like we're all playing a game of pretend and nobody can quite step out of character?
> From the inside, this is an experience that in-the-moment is enjoyable/satisfying/juicy/fun/rewarding/attractive to you/thrilling/etc etc.
people’s preferences change in different contexts, since they are implicitly always trying to comply with what they think is permissible/safe before trying to get it, up to some level of stake outweighing this, along many different axes of things one can have a stake in
to see people’s intrinsic preferences we have to consider that people often aren’t getting what they want and are tricked into wanting suboptimal things wrt some of their long-suppressed wants, because of social itself
this has to be really rigorous because it’s competing against anti-inductive memes
this is really important to model, because if we know anything about people’s terminal preferences modulo social, then we know we are confused about social anytime we can’t explain why they aren’t pursuing opportunities they should know about, or anytime they are internally conflicted even though they know all the consequences of their actions relative to their real, ideal-to-them terminal preferences
> Social sort of exists here, but only in the form that if an agent can give something you want, such as snuggles, then you want that interaction.
is it social if a human wants another human to be smiling because perception of smiles is good?