Posts

What epsilon do you subtract from "certainty" in your own probability estimates? 2024-11-26T19:13:46.795Z
Should LW suggest standard metaprompts? 2024-08-21T16:41:07.757Z
What causes a decision theory to be used? 2023-09-25T16:33:36.161Z
Adversarial (SEO) GPT training data? 2023-03-21T18:55:01.330Z
{M|Im|Am}oral Mazes - any large-scale counterexamples? 2023-01-03T16:43:37.682Z
Does a LLM have a utility function? 2022-12-09T17:19:45.936Z
Is there a worked example of Georgian taxes? 2022-06-16T14:07:27.795Z
Believable near-term AI disaster 2022-04-07T18:20:16.843Z
Laurie Anderson talks 2021-12-18T20:47:01.484Z
For moderately well-resourced people in the US, how quickly will things go from "uncomfortable" to "too late to exit"? 2020-06-12T16:59:56.845Z
Money isn't real. When you donate money to a charity, how does it actually help? 2020-02-02T17:03:04.426Z
Dagon's Shortform 2019-07-31T18:21:43.072Z
Did the recent blackmail discussion change your beliefs? 2019-03-24T16:06:52.811Z

Comments

Comment by Dagon on Non-Obvious Benefits of Insurance · 2024-12-23T15:29:36.516Z · LW · GW

Thanks for this!  It applies to a lot of different kinds of insurance.  Car insurance, for instance, isn't financially great (except liability umbrella in many cases), but having the insurance company set standards and negotiate with the other driver (or THEIR insurance company) is much simpler than having to do it yourself, potentially in court.

For some kinds of insurance, there are also tax-treatment advantages.  Because it's usually framed as responsible risk reduction (and because insurance bundles some lobbying into the fees), premiums are sometimes untaxed, and payouts are almost always untaxed.  This part only affects the financial considerations, but it can be significant.

Comment by Dagon on Habryka's Shortform Feed · 2024-12-21T00:11:39.208Z · LW · GW

This is going the wrong direction.  If privacy from admins is important (I argue that it's not for LW messages, but that's a separate discussion), then breaches of privacy should be exceptions made for specific purposes, not the default for anything that isn't "really secret contents".

Don't make this filter-in for privacy.  Make it filter-out - if it's detected as likely-spam, THEN take more intrusive measures.  Privacy-preserving measures include quarantining, asking a few recipients if they consider it harmful before delivering (or not) the rest, automated content filters, etc.  This infrastructure requires a fair bit of data-handling work to get right, and a mitigation process where a sender can find out they're blocked and explicitly ask the moderator(s) to allow it.
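
A minimal sketch of that filter-out flow.  Every name here is a hypothetical placeholder, not LessWrong's actual pipeline:

```python
# Filter-out: deliver by default, inspect only what a classifier flags.
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    recipients: list
    body: str

def spam_score(msg: Message) -> float:
    """Stand-in for an automated content/similarity filter."""
    return 0.0  # a real classifier would score the content here

def deliver(msg: Message, recipients: list) -> None:
    print(f"delivering {msg.sender}'s message to {recipients}")

def handle_dm(msg: Message, threshold: float = 0.8, sample: int = 3) -> None:
    if spam_score(msg) < threshold:
        # Default path: deliver untouched; nobody inspects the content.
        deliver(msg, msg.recipients)
        return
    # Likely-spam only: quarantine, deliver to a small sample of
    # recipients with an "is this harmful?" prompt, and hold the rest.
    deliver(msg, msg.recipients[:sample])
    # ...if the sample flags it, block the remainder and tell the sender,
    # who can then ask the moderators to review and allow it.
```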

Comment by Dagon on What I expected from this site: A LessWrong review · 2024-12-20T22:07:32.789Z · LW · GW

“I think contributing considerations is usually much more valuable than making predictions.”

I think he's absolutely right.  Seeing predictions of top predictors should absolutely be a feature of forecasting sites.  I think the crossover with more conceptual and descriptive posts on LessWrong is pretty minimal.

Comment by Dagon on Habryka's Shortform Feed · 2024-12-20T20:40:14.803Z · LW · GW

I have no expectation of strong privacy on the site. I do expect politeness in not publishing or using my DM or other content, but that line is fuzzy and monitoring for spam (not just metadata; content and similarity-of-content) is absolutely something I want from the site.

For something actually private, I might use DMs to establish a mechanism. Feel free to look at that.

If you -do- intend to provide real privacy, you should formalize the criteria, and put up a canary page that says you have not been asked to reveal any data under a sealed order.

edit to add: I am relatively paranoid about privacy, and also quite technically-savvy in implementation of such. I'd FAR rather the site just plainly say "there is no expectation of privacy, act accordingly" than try to set expectations otherwise and then have to move the line later. Your Terms of Service are clear, and make no distinction for User Generated Content between posts, comments, and DMs.

Comment by Dagon on What Goes Without Saying · 2024-12-20T18:40:12.793Z · LW · GW

Thank you for saying this!  I very much like that you're acknowledging tensions and that unhelpful attitudes include BOTH "too much" and "too little" worry about each topic.  

I'd also like to remind everyone (including myself; I often forget this) about typical mind fallacy and the enormous variety in human agency, and peoples' very different modeling and tolerance of various social, relationship, and financial risks.

“if you’re in a dysfunctional organization where everything is about private fiefdoms instead of getting things done…why not…leave?”

This is a great example!  A whole lot of people, the vast majority that I've talked to, can easily answer this - "because they pay me and I'm not sure anyone else will", with a bit of "I know this mediocracy well, and the effort to learn a new one only to find it's not better will drain what little energy I have left".  It's truly exceptional to have the self-confidence to say "yeah, maybe it won't work, but I can deal if so, and it's possible I can do much better".

It's very legitimate to see problems and STILL not be confident that a different set of problems would be better for you or for your impact on the world.  The companies that seem great from outside are often either 1) impossible to get hired at for most people; and/or 2) not actually that great, if you know actual employees inside them.

The question of "how can I personally do better on these dimensions", however, is one that everyone can and should ask themselves.  It's just that the answer will be idiosyncratic and specific to the individual's situation and self-beliefs.

Comment by Dagon on TheManxLoiner's Shortform · 2024-12-20T16:37:26.178Z · LW · GW

I vote no.  An option for READERS to hide the names of posters/commenters might be nice, but an option to post something that you're unwilling to have a name on (not even your real name, just a tag with some history and karma) does not improve things.

Comment by Dagon on Are we a different person each time? A simple argument for the impermanence of our identity · 2024-12-18T18:53:20.598Z · LW · GW

Identity is a modeling choice.  There's no such thing in physics, as far as anyone can tell.  All models are wrong, some models are useful.  Continuity of identity is very useful for a whole lot of behavioral and social choices, and I'd recommend using it almost always.

As a thought experiment in favor of presentism being conceivable and logically consistent with everything you know, see Boltzmann brain - Wikipedia.

I think that counter-argument is pretty weak.  It seems to rely on "exist" being something different than we normally mean, and tries to mix up tenses in a confusing way.

  • (1) If a proposition is true, then it exists.

Ehn, ok, but for a pretty liberal and useless use of the word "exists".  If presentism is true, then "exists" could easily mean "exists in memory, there may be no reality behind it".

  • (2) <Socrates was wise> is true.

Debatable, and not today's argument, but you'd have to show WHY it's true, which might include questions of what other currently-nonexistent things can be said to have "been wise".

  • (3) <Socrates was wise> exists. (1, 2)

The proposition exists, yes.

  • (4) If a proposition exists and has constituents, then its constituents exist.
  • (5) Socrates is a constituent of <Socrates was wise>.
  • (6) Socrates exists. (3, 4, 5)

Bait and switch.  The constituent of <Socrates was wise> is either <Socrates>, the thing that can be part of a proposition, or "Socrates was", the existence of memory of Socrates.

  • (7) If Socrates exists, then presentism is false.

Complete non-sequitur.  Both the proposition-referent and the memory of Socrates can exist under presentism.

  • (8) Presentism is false. (6, 7)

Nope.

Comment by Dagon on Everything you care about is in the map · 2024-12-18T18:35:25.633Z · LW · GW

you can only care about what you fully understand

I think I need an operational definition of "care about" to process this.  Presumably, you can care about anything you can imagine, whether you perceive it or not, whether it exists or not, whether it corresponds to other maps or not.  Caring about something does not make it territory.  It's just another map.

Embedded agents are in the territory.

Kind of.  Identification of agency is map, not territory.  Processing within an agent happens (presumably) in a territory, but the higher-level modeling and output of that processing is purely about maps.  The agent is a subset of the territory, but doesn't have access at the agent level to the territory.

Comment by Dagon on Everything you care about is in the map · 2024-12-17T19:13:28.206Z · LW · GW

Agreed - we (and more generally, embedded agents) have no access to territory.  It's all maps, even our experiences are filtered through interpretation.  Territory is inferred as the thing that makes different maps (at different levels of abstraction or different parts of the universe) consistent with each other and across time. 

That said, some maps are very detailed, repeatable, and can support a lot of other maps.  I tend to think of those as "closer to the territory".  In colloquial discussion and informal thinking, I don't think there's much harm in pretending that the actual territory is the same as the fine-grained maps.  Not technically true - there are always more levels of maps, which only asymptotically approach the territory.  But close enough for a lot of things.

Comment by Dagon on A practical guide to tiling the universe with hedonium · 2024-12-17T06:52:50.185Z · LW · GW

A few missing points, which may break the whole plan:

  1. For a long, long time, it's probably better to increase hedons experienced, then the absolute amount of hedonium; only after there's a whole lot of it do we worry about density.
  2. Your description of neural hedonium gives no reason to believe it only or primarily experiences POSITIVE hedons.  Your proposal risks creating and maximizing sufferonium.  In fact, this is probably a bigger unsolved problem than identification of substrate.  How do you make something that actually maximizes what you want?
  3. It's a bit suspect to "play it safe" on the hard problem by using artificial neural material without playing it even safer with the already-solved manufacturing route of just making lots of organic brains.

Comment by Dagon on Nathan Young's Shortform · 2024-12-16T18:43:19.302Z · LW · GW

Probably not for me.  I had a few projects using AWS IoT buttons (no display, but arbitrary code run on click, double-click, or long-click of a small battery-powered wifi button), but the value wasn't really there, and I presume adding a display wouldn't be enough to justify the counter space.  Amusingly, it turns out the AWS version was EOL'd today - see "Learn about AWS IoT legacy services - AWS IoT Core".

Comment by Dagon on Remap your caps lock key · 2024-12-15T15:53:54.616Z · LW · GW

Go further with this.  Don't map it to a single key; use it as a modifier so other keys gain more functionality.  Using CL+hjkl as arrow keys is great for vi users, CL+WASD for gamers - either way you don't need to move your fingers as much.

I use and recommend https://ultimatehackingkeyboard.com/, which defaults to putting the mod key in place of caps lock (it's fully programmable, so you can do what you like).  On Windows, https://www.autohotkey.com/ is extremely flexible.
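
If you go the AutoHotkey route, here's a minimal sketch of a CapsLock+hjkl arrow layer (AutoHotkey v1 syntax; treat it as a starting point, not a tested config):

```ahk
; CapsLock becomes a modifier: hold it and tap hjkl for arrows.
SetCapsLockState, AlwaysOff  ; keep the caps state itself from toggling
CapsLock & h::Send, {Left}
CapsLock & j::Send, {Down}
CapsLock & k::Send, {Up}
CapsLock & l::Send, {Right}
```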

Comment by Dagon on Purplehermann's Shortform · 2024-12-15T00:29:04.013Z · LW · GW

Well, no.  The easy part of a university undergrad education is pretty cheap (to provide) already.  It's bundled with a lot of things that make it expensive to buy traditionally.

Comment by Dagon on WannabeChthonic's Shortform · 2024-12-13T21:43:33.776Z · LW · GW

GPT 4o prompt: please translate the following into German:
There aren't many (in fact, none that I know of) crisp legible rules on the topic.  If you don't provide a translation, I expect you'll fail to get much engagement, and perhaps see things downvoted more than they otherwise would be.  This is true of topics and content as well - there aren't a lot of rules, but there is a lot of opinion and reaction to non-standard topics or approaches.  The best thing is usually to read quite a bit, and if what you have to say seems to fit well, try it out with a few small-ish posts and see what feedback you get.

Note that LLMs are VERY good at translation of this sort of thing.  Posting in German with English translation might work just fine.  Or it might not - it depends more on the content than anything else, but quality of presentation isn't completely irrelevant.

evaluation:
Es gibt nicht viele (tatsächlich kenne ich überhaupt keine) klaren, gut lesbaren Regeln zu diesem Thema. Wenn du keine Übersetzung bereitstellst, erwarte ich, dass du nur wenig Resonanz erhalten wirst und möglicherweise mehr Downvotes sehen wirst, als es sonst der Fall wäre. Dasselbe gilt für Themen und Inhalte: Es gibt nicht viele Regeln, aber es gibt viele Meinungen und Reaktionen auf unkonventionelle Themen oder Ansätze. Am besten liest du zunächst einiges, und wenn das, was du sagen möchtest, gut passt, probiere es mit ein paar kleineren Beiträgen aus und schau, welches Feedback du bekommst.

Beachte, dass LLMs SEHR gut darin sind, so etwas zu übersetzen. Beiträge auf Deutsch zusammen mit einer englischen Übersetzung könnten durchaus funktionieren. Oder auch nicht – das hängt mehr vom Inhalt ab als von allem anderen, aber auch die Qualität der Präsentation ist nicht völlig irrelevant.

Comment by Dagon on [deleted post] 2024-12-13T19:26:51.444Z

I don't understand how the offer gets rescinded after "accept".  Is this just a game of "player says higher or lower about her threshold until banker and player agree to flip or pay"?  Is there an assumption that it's not anchored or path-dependent (that is, if the player would accept 100k, but after the banker offers 500k, rescinds it, and offers 110k, she rejects it)?

In real-world games, I always wonder how much arbitrage goes on.  In the big-buyin poker and sports-betting world, it's common for players to lay off some of their action, and if someone I knew was about to play this game, I'd absolutely offer to guarantee some amount if she refused offers below a higher amount and split the difference with me.  For instance, I'd guarantee $500K if she rejects offers below $990K and gives me 90% of any payout over $500k.  I'd base this on a risk pool of people who buy shares, so none of us face the full $500K risk.  
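
For concreteness, here's the payoff arithmetic of that deal.  The 50/50 flip for $1M, the 20-member pool, and the banker staying below $990K are illustrative assumptions, not details from the actual game:

```python
# Payoffs of the laid-off-action deal sketched above.  The prize, the
# flip odds, and the pool size are assumed for illustration only.
PRIZE = 1_000_000      # assumed: the gamble is a 50/50 flip for $1M
GUARANTEE = 500_000    # she gets this even if the flip comes up empty
POOL_CUT = 0.90        # pool takes 90% of any payout over the guarantee
MEMBERS = 20           # hypothetical number of share-buyers

# Player: she can no longer do worse than the guarantee.
player_if_win = GUARANTEE + (1 - POOL_CUT) * (PRIZE - GUARANTEE)  # 550_000
player_if_lose = GUARANTEE                                        # 500_000

# Pool: collects the overage or pays the guarantee, spread over shares
# so no single member faces the full $500K risk.
pool_if_win = POOL_CUT * (PRIZE - GUARANTEE)   # +450_000
pool_if_lose = -GUARANTEE                      # -500_000
worst_case_per_share = pool_if_lose / MEMBERS  # -25_000
```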

It is interesting, though, when you play these kinds of games at parties, to see how the results differ when you talk about absolute values of money, vs "a coin flip for a year's paid vacation vs offers of number of weeks off", or "flip for 2 months' pay vs percent-bonus offers".

Comment by Dagon on daijin's Shortform · 2024-12-11T23:17:49.457Z · LW · GW

Paying the (ongoing, repeated) pirate-game blackmail ("pay us or we'll impose a wealth tax") IS a form of wealth tax.  You probably need to be more specific about what kinds and levels of wealth tax could happen with various counterfactual assumptions (without those assumptions, there's no reason to believe anything is possible except what actually exists).

Comment by Dagon on Second-Time Free · 2024-12-11T18:39:31.773Z · LW · GW

Do you have a strategy doc or list of criteria for promotional programs you're considering?  My first question, as an outsider, would be "why give ANY freebies at all"?  My second would be "what dimensions of price discrimination are important" (whether they've tried other dances, what their economic status is, are they seeking to meet people or just to dance, etc.)  And third "what is the budget for promos"?

My mental model is that if it's a financial hardship for someone, they're probably not going to be a regular attendee just because one is free.  For that case, you need some sort of subsidy/discount that goes on longer than one event.

If the main thing is to reduce perceived risk, as opposed to actual cost, you should consider a refund option - for anyone, they can ask for a refund if they didn't get their money's worth.  Limit it to once per attendee - they can come back without penalty, but the risk is now on them.

This also gives a great feedback channel to learn WHY a new attendee didn't find it worth the price.

Comment by Dagon on A shortcoming of concrete demonstrations as AGI risk advocacy · 2024-12-11T17:37:43.686Z · LW · GW

There hasn't been a large-scale nuclear war; should we be unafraid of proliferation?   More specific to this argument, I don't know any professional software developers who haven't had the experience of being VERY surprised by a supposedly well-known algorithm.

Comment by Dagon on Why Isn't Tesla Level 3? · 2024-12-11T16:29:01.522Z · LW · GW

Yeah, there's a lot of similarity with other human-level cognitive domains.  We seem to be in the center of a logistic curve - massive very recent progress, but an unknown and likely large amount of not-yet-solved quality, reliability, and edge-case-safe requirements.

20 years ago, it was pure science fiction.  10 years ago it was "just an engineering problem, but a big one", 5 years ago it was "need to fine-tune and deal with regulation".  Now it's ... "we can't say we were wrong, and we've improved massively AGAIN, but it feels like the end-state is still out of reach".  

For a lot of applications, FSD IS ALREADY safer than human drivers.  But it's not as resilient and flexible, and it's much worse than a human in very rare situations, like a person stuck under the wheels in a sensor-free location.  The goalpost remains rather undefined, and I suspect it's quite a ways out yet.

I do put some probability into a discontinuity - some breakthrough or change in pattern that near-instantaneously makes it so much obviously better than a human in all relevant ways that it's just no longer a question.  It's, of course, impossible to put a timeline on that.  I'd probably guess another 8-20 years on the current path, could be as fast as 2026 if something major changes.

Note that this is only slightly different from my AGI estimates - I'd say 15-40 years on the current path (or as early as 5 if something big changes) until significant amounts of human functioning are no longer economically or socially desired - the shift from AI as assistants and automation to AI as autonomous full-stack organizations.

This similarity makes me suspicious that I'm just using cached heuristics, but it may also just be that they're similar kinds of tasks in terms of generality and breadth of execution.

Comment by Dagon on [deleted post] 2024-12-10T18:05:38.453Z

I didn't vote at first - it seemed low-value and failed to elucidate or explore any of the issues or reasoning behind the recommendation.  But not actively harmful.  Now that you've acknowledged that it's at least partly a troll, I have downvoted.

Comment by Dagon on Keeping self-replicating nanobots in check · 2024-12-09T18:22:46.827Z · LW · GW

That's a fair bit of additional system complexity (though perhaps similar code-complexity, and fewer actual circuits).  More importantly, it really just moves the problem out one level - now you worry about runaway or mutated controllers.  You can make a tree of controllers-controlling-controllers, up to a small number of top-level controllers, with "only" logarithmic overhead, but it's still not clear why a supervisor bot is less risk than a distributed set of bots.
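
A quick sketch of the logarithmic-overhead arithmetic (the fleet size and fan-out are arbitrary examples):

```python
# Count supervisors in a controllers-controlling-controllers tree:
# N worker bots, each controller watching `fan_out` nodes below it.
import math

def tree_overhead(n_workers, fan_out):
    levels, supervisors, level_size = 0, 0, n_workers
    while level_size > 1:
        level_size = math.ceil(level_size / fan_out)  # next layer up
        supervisors += level_size
        levels += 1
    return levels, supervisors

print(tree_overhead(1_000_000, 10))  # (6, 111111): 6 layers, ~11% overhead
```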

Comment by Dagon on PoMP and Circumstance: Introduction · 2024-12-09T18:10:24.227Z · LW · GW

This is a useful and important topic, but there are some details in the writeup that are misleadingly explained and may reduce the trust in the overall explanation.

A lot of passwords are all-or-nothing. You either have full access to a service or no access to it.

It's really necessary to split authentication (authn) from authorization (authz).  Passwords are authentication - they show the identity of the user.  There are separate systems for what that identity is allowed to do.  It's not the password that's all-or-nothing.

A security sandbox, such as one in a Docker container or a browser tab, can prevent an external service from accessing data inside it.

Almost exactly the opposite.  The sandbox prevents things INSIDE from accessing data (except in very controlled ways) outside of it.  Sandboxes make it harder for an attacker to escape and hit other systems, not harder for an attacker who's got access to the host to get into the sandbox.  In truth, there's a bit of both, as it makes it easier to secure the host when all "user work" happens in a sandbox.
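
A concrete illustration with Docker (the image name and mount path are placeholders) - note that every one of these restrictions points inward, at the contained process:

```sh
docker run --rm \
  --network none \
  --read-only \
  --cap-drop ALL \
  -v "$PWD/data":/data:ro \
  untrusted-image
# --network none: nothing inside can reach out to other systems
# --read-only:    the container's own filesystem can't be modified
# --cap-drop ALL: the process keeps no host privileges
# -v ...:ro:      host data is visible inside, but read-only
```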

Comment by Dagon on A Paradox of Simulated Suffering · 2024-12-05T19:18:27.426Z · LW · GW

Hmm.  I can't tell whether this is an interesting or new take on the question of what is a "true" experience, or if it's just another case of picking something we can measure and then talking about why it's vaguely related to the real question.

Do you also compare HUMAN predictions of other human emotional responses, to determine if that prediction is always experienced as suffering itself?

Comment by Dagon on Higher and lower pleasures · 2024-12-05T17:46:51.888Z · LW · GW

One of my favorite banjo solos is in this video: (Gimme Some of That) Ol' Atonal Music - Merle Hazard feat. Alison Brown.  It's extremely relevant to the post as well, making the point that there are multiple levels to art appreciation.  The video makes the distinction between emotion and thinking, or heart and brain, but your distinction about timeframes and types of impact (immediate pleasure vs changing/improving future interpretations of experiences) is valid as well.

That said, I'm not sure that it's the art which contains the differences, so much as the audience and what someone is putting into the experience of the art.  Ok, both - some art supports more layers than others.

Comment by Dagon on Picking favourites is hard · 2024-12-05T17:32:54.474Z · LW · GW

As I've gotten older, I note more and more problems with the literal interpretation of topics like these.  This has made me change my default interpretation (and sometimes I mention it in my response) to a more charitable version, like "what are some of your enjoyable or recommended ...".   In addition to the problems you mention, there are a few other important factors that make the direct "exactly one winner, legibly better than all others" interpretation problematic:

  • Variable over time.  I explicitly value variety and change, and my ordering of things changes based on which attributes of that thing are more important in this instant, and how I estimate my reaction to those attributes in this instant.
  • Illegible preferences.  I have no idea why I'm drawn to some things.  I can make up reasons, but I have no expectation that I can actually discern all my true reasons, and I'm usually unwilling to try very hard.
  • High-dimensionality.  Most discussions and recommendation-requests of this form are about non-simple experiences.  Comparing any two requires a LOT of choice in how to weight various dimensions of comparison in order to get a ranking.

Comment by Dagon on Matt Goldenberg's Short Form Feed · 2024-12-05T17:17:34.072Z · LW · GW

It's interesting to figure out how to make use of this multi-level model.  Especially since personal judgement and punishment/reward (both officially and socially) IS the egregore - holding people accountable for their actions is indistinguishable from changing their incentives, right?

Comment by Dagon on Sam Harris’s Argument For Objective Morality · 2024-12-05T16:54:10.068Z · LW · GW

Mostly agreed - this argument fails to bridge (or even acknowledge) the is-ought gap, and it relies on very common (but probably not truly universal) experiences.  I also am sad that it is based on avoidance instincts ("truly sucks") rather than seeking anything.

That said, it's a popular folk philosophy, for very good reasons.  It's simple enough to understand, and seems to be applicable in a very wide range of situations.  It's probably not "true" in the physics sense, but it's pretty true in the "workable for humans" sense.  

There's probably a larger gap here than, say, Newton to Einstein for gravity, but it's the same sort of distinction.

Comment by Dagon on AI box question · 2024-12-04T19:43:07.161Z · LW · GW

I'm not sure that AI boxing is a live debate anymore.  People are lining up to give full web access to current limited-but-unknown-capabilities implementations, and there's not much reason to believe there will be any attempt at constraining the use or reach of more advanced versions.

Comment by Dagon on Sorry for the downtime, looks like we got DDosd · 2024-12-04T00:40:54.696Z · LW · GW

This seems just like regular auth, just using a trusted 3P to re-anonymize.  Maybe I'm missing something, though.  It seems likely it won't provide much value if it's unbreakably anonymous (because it only takes a few stolen credentials to give an attacker access to fake-humanity), and doesn't provide sufficient anonymity for important uses if it's escrowed (such that the issuer CAN track identity and individual usage, even if they currently choose not to).

Comment by Dagon on Sorry for the downtime, looks like we got DDosd · 2024-12-03T23:10:12.656Z · LW · GW

Interesting thought.  I tend to agree that the endgame of ... protection from scalable attacks in general ... is lack of anonymity.  Without identity, there can be no memory of behavior, and no prevention of abuse that's only harmful across multiple events/sources.  I suspect it's a long way out, though.

Your proposed solution (paid IP whitelisting) is pretty painful - the vast majority of real users (and authorized scrapers) don't have a persistent enough address, or at least don't know that they do, to participate.

Comment by Dagon on Is malice a real emotion? · 2024-12-02T22:28:09.298Z · LW · GW

I'm not sure it classifies as an emotion (nor does stupidity, for that matter), but it probably does exist as a motivation for some human acts, with the relevant emotion usually being anger.  

I don't think your distinction (harm for its own sake, distinct from harm with a motivation) is real, unless you think there are un-caused actions in other realms, or you discount some motivations (like anger or hatred) as "not valid" for some reason.

Comment by Dagon on Solenoid_Entity's Shortform · 2024-12-01T18:33:51.179Z · LW · GW

Anthropics start out weird.   

Trying to reason from a single datapoint out of an unknown distribution is always going to be low-information and low-predictive-power.  MWI expands the scope of the unknown distribution (or does it?  It all adds up to normal, right?), but doesn't change the underlying unknowns.

Comment by Dagon on Is the mind a program? · 2024-11-30T19:10:20.461Z · LW · GW

That's a really good example, thank you!  I see at least some of the analogous questions, in terms of physical measurements and variance in observations of behavioral and reported experiences.  I'm not sure I see the analogy in terms of qualia and other unsure-even-how-to-detect phenomena.  

Comment by Dagon on Is the mind a program? · 2024-11-30T16:35:48.518Z · LW · GW

Couldn't you imagine that you use philosophical reasoning to derive accurate facts about consciousness,

My imagination is pretty good, and while I can imagine that, it's not about this universe or my experience in reasoning and prediction.

Can you give an example in another domain where philosophical reasoning about a topic led to empirical facts about that topic?  Not meta-reasoning about science, but actual reasoning about a real thing?

Comment by Dagon on Is the mind a program? · 2024-11-30T03:43:45.944Z · LW · GW

Hmm, still not following, or maybe not agreeing.  I think that "if the reasoning used to solve the problem is philosophical" then "correct solution" is not available.  "useful", "consensus", or "applicable in current societal context" might be better evaluations of a philosophical reasoning.

Comment by Dagon on Is the mind a program? · 2024-11-29T18:47:04.375Z · LW · GW

I'd call that an empirical problem that has philosophical consequences :)

And it's still not worth a lot of debate about far-mode possibilities, but it MAY be worth exploring what we actually know and what we can test in the near term.  They've fully(*) emulated some brains - https://openworm.org/ is fascinating in how far it's come very recently.  They're nowhere near to emulating a brain big enough to try to compare WRT complex behaviors from which consciousness can be inferred.

* "fully" is not actually claimed nor tested.  Only the currently-measurable neural weights and interactions are emulated.  More subtle physical properties may well turn out to be important, but we can't tell yet if that's so.

Comment by Dagon on Is the mind a program? · 2024-11-29T18:06:44.254Z · LW · GW

But if someone finds the correct answer to a philosophical question, then they can... try to write essays about it explaining the answer? Which maybe will be slightly more effective than essays arguing for any number of different positions because the answer is true?

I think this is a crux.  To the extent that it's a purely philosophical problem (a modeling choice, contingent mostly on opinions and consensus about "useful" rather than "true"), posts like this one make no sense.  To the extent that it's expressed as propositions that can be tested (even if not now, it could be described how it will resolve), it's NOT purely philosophical.

This post appears to be about an empirical question - can a human brain be simulated with sufficient fidelity to be indistinguishable from a biological brain.  It's not clear whether OP is talking about an arbitrary new person, or if they include the upload problem as part of the unlikelihood.  It's also not clear why anyone cares about this specific aspect of it, so maybe your comments are appropriate.

Comment by Dagon on Is the mind a program? · 2024-11-29T16:59:06.344Z · LW · GW

This comes down to a HUGE unknown - what features of reality need to be replicated in another medium in order to result in sufficiently-close results? 

I don't know the answer, and I'm pretty sure nobody else does either.  We have a non-existence proof: it hasn't happened yet.  That's not much evidence that it's impossible.  The fact that there's no actual progress toward it IS some evidence, but it's not overwhelming.

Personally, I don't see much reason to pursue it in the short-term.  But I don't feel a very strong need to convince others.

Comment by Dagon on Isekka's Shortform · 2024-11-28T19:09:12.935Z · LW · GW

I mean "mass and energy are conserved" - there's no way to gain weight except if losses are smaller than gains.  This is a basic truth, and an unassailable motte about how physics works.  It's completely irrelevant to the bailey of weight loss and calculating calories.

Comment by Dagon on Isekka's Shortform · 2024-11-26T19:23:30.797Z · LW · GW

Not sure this is a new frontier, exactly - it was part of high-school biology classes decades ago.  Still, very worth reminding people and bringing up when someone over-focuses on the bailey of "legible, calculated CICO" as opposed to the motte of "absorbed and actual CICO".

Comment by Dagon on You are not too "irrational" to know your preferences. · 2024-11-26T17:41:38.167Z · LW · GW

I'd enjoy some acknowledgement that there IS an interplay between cognitive beliefs (based on intelligent modeling of the universe and other people) and intuitive experienced emotions.  "not a monocausal result of how smart or stupid they are" does not imply total lack of correlation or impact.  Nor does it imply that cognitive ability to choose a framing or model is not effective in changing one's aliefs and preferences.

I'm fully onboard with countering the bullying and soldier-mindset debate techniques that smart people use against less-smart (or equally-smart but differently-educated) people.  I don't buy that everyone is entitled to express and follow any preferences, including anti-social or harmful-to-others beliefs.  Some things are just wrong in modern social contexts.

Comment by Dagon on Arthropod (non) sentience · 2024-11-25T18:29:35.867Z · LW · GW

I appreciate the discussion, but I'm disappointed by the lack of rigor in proposals, and somewhat expect failure for the entire endeavor of quantifying empathy (which is the underlying drive for discussing consciousness in these contexts, as far as I'm concerned).

Of course, we do not measure computers by mass, but by speed, number of processors and information integration. But if you directly do not have enough computing capacity, your neural network is simply small and information processing is limited.

It's worth going one step further here - how DO we measure computers, and how might that apply to consciousness?  Computer benchmarking is a pretty complex topic, with most of the objective trivial measures (FLOPS, IOPS, data throughput, etc.) being well-known to not tell the important details, and specific usage benchmarks being required to really evaluate a computing system.  Number of transistors is a marketing datum, not a measure of value for any given purpose.

Until we get closer to actual measurements of cognition and emotion, we're unlikely to get any agreement on relative importance of different entities' experiences.

Comment by Dagon on The Manufactured Crisis: How Society Is Willingly Tying Its Own Noose · 2024-11-22T19:37:40.001Z · LW · GW

I suspect that almost all work that can be done remotely can be done even more cheaply the more remote you make it (not outside-the-city, but outside-the-continent).  I also suspect that it's not all that long before many or most mid-level fully-remotable jobs become irrelevant entirely.  Partially-remotable jobs (WFH 80% or less of the time, where the in-office human connections are (seen as) important part of the job) don't actually let people live somewhere truly cheap.

I think you're also missing many of the motivations for preferring a suburban area near (but not in the core of) a big city - schools and general sortation (having most neighbors in similar socioeconomic situation). 

Comment by Dagon on Benito's Shortform Feed · 2024-11-22T17:56:16.077Z · LW · GW

I wonder if you're objecting to identifying this group as cult-like, or to implying that all cults are bad and should be opposed.  Personally, I find a LOT of human behavior, especially in groups, to be partly cult-like in their overfocus on group-identification and othering of outsiders, and often in outsized influence of one or a few leaders.   I don't think ALL of them are bad, but enough are to be a bit suspicious without counter-evidence. 

Comment by Dagon on AtillaYasar's Shortform · 2024-11-21T21:18:42.982Z · LW · GW

I tend to use N log N (N things, times log N overhead) as my initial complexity estimate for coordinating among "things".  It'll, of course, vary widely with specifics, but it's surprising how often it's reasonably useful for thinking about it.
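
A toy illustration of how the heuristic scales:

```python
# N log N rule of thumb: coordination cost grows a bit faster than
# the number of things being coordinated.
import math

def coordination_cost(n):
    return n * math.log2(max(n, 2))

for n in (2, 8, 100, 10_000):
    print(n, round(coordination_cost(n)))  # 2, 24, 664, 132877
```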

Comment by Dagon on Evolution's selection target depends on your weighting · 2024-11-19T22:20:09.511Z · LW · GW

Wish I could upvote and disagree.  Evolution is a mechanism without a target.  It's the result of selection processes, not the cause of those choices.

Comment by Dagon on AtillaYasar's Shortform · 2024-11-19T16:45:31.695Z · LW · GW

There have been a number of debates (which I can't easily search on, which is sad) about whether speech is an action (intended to bring about a consequence) or a truth-communication or truth-seeking (both imperfect, of course) mechanism.  It's both, at different times to different degrees, and often not explicit about what the goals are.

The practical outcome seems spot-on.  With some people you can have the meta-conversation about what they want from an interaction, with most you can't, and you have to make your best guess, which you can refine or change based on their reactions.

Out of curiosity, when chatting with an LLM, do you wonder what its purpose is in the responses it gives?  I'm pretty sure it's "predict a plausible next token", but I don't know how I'll know to change my belief.

Comment by Dagon on Anvil Problems · 2024-11-19T16:38:18.249Z · LW · GW

Gah!  I missed my chance to give one of my favorite Carl Sagan quotes, a recipe for Apple Pie, which demonstrates the universality and depth of this problem:

If you wish to make an apple pie from scratch you must first invent the universe.

Comment by Dagon on Ethical Implications of the Quantum Multiverse · 2024-11-19T16:34:17.904Z · LW · GW

Note that the argument about whether MWI changes anything is very different from the argument about what matters and why.  I think it doesn't change anything, independently of which in-universe things matter and how much.

Separately, I tend to think "mattering is local".  I don't argue as strongly for this, because it's (recursively) a more personal intuition, less supported by type-2 thinking.  

Comment by Dagon on Ethical Implications of the Quantum Multiverse · 2024-11-18T23:20:21.570Z · LW · GW

I think all the same arguments that it doesn't change decisions also apply to why it doesn't change virtue evaluations.  It still all adds up to normality.  It's still unimaginably big.  Our actions as well as our beliefs and evaluations are irrelevant at most scales of measurement.