Posts

What causes a decision theory to be used? 2023-09-25T16:33:36.161Z
Adversarial (SEO) GPT training data? 2023-03-21T18:55:01.330Z
{M|Im|Am}oral Mazes - any large-scale counterexamples? 2023-01-03T16:43:37.682Z
Does a LLM have a utility function? 2022-12-09T17:19:45.936Z
Is there a worked example of Georgian taxes? 2022-06-16T14:07:27.795Z
Believable near-term AI disaster 2022-04-07T18:20:16.843Z
Laurie Anderson talks 2021-12-18T20:47:01.484Z
For moderately well-resourced people in the US, how quickly will things go from "uncomfortable" to "too late to exit"? 2020-06-12T16:59:56.845Z
Money isn't real. When you donate money to a charity, how does it actually help? 2020-02-02T17:03:04.426Z
Dagon's Shortform 2019-07-31T18:21:43.072Z
Did the recent blackmail discussion change your beliefs? 2019-03-24T16:06:52.811Z

Comments

Comment by Dagon on the micro-fulfillment cambrian explosion · 2023-12-04T16:32:43.012Z · LW · GW

[ I've worked with some of the very large warehousing companies mentioned, but not recently ]

For restaurant use, picking packaged units from a pallet or storage area to bring to "ready-use" shelves is NOT very time-consuming; most don't have a dedicated employee for it.  Unpacking into line-use containers and refilling the common open supplies would be a big savings, but it'd have to be more flexible, and use less space when not active, than current solutions before it's anywhere near competitive with humans on practical grounds, regardless of cost.

For smaller places, employees are not fractionally employable, and tasks are not very well standardized, so once you need people around, you may as well have them do most of the work.  

The big question is when multimodal LLMs and "AI" get good enough to put into automation that is general-purpose enough to handle exceptions well.  Putting something back after it falls, minor self-maintenance, incorrect shipments received, refilling a busy ingredient bin, etc. are all things that currently require humans, and they're a large part of employment costs for smaller businesses.

Comment by Dagon on How do you do post mortems? · 2023-12-03T23:53:05.550Z · LW · GW

For work, it's worth putting together a template/checklist so you can make sure others are on the same page regarding content and expected clarity of the document (and the process of creating and reviewing it).  For personal look-backs, it can be a lot less formal, but it's still nice to have a list of things you want out of it.

Important (IMO) sections:

  • "what happened".  A few paragraphs of plain-language explanation of what you're looking back on.  Should be somewhat narrow and specific to a single event or decision, rather than "last week felt bad, here are all the reasons..."
  • "why do we care".  A description of the impact or cost of the event or decision, perhaps with a comparison to what SHOULD have happened, perhaps with a description of risk even if there was no cost.
  • Timeline - datestamps of when things happened, including when we reacted or learned about things.
  • "how did we behave during the problem or event" - time to discover, time to mitigate, time to resolve, how clearly were we focused, what confusions caused delay or waste.
  • "formal root-cause".  Often called "5 whys", for the idea that it takes (at least) 5 levels of causation to understand something, but other formats work as well.  Do go deep enough to understand the systemic or structural reasons for the thing.
  • "What should we change for the future?"
Comment by Dagon on Stupid Question: Why am I getting consistently downvoted? · 2023-12-03T22:12:17.598Z · LW · GW

Yeah, I can see how a single highly-downvoted bad comment can outweigh a lot of low-positive good comments.  I do wish there were a way to reset the limit in the cases where a poster acknowledges the problem and agrees they won't do it again (in at least the near-term).  Or perhaps a more complex algorithm that counts posts/comments rather than just votes.  

And I've long been of the opinion that strong votes are harmful, in many ways.  

Comment by Dagon on Stupid Question: Why am I getting consistently downvoted? · 2023-12-03T17:08:09.374Z · LW · GW

Can you clarify what part of the downvote system is broken?  If someone posts multiple things that get voted below zero, that indicates to me that most voters don't want to see more of that on LW.  Are you saying it means something else?

I do wish there were agreement indicators on top-level posts, so it would be easier to remind people "voting is about whether you think this is good to see on LW, agreement is about the specific arguments".  But even absent that, I don't see very many below-zero post scores that surprise me or that I think are strongly incorrect.  If I did, I'd somewhat expect the mods to override a throttle.

Comment by Dagon on faul_sname's Shortform · 2023-12-03T16:56:26.367Z · LW · GW

There are competing theories here.  Including secrecy of architecture and details in the security stack is pretty common, but so is publishing (or semi-publishing: making it company confidential, but talked about widely enough that it's not hard to find if someone wants to) mechanisms to get feedback and improvements.  The latter also makes the entire value chain safer, as other organizations can learn from your methods.

Comment by Dagon on Stupid Question: Why am I getting consistently downvoted? · 2023-12-02T17:13:11.750Z · LW · GW

https://www.lesswrong.com/posts/hHyYph9CcYfdnoC5j/automatic-rate-limiting-on-lesswrong claims it's net karma on last 20 posts (and last 20 within a month).  And total karma, but that's not an issue for a long-term poster who's just gotten sidetracked to an unpopular few posts. 
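
A minimal sketch of the rule as that post describes it (the window and threshold here are illustrative placeholders, not LessWrong's actual parameters):

```python
# Hypothetical sketch of the described rule: throttle when net karma over the
# author's most recent items drops below a threshold.  Window/threshold values
# here are placeholders, not LessWrong's real configuration.
def is_throttled(recent_karma: list[int], window: int = 20, threshold: int = 0) -> bool:
    """recent_karma: net karma per post/comment, most recent first."""
    return sum(recent_karma[:window]) < threshold

# One heavily-downvoted item can outweigh several mildly-positive ones:
print(is_throttled([3, 2, 1, 1, -20]))  # True
```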

So yes, "quite a few", especially if upvotes are scarcer than downvotes for the poster. But remember, during this time, they ARE posting, just not at the quantity that wasn't working.

The real question is whether the poster actually changes behavior based on the downvotes and throttling.  I do think it's unfortunate that some topics could theoretically be good for LW, but end up not working.  I don't think it's problematic that many topics and presentation styles are not possible on LW.

Comment by Dagon on Stupid Question: Why am I getting consistently downvoted? · 2023-12-02T16:53:28.304Z · LW · GW

It's worth being clear in your mind about the distinction between "put in the work" and "ideas that are both clear and correct (or at least promising)".  They're related - especially the work and the clarity of what the idea is - but not the same.

Comment by Dagon on Stupid Question: Why am I getting consistently downvoted? · 2023-12-02T16:48:28.897Z · LW · GW

I'm not sure I understand this concern.  For someone who posts a burst of unpopular (whether for the topic, for the style, or for other reasons) posts, rate limiting seems ideal.  It prevents them from digging deeper, while still allowing them to return to positive contribution, and to focus on quality rather than quantity.

I understand it's annoying to the poster (and I've been caught and annoyed myself), but I haven't seen any that seem like a complete error.  I kind of expect the mods would intervene if it were a clear problem, but I also expect the base intervention is advice to slow down.

Comment by Dagon on Queuing theory: Benefits of operating at 70% capacity · 2023-12-01T23:42:41.844Z · LW · GW

Definitely, along with switching costs (if you drop a low-priority to work on a high-priority item, there's some delay and some waste involved).  In many systems, the switching delay/cost is high enough that it's best to just leave some nodes idle.  In others, the low-priority things can be arranged such that dropping/setting-aside is pretty painless.
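
To make the capacity point concrete, here's a standard M/M/1 illustration (my own sketch, not from the linked post): mean time in the system is 1/(μ − λ), which explodes as utilization approaches 100%.

```python
# M/M/1 queue: mean time in system (waiting + service) is W = 1 / (mu - lambda),
# which blows up as utilization rho = lambda / mu approaches 1.
def mean_time_in_system(arrival_rate: float, service_rate: float) -> float:
    if arrival_rate >= service_rate:
        raise ValueError("unstable: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

service_rate = 1.0  # one job per unit time, so W is in multiples of service time
for rho in (0.5, 0.7, 0.9, 0.99):
    w = mean_time_in_system(rho * service_rate, service_rate)
    print(f"utilization {rho:.0%}: time in system = {w:.1f}x service time")
# 50% -> 2.0x, 70% -> 3.3x, 90% -> 10.0x, 99% -> 100.0x
```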

Comment by Dagon on the uni wheel is dumb · 2023-12-01T17:50:49.826Z · LW · GW

Mostly agree, but also caution about being too confident in one's skepticism.  Almost all innovation is stupid until it works, and it's VERY hard to know in advance which problems end up being solvable, or what new applications come up when something is stupid for its obvious purpose but a good fit for something else.

I honestly don't know which direction this should move your opinion of Hyundai's research agenda.  Even if (as seems likely) it's not useful in car manufacturing, it may be useful elsewhere, and the project and measurement mechanisms may teach them/us something about the range of problems to address in drivetrain design.

Comment by Dagon on Ethicophysics II: Politics is the Mind-Savior · 2023-12-01T05:38:01.443Z · LW · GW

On reflection, I suspect that I'm struggling with the is-ought problem in the entire project.  Physics is "is" and ethics is "ought", and I'm very skeptical that "ethicophysics" is actually either, let alone a bridge between the two.

Comment by Dagon on A Formula for Violence (and Its Antidote) · 2023-11-30T16:27:33.825Z · LW · GW

I understand (but do not agree with) the idea of preserving someone's clickstream.  I do not want pure linkposts without any information on LessWrong.  The equation is:

V = mc²

(Violence) = (Mass of animals) × (Degree of Confinement)²

and the solution is

Quite simply: fight violence with kindness.

Comment by Dagon on Ethicophysics II: Politics is the Mind-Savior · 2023-11-30T16:19:27.265Z · LW · GW

I suspect we have a disagreement about whether the "worked out theoretical equations" suffer from is-ought any less than the plain language version.  And if they are that fundamentally different, why should anyone think the equations CAN be explained in plain language?

I am currently unwilling to put in the work to figure out what the equations are actually describing.  If it's not the same (though with more rigor) as the plain language claims, that seriously devalues the work.

Comment by Dagon on Thoughts on teletransportation with copies? · 2023-11-29T16:27:08.899Z · LW · GW

As others have pointed out, there's an ambiguity in the word "you". We don't have intuitions about branching or discontinuous memory paths, so you'll get different answers if you mean "a person with the memories, personality, and capabilities that are the same as the one who went into the copier" vs "a singular identity experiencing something right now".  

Q1: 100%.  A person who feels like me experiences planet A and a different person who is me experiences planet B. 

Q2: Still 100%.  One of me experiences A, one C and one D. 

Q3: Copied money is probably illegal, and my prior for scams is high, so I'd probably reject the offer.  If I suspend my disbelief, I'd pay just under $50 for the ticket: the ticket turns into $100 (on planet A only), whereas keeping the $50 turns into $100 as well ($50 on A and $50 on B).  The amount is small, so declining marginal utility doesn't play much into it.

Q4: $33 for a ticket is equivalent to $33 copied 3 times.
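
A worked version of the break-even arithmetic behind Q3/Q4, under my assumptions that the ticket pays $100 to a single copy and that unspent money is duplicated to every one of the N copies, each valued equally:

$$ p^{*} = \frac{\$100}{N} \qquad N=2 \Rightarrow \$50 \text{ (Q3)}, \qquad N=3 \Rightarrow \$33 \text{ (Q4)} $$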

Comment by Dagon on Ethicophysics II: Politics is the Mind-Savior · 2023-11-28T17:08:30.956Z · LW · GW

Do you actually want discussion on LW, or is this just substack spam?  If you want discussion, you probably should put something to discuss in the post itself, rather than a link to a link to a PDF in academic language that isn't broken out or presented in a way that can be commented upon.

From a very light skim, it seems like your "mathematically rigorous treatment" isn't.  It includes some equations, but not much of a tie between the math and the topics you seem to want to analyze.  It deeply suffers from is-ought confusion.

Comment by Dagon on Apocalypse insurance, and the hardline libertarian take on AI risk · 2023-11-28T17:03:32.782Z · LW · GW

[ note: I am not a libertarian, and haven't been for many years. But I am sympathetic. ]

Like many libertarian ideas, this mixes "ought" and "can" in ways that are a bit hard to follow.  It's pretty well-understood that all rights, including the right to redress of harm, are enforced by violence.  In smaller groups, it's usually social violence and shared beliefs about status.  In larger groups, it's a mix of that, and multi-layered resolution procedures, with violence only when things go very wrong.  

When you say you'd "prefer a world cognizant enough of the risk to be telling AI companies that...", I'm not sure what that means in practice - the world isn't cognizant and can't tell anyone anything.  Are you saying you wish these ideas were popular enough that citizens forced governments to do something?  Or that you wish AI companies would voluntarily do this without being told?  Or something else?

Comment by Dagon on How Microsoft's ruthless employee evaluation system annihilated team collaboration. · 2023-11-26T16:59:18.002Z · LW · GW

I'm very skeptical of fairly limited experiences being used to make universal pronouncements.

I'm sure this was the experience for many individuals and teams.  I know for certain it was pretty normal and not worried about for others.  I knew a lot of MS employees in that era, though I worked at a different giant tech firm with vaguely-similar procedures.  I was senior enough, though an IC rather than a manager, to have a fair bit of input into evaluations of my team and division, and I  saw firsthand the implementation and effects of this theory.  It was dysfunctional and harmful for some teams, but more often (in my experience) somewhat effective at actually improving the makeup of teams.

There is a brutal and unpleasant truth in hiring and employment decisions, which is that there is a wide variance in contribution among similar-seeming employees.  Much of this is illegible and extremely hard to objectively measure individually, but it's visible to coworkers and line managers.  For a long-term improvement of a very large organization, it's a reasonable theory to replace the least-productive employees with a new hire (random sample, likely to be better than bottom).  Brutal and unpleasant, but not wrong.
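
As a toy illustration of "random sample, likely to be better than bottom" (my own sketch, with made-up productivity numbers, not anything from MS or my former employer):

```python
# Toy model: each cycle, the bottom 10% leave and are replaced by random hires.
# The team mean drifts upward because the replaced decile was well below average.
import random

random.seed(0)
team = [random.gauss(100, 15) for _ in range(1000)]
for cycle in range(1, 6):
    team.sort()
    cut = len(team) // 10  # bottom decile
    team = team[cut:] + [random.gauss(100, 15) for _ in range(cut)]
    print(f"cycle {cycle}: mean productivity = {sum(team) / len(team):.1f}")
```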

I don't think the policy itself was particularly effective at motivation, but it wasn't terribly harmful either.  The top and middle very much understood the theory, and it was just part of life. 

Details, of course, matter.  A fixed percentage imposed on a small group is going to be wrong most of the time.  I've heard MS, at times, did exactly that - 10% is way too high to be immediately fired (unless it's 10% to get special attention, with the reasonable expectation that more than half will improve and stay).  

Comment by Dagon on 1. A Sense of Fairness · 2023-11-26T16:24:39.515Z · LW · GW

Voting is one example. Who gets "human rights" is another. A third is "who is included, with what weight, in the sum over well being in a utility function". A fourth is "we're learning human values to optimize them: who or what counts as human"? A fifth is economic fairness,

I think voting is the only one with fairly simple observable implementations.  The others (well, and voting, too) are all messy enough that it's pretty tenuous to draw conclusions about them, especially without noting all the exceptions and historical violence that led to the current state (which may or may not be an equilibrium, and it may or may not be possible to list the forces in opposition that create the equilibrium).

I think the biggest piece missing from these predictions/analysis/recommendations is the acknowledgement of misalignment and variance in capabilities of existing humans.  All current social systems are in tension - people struggling and striving in both cooperation and competition.  The latter component is brutal and real, and it gets somewhat sublimated with wealth, but doesn't go away.

Comment by Dagon on 1. A Sense of Fairness · 2023-11-25T20:27:16.554Z · LW · GW

[ epistemic status: I don't agree with all the premises and some of the modeling, or the conclusions.  But it's hard to find one single crux.  If this comment isn't helpful, I'll back off - feel free to rebut or disagree, but I may not comment further. ]

This seems to be mostly about voting, which is an extremely tiny part of group decision-making.  It's not used for anything really important (or if it is, the voting options are limited to a tiny subset of the potential behavior space).  Even on that narrow topic, it switches from a fairly zoomed-out and incomplete causality (impulse for fairness) to a normative "this should be" stance, without a lot of support for why it should be that way.

It's also assuming a LOT more rationality and ability to change among humans, without an acknowledgement of the variance in ability and interest in doing so among current and historical populations.  "Eventually, we'll learn from the lapses" seems laughable.  Humans do learn, both individually over short/medium terms, and culturally over generations.  But we don't learn fast enough for this to happen.

Comment by Dagon on 3. Uploading · 2023-11-25T00:04:54.787Z · LW · GW

Sorry, kind of bounced off part 1 - didn't agree, but couldn't find the handle to frame my disagreement or work toward a crux.  Which makes it somewhat unfair (but still unfortunately the case) to disagree now.

I like the focus on power (to sabotage or defect) as a reason to give wider voice to the populace.  I wonder if this applies to uploads.  It seems likely that the troublemakers can just be powered down, or at least copied less often.

Comment by Dagon on 3. Uploading · 2023-11-23T20:00:17.613Z · LW · GW

I suspect your modeling of “the fairness instinct” is insufficient. Historically, there were many periods of time where slaves or mostly-powerless individuals were the significant majority. Even today, there are very limited questions where one-person-one-vote applies. Even in the few cases where that mechanism holds, ZERO allow any human (not even any embodied human) to vote. There are always pretty restrictive criteria of membership and accident of birth that limit the eligible vote population.

Comment by Dagon on [Bias] Restricting freedom is more harmful than it seems · 2023-11-23T19:21:49.609Z · LW · GW

Without examples, I have trouble understanding "censorship of independent-minded people".  It's probably not formal censorship (but maybe it is - most common media disallows some words and ideas).  There's a big difference between "negative reactions to beliefs that many/most find unpleasant, even if partially true" and "negative reactions to ideas that contradict common values, with no real truth value". They're not the same motives, and not the same mechanisms for the idea-haver to refine their beliefs.  

In many groups, especially public ones, even non-committal exploration of these ideas is disallowed, because at least some observers will misinterpret the discussion as advocacy or motivated attempts to move the overton window.  In these cases, the restriction is distributed enough that there's no clear way to have the discussion with the right folks.  Meaning your use of "we" and framing the title as advice is confusing.

Another way of framing my confusion/disagreement is that I think "independent-minded" and "conventional-minded" are not very good categories, and the model of opposition is not very useful.  Different types of heresy have different groups opposing them for different reasons.

Comment by Dagon on [Bias] Restricting freedom is more harmful than it seems · 2023-11-22T12:56:31.324Z · LW · GW

Downvoted.  This states an overgeneral concept far more forcefully than it deserves, and doesn't give enough examples to know what kind of exceptions to look for.  I'm also unsure what "censure" means specifically in this model of things - is my comment a censure?

I also dislike the framing of "conventional-minded" vs "independent-minded" as attributes of people, rather than as descriptions of topics that bring criticism.  This could be intentional, if you're arguing that the kind of censure you're talking about tends to be directed at people rather than ideas, but it's not clear if so.

Comment by Dagon on [deleted post] 2023-11-22T01:16:06.279Z

Not really an answer, but a few modeling considerations:

  • "Elites" are a difficult group to talk about - it's not uniform in capabilities or desires.  It's more specific to say "commodities traders" or "fund managers" or whatever group you are actually analyzing.
  • Traders, analysists, and other market participants have used ML for a long long time in their work.  It's not clear what about "AI" you think is different enough to justify a large change.
Comment by Dagon on How much should e-signatures have to cost a country? · 2023-11-22T01:08:12.378Z · LW · GW

At a simple calculation, $19B USD for 118M expected signatures (of different types) is $161 per signature.  This contradicts the article which says 1.5 Francs or 2-4 Francs.  However, it's also "2 to 17" Billion Francs, depending on actual usage.  Still doesn't add up.
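
Spelling out the division with the numbers above:

$$ \frac{19 \times 10^{9}\ \text{USD}}{118 \times 10^{6}\ \text{signatures}} \approx 161\ \text{USD per signature} $$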

I have no clue what's actually included in the price - digitization and indexing/retrieval of documents can cost a lot more than just the identity verification.  And legally-binding identity verification ain't cheap in the first place.

It does seem high to me, but I can say that about almost all government spending, for any country for any program.

Comment by Dagon on [deleted post] 2023-11-21T16:13:32.135Z

I can't tell if you're saying "this is completely and horribly incorrect in approach and model", or if you're saying "yeah, there are cases where imposed rapid change is harmful, but there's nuance I'd like to point out".  I disagree with the former, and don't see the latter very clearly in the text.

The title of Scott's post (give up 70 percent of the way through) seems about right to me, and skimming over the post, it seems he's mostly talking about extreme, rapid, politically-motivated changes.  I agree with him that it's concerning, and the vigor with which many people NOT in the victim group demand the change is somewhere between incomprehensible and horrifying (in that I'm personally judged for not following the trends closely enough, and not changing long linguistic habits quickly enough).

Your argument seems to be that change is inevitable and proper, but I don't think Scott's claiming otherwise.  

It seems like you each have reasonable mottes, and overlapping baileys.  I find Scott more specific in his examples of changes that worry him than you are in examples of changes you support and Scott doesn't.  Honestly, saying his examples ("asian" and "field work") are worse than yours ("black" and "gay") is very close to strawman arguing.

Comment by Dagon on D0TheMath's Shortform · 2023-11-21T15:36:17.561Z · LW · GW

We absolutely agree that incentives matter.  Where I think we disagree is on how much they matter and how controllable they are.  Especially for orgs whose goals are orthogonal or even contradictory with the common cultural and environmental incentives outside of the org.

I'm mostly reacting to your topic sentence

EAs are, and I thought this even before the recent Altman situation, strikingly bad at setting up good organizational incentives.

And wondering if 'strikingly bad' is relative to some EA or non-profit-driven org that does it well, or if 'strikingly bad' is just an acknowledgement that it may not be possible to do well.

Comment by Dagon on D0TheMath's Shortform · 2023-11-21T06:36:51.323Z · LW · GW

I'm confused.  NVidia (and most profit-seeking corporations) are reasonably aligned WRT incentives, because those are the incentives of the world around them.

I'm looking for examples of things like EA orgs, which have goals very different from standard capitalist structures, and how they can set up "good incentives" within this overall framework.  

If there are no such examples, your complaint about "strikingly bad at setting up good organizational incentives" is hard to understand.  It may be more that the ENVIRONMENT in which they exist has competing incentives and orgs have no choice but to work within that.

Comment by Dagon on D0TheMath's Shortform · 2023-11-21T04:50:50.950Z · LW · GW

Can you give some examples of organizations larger than a few dozen people, needing significant resources, with goals not aligned with wealth and power, which have good organizational incentives?  

I don't disagree that incentives matter, but I don't see that there's any way to radically change incentives without pretty structural changes across large swaths of society.

Comment by Dagon on Am I going insane or is the quality of education at top universities shockingly low? · 2023-11-20T18:38:01.148Z · LW · GW

A few aspects of my model of university education (in the US):

  • "Education" isn't a monolithic thing, it's a relation between student, environment, teachers, and body of material for the common conception of that degree.  Particularly good (or bad) professors can make a big difference in motivation and access to information, and can set up systems and TAs well or poorly to make it easier or harder for the median student.  That matters, but variance in student overwhelms variance in teaching ability.
  • "Top" universities are generally more focused on research, publication, and prestige than on undergraduate education.  Professors are tenured for research and prestige, not for teaching ability.  Many of them think of their jobs as 'run my lab/work on papers with grad students first.  Do the minimum for most students, identify the future stars to get them into the "real work"'.
  • Much of the alumni value from the institution is about reputation, not about quality of the education they got.  If a school is optimizing for donations 15 years on (when the median successful student is getting rich enough to donate), they care about prestige and top outcomes, not median education.
  • Quality of undergrad education is actually unimportant for most students.  If you're not staying in academia, you need the degree to get in the door of many jobs, but your actual skill and value will come from how well you can learn the actual job and apply what you've internalized in school.  This will be more about how far beyond the coursework minimum you've gone, and how much you've "played with" and gotten good at stuff you've tried on your own.  The actual material is the bare minimum, usually outdated and incomplete.
  • For law and medicine, undergrad is only about placement in the "real" school you get your final degree from.  For other advanced degrees, undergrad is really pre-grad school, and tends to be research-focused with fairly minimum effort into other classes.  Oh, and about washing out the students who want advanced degrees but aren't actually able to get themselves there.
  • For most degrees, the first 2 years are just plain worse than the higher-level courses.  If you're just starting, your current experience will likely get better.  But still not great if you only look at the coursework rather than all the resources for challenging yourself.
  • Most of the learning doesn't happen in lectures.  Find the study groups, TA sessions (and TAs willing to spend 1:1 time on interesting topics), and labs where you can really think and learn.
  •  I suspect the vast majority of students would be better off at a lower-ranked school or community college for the first 2 years, and then transfer to a middle-ranked (or top, if your goals and results match that way) university for the degree.  

You don't have much of a LW history, so I can't guess at your thoughts, goals, level of thinking, etc.  My recommendation for the median LW poster (interested in some fairly deep topics, top 20% IQ) who finds themselves at a top university and disappointed by the coursework would be to do enough studying of assigned and optional reading so you just don't worry about grades - get to the point where you just know this stuff.  Identify the outside-of-class reading and groups that challenge you on topics you want to understand more deeply.  It'll vary widely based on your ability, your professors' attitudes, and the institution's policies, but you may be able to take the more advanced/interesting classes sooner than most, and get more than most out of the overall experience.  

Comment by Dagon on Said Achmiz's Shortform · 2023-11-20T17:56:41.609Z · LW · GW

I mean, testing with a production account is not generally best practice, but it seems to show things are operational.  What aspect of things are you testing?

I (a real human, not a test system) saw the post, upvoted but disagreed, and made this reply comment.

Comment by Dagon on R&D is a Huge Externality, So Why Do Markets Do So Much of it? · 2023-11-17T18:09:33.492Z · LW · GW

I think "R&D" is a misleading category - it comprises a LOT of activities with different uncertainty, type, scope, and timeframe of impact.  For tax and reporting purposes, a whole lot of not-very-research-ey software and other engineering is classified as "R&D", though it's more reasonably thought of as "implementation and construction".

Nordquist's "Innovation" measure is very different from economic reporting of R&D spending.  This makes the denominator very questionable in your thesis.

Perhaps more important, returns are NOT uniformly distributed.  Even successful "pure research" projects have a MIX of short/medium-term localized benefits and longer/broader impacts, and research institutions are (mostly) pretty good at managing BOTH grants and licensing/product development as funding and "value capture" mechanisms.

Comment by Dagon on On Tapping Out · 2023-11-17T17:41:26.629Z · LW · GW

I'm not sure the connection between martial arts training/competition and rationalist discussion is all that strong.  Also, I'm not sure if this is meant to apply to "casual discussion in most contexts" or "discussion about rationalist topics among people who share a LOT of context and norms", or "comment threads on LessWrong".

The primary difference I see is that in martial arts, the goal is generally self-improvement, where in rationalist discussions the goal is finding and agreeing on external truths.  Martial arts isn't about disagreement or misunderstanding of the universe, and the mechanisms for safe improvement aren't necessarily applicable to other dimensions of improvement.

In fact, I almost never use the phrase "tapping out", because I don't like the implications.  I use more words, and say "I don't think I can contribute more, I'm going to [switch topics, go elsewhere, whatever]."

Comment by Dagon on Social Dark Matter · 2023-11-17T17:22:36.790Z · LW · GW

Agreed with the main point of your comment: even mildly-rare events can be distributed in such a way that some of us literally never experience them, and others of us see it so often it appears near-universal.  This is both a true variance in distribution AND a filter effect of what gets highlighted and what downplayed in different social groups.  See also https://www.lesswrong.com/tag/typical-mind-fallacy .

For myself, in Seattle (San-Francisco-Lite), I'd only very rarely noticed that someone was trans until the early '00s, when a friend transitioned, and like a switch I was far more aware and noticed a fair number of transwomen out in public (and eventually made more friends among them).  There's enough of a continuum that MANY instances won't be certain unless you spend a fair bit of time with someone.  I can imagine that in many locations, it's uncomfortable enough that only the more passing segment spends much time in non-explicitly-safe locations.

So both: there may be fewer trans people where you are, they may not tend to go where you often do.  But also, until you get used to it and have a number of examples to compare with, you may just not notice.  

Importantly, and to your point, this generalizes.  Our experiences are different, both objectively AND subjectively in terms of what we notice, focus on, and learn from those experiences.  This variance is routinely overlooked.

Comment by Dagon on How much fraud is there in academia? · 2023-11-16T17:14:55.486Z · LW · GW

In addition to measurement problems, and definitional problems (is p-hacking "fraud" or just bad methodology?), I think "academia" is too broad to meaningfully answer this question.

Different disciplines, and even different topics within a discipline will have a very different distribution of quality of research, including multiple components - specificity of topic, design of mechanism, data collection, and application of testing methodology.  AND in clarity and transparency, for whether others can easily replicate the results, AND agree or disagree with the interpretation.  AND in importance of result, whether anyone seriously tries to replicate or contradict a finding.

Then there are selection effects.  To the degree that popular media and political/personal discussions of interesting topics are biased and untrustworthy, their choice of WHICH academic papers to use as evidence is likely to have the same biases.  Not necessarily massive fraud, but less reliable in terms of conclusions than a random sampling.

obMontyPython:

Sir John Cunningham : May I take this opportunity of emphasizing that there is no cannibalism in the British Navy, absolutely none. And when I say none, I mean there is a certain amount, more than I personally admit.

Comment by Dagon on Good businesses create epistemic monopolies · 2023-11-15T17:42:51.150Z · LW · GW

Thanks for this - it's an important part of modeling the world and understanding the competitive and cooperative symbiosis of commerce (and generally, human interaction).

I think application of this model requires extending the idea of "monopoly" to include partial substitutability (most non-government-supported monopolies aren't all or nothing, they're hard-to-quantify-but-generally-small differences in desirability). And also some amount of human herding and status-quo bias that makes a temporary advantage much more long-lived if you can make it habitual or an accepted standard.

Comment by Dagon on thesofakillers's Shortform · 2023-11-15T17:25:52.369Z · LW · GW

I mean, there are some parallels between any two topics.  Whether those parallels are important, and whether they help model either thing varies pretty widely.

In this case, I don't see many useful parallels.  The difference between guns (individual small-scale rights, and demonstrably real power to harm a very few individuals) and AI (somewhat theoretical future large-scale degradation or destruction of civilization) makes it a completely different dimension of disagreement.

One parallel MIGHT be the general distrust of government restriction on private activity, but from people I've talked with on both topics, that's present but not controlling for beliefs about these topics.

Comment by Dagon on Redirecting one’s own taxes as an effective altruism method · 2023-11-13T17:07:08.902Z · LW · GW

Upvoted for interesting ideas and personal experience on the topic.  If I could strong-disagree, I would.  I do not recommend this to anyone.

Mostly my reasoning is "not safe".  You're correct that historically, the IRS doesn't come at small non-payers very hard.  You're incorrect to extend that to "never" or to "that won't change without warning due to technology, or legal/political environment".  You're also correct that, at current interest rates, it's about double at ten years.  You're incorrect, though, to think that's the biggest risk.  If they decide there's a pattern that shows intent, penalties become MUCH higher.  Unlikely to include jail time, but in terms of net expectation over all possible universes, it's zero or somewhat negative for almost everybody.  Admittedly, weighted toward "most get away with small amounts, a few have a VERY BAD experience".  But you could recreate that in Vegas with a simple 3- to 6-step martingale.
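
(For reference, the arithmetic behind "about double at ten years" - the implied effective annual rate of interest plus penalties:

$$ (1+r)^{10} = 2 \;\Rightarrow\; r = 2^{1/10} - 1 \approx 7.2\% $$

)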

I also warn about lifestyle-distortion effects.  Unless you reverse course and pay up (which is probably possible "just" by paying the back taxes and penalties, until some IRS investigator decides to start seriously documenting your intent), you can't take a good W-2 job, can't invest in real estate, and need to keep a low enough profile that the IRS doesn't decide to actually go after you.

Comment by Dagon on Andreas Chrysopoulos' Shortform · 2023-11-10T17:12:19.031Z · LW · GW

It gets tried every so often, but there are HUGE differences between companies and geographical/political governance.   

The primary difference, in my mind, is filtering and voluntary association.  People choose where to work, and companies choose who works for them, independently (mostly) of where they live, what kind of lifestyle they like, whether they have children or relatives nearby, etc.  Cities and countries can sometimes turn away some immigrants, but they universally accept children born there and they can't fire citizens who aren't productive.

Comment by Dagon on What’s going on? LLMs and IS-A sentences · 2023-11-08T22:52:36.384Z · LW · GW

Umm, I think you're putting too much weight on idiomatic shorthand that's evolved for communicating some common things very easily, and less-common ideas less easily.  "Garfield is a cat" is a very reasonable and common thing to try to communicate - a specific not-well-known thing (Garfield) being described in terms of nearly-universal knowledge ("cat").  The reverse might be "Cats are things like Garfield", which is a bit odd because the necessity of communicating it is a bit odd.

It tends to track specific to general, not because they're specific or general concepts, but because specifics more commonly need to be described than generalities.  

Comment by Dagon on Implementing Decision Theory · 2023-11-07T22:55:05.003Z · LW · GW

If you think evolution has a utility function, and that it's the SAME function that an agent formed by an evolutionary process has, you're not likely to get me to follow you down any experimental or reasoning path.  And if you think this utility function is "perfectly selfish", you've got EVEN MORE work cut out in defining terms, because those just don't mean what I think you want them to.

Empathy as a heuristic to enable cooperation is easy to understand, but when normatively modeling things, you have to deconstruct the heuristics to actual goals and strategies.

Comment by Dagon on Andreas Chrysopoulos' Shortform · 2023-11-07T16:21:28.631Z · LW · GW

I think you're using the wrong model for what "have a purpose" means.  Purpose isn't an attribute of a thing; purpose is a relation between an agent and a thing.  An agent infers (or creates) a purpose for things (including themselves).  This purpose-for-me is temporary, mutable, and relative.  Different agents may have different (or no) purposes for the same thing.

Comment by Dagon on l8c's Shortform · 2023-11-07T16:18:16.848Z · LW · GW

[epistemic status: mostly priors about fantastic quantities being bullshit.  no clue what evidence would update me in any direction. ]

I don't believe the universe is infinite.  It has a beginning, an end, and a finite (but large and perhaps growing) extent.  I further do not believe the term "exist" can apply to other universes.  

Comment by Dagon on Go flash blinking lights at printed text right now · 2023-11-06T17:51:54.878Z · LW · GW

I see.  So the experiment is to see if you can find a frequency that is comfortable/helpful, and then figure out if it's likely to match your alpha waves?  From what I can tell, alpha waves are typically between 8 and 12 Hz, but I don't know if it varies over time (nor how quickly) for individuals.  
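
(For reference: the flash period is just T = 1/f, so 8 to 12 Hz means one flash every 83 to 125 ms.)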

Unfortunately, the linked paper notes that the pulse is timed with the "trough" of the alpha wave, which is unlikely to be found with at-home experimentation.  That implies that it'd need to use an EEG to synchronize, rather than ANY fixed frequency.

Comment by Dagon on Go flash blinking lights at printed text right now · 2023-11-06T16:39:58.285Z · LW · GW

Do you have a hypothesis you're collecting data for, or is this just fun for you?  I'm a little put off by the imperative in the title, without justification in the post.

Comment by Dagon on papetoast's Shortforms · 2023-11-06T16:37:44.274Z · LW · GW

For some screen size/shape, for some browser positioning, for some readers, this is probably true. It's fucking stupid to believe that's anywhere close to a majority.   If that's YOUR reading area, why not just make your browser that size? 

It should be pretty easy to write a tampermonkey or browser extension to make it work that way.  Now that you point it out, I'm kind of surprised this doesn't seem to exist.  

Comment by Dagon on Andreas Chrysopoulos' Shortform · 2023-11-05T15:58:43.966Z · LW · GW

The VAST majority of matter and energy in the universe is in the non-purpose category - it often has activity and reaction, and effects over time, but it doesn't strategically change its mechanisms in order to achieve something, it just executes.

Humans (and arguably other animals and groups distinct from individuals) may have purpose, and may infer purpose in things that don't have it intrinsically.  Even then, there are usually multiple simultaneous purposes (and non-purpose mechanisms) that interact, sometimes amplifying, sometimes dampening one another.

Comment by Dagon on Snapshot of narratives and frames against regulating AI · 2023-11-03T16:28:55.225Z · LW · GW

I think you're using a different sense of the word "possible".  In a simplified physics model, where mass and energy are easily transformed as needed, you can just wave your hands and say "there's plenty of mass to use for computronium".  That's not the same as saying "there is an achievable causal path from what we experience now to the world described".

Comment by Dagon on Saying the quiet part out loud: trading off x-risk for personal immortality · 2023-11-02T21:56:48.266Z · LW · GW

It's also assuming:

  1. We know roughly how to achieve immortality
  2. We can do that exactly in the window of "the last possible moment" of AGI.
  3. Efforts between immortality and AGI are fungible and exclusive, or at least related in some way.
  4. Ok, yeah - we have to succeed on BOTH alignment and immortality to keep any of us from dying. 

3 and 4 are, I think, the point of the post.  To the extent that we work on immortality rather than alignment, we narrow the window of #2, and risk getting neither.

Comment by Dagon on Saying the quiet part out loud: trading off x-risk for personal immortality · 2023-11-02T19:19:40.899Z · LW · GW

Honestly, I haven’t seen much about individual biological immortality, or even significant life-extension, in the last few years.

I suspect progress on computational consciousness-like mechanisms has fully eclipsed the idea that biological brains in the current iteration are the way of the future. And there’s been roughly no progress on upload, so the topic of immortality for currently-existing humans has mostly fallen away.

Also, if/when AI is vastly more effective than biological intelligence, it takes a lot of the ego-drive away for the losers.