Posts

METR is hiring ML Research Engineers and Scientists 2024-06-05T21:27:39.276Z
Debate series: should we push for a pause on the development of AI? 2023-09-08T16:29:51.367Z
Gender Vectors in ROME’s Latent Space 2023-05-21T18:46:54.161Z
How much do markets value Open AI? 2023-05-14T19:28:38.686Z
Can we evaluate the "tool versus agent" AGI prediction? 2023-04-08T18:40:29.316Z
Entrepreneurship ETG Might Be Better Than 80k Thought 2022-12-29T17:51:13.412Z
YCombinator fraud rates 2022-12-25T19:21:52.829Z
Discovering Latent Knowledge in Language Models Without Supervision 2022-12-14T12:32:56.469Z
[Job] Project Manager: Community Health (CEA) 2022-09-10T18:40:33.889Z
Dating profiles from first principles: heterosexual male profile design 2021-10-25T03:23:09.292Z
Cognitive Impacts of Cocaine Use 2021-07-31T21:53:05.575Z
TikTok Recommendation Algorithm Optimizations 2020-03-27T01:03:58.721Z
Why are people so bad at dating? 2019-10-28T14:32:47.410Z
Problems and Solutions in Infinite Ethics 2015-01-04T14:06:06.292Z
Political Skills which Increase Income 2014-03-02T17:56:32.568Z

Comments

Comment by Xodarap on MichaelDickens's Shortform · 2024-10-03T15:20:18.512Z · LW · GW

Thanks!

Comment by Xodarap on MichaelDickens's Shortform · 2024-10-03T03:32:59.829Z · LW · GW

You said

If you "withdraw from a cause area" you would expect that if you have an organization that does good work in multiple cause areas, then you would expect you would still fund the organization for work in cause areas that funding wasn't withdrawn from. However, what actually happened is that Open Phil blacklisted a number of ill-defined broad associations and affiliations, where if you are associated with a certain set of ideas, or identities or causes, then no matter how cost-effective your other work is, you cannot get funding from OP

I'm wondering if you have a list of organizations whose other work Open Phil would have funded, but where, because it withdrew from funding part of the organization, it decided to withdraw entirely.

This feels very importantly different from Good Ventures choosing not to fund certain cause areas (and I think you agree, which is why you put that footnote).

Comment by Xodarap on MichaelDickens's Shortform · 2024-10-02T18:32:51.026Z · LW · GW

> what actually happened is that Open Phil blacklisted a number of ill-defined broad associations and affiliations

Is there a list of these somewhere, or details on what happened?

Comment by Xodarap on Quick evidence review of bulking & cutting · 2024-04-10T04:06:17.100Z · LW · GW

Thanks for writing this up! I wonder how feasible it is to just do a cycle of bulking and cutting and then do one of body recomposition and compare the results. I expect that the results will be too close to tell a difference, which I guess just means that you should do whichever is easier.

Comment by Xodarap on Dating Roundup #2: If At First You Don’t Succeed · 2024-01-14T02:22:43.736Z · LW · GW

I think it would be helpful for calibrating others, though obviously it's fairly personal.

Comment by Xodarap on Dating Roundup #2: If At First You Don’t Succeed · 2024-01-12T04:13:05.429Z · LW · GW

Possibly too sensitive, but could you share how the photos performed on Photofeeler? In particular, what percentile attractiveness?

Comment by Xodarap on Rant on Problem Factorization for Alignment · 2024-01-03T19:04:09.336Z · LW · GW

Sure, I think everyone agrees that marginal returns to labor diminish with the number of employees. John's claim though was that returns are non-positive, and that seems empirically false.

Comment by Xodarap on Book Review: Going Infinite · 2023-12-03T00:13:30.352Z · LW · GW

We have Wildeford's Third Law: "Most >10 year forecasts are technically also AI forecasts".

We need a law like "Most statements about the value of EA are technically also AI forecasts".

Comment by Xodarap on Book Review: Going Infinite · 2023-12-02T17:36:16.208Z · LW · GW

Yep, that's fair; there is some subjectivity here. I was hoping that the charges from SDNY would have a specific amount that Sam was alleged to have defrauded, but they don't seem to.

Regarding the $4B missing: adding in Anthropic gets another $4B on the EA side of the ledger, and Founders Pledge another $1B. The value produced by Anthropic is questionable, and maybe negative of course, but I think by the strict definition of "donated or built in terms of successful companies" EA comes out ahead.

(And OpenAI gets another $80B, so if you count that then I think even the most aggressive definition of how much FTX defrauded is smaller. But obviously OAI's EA credentials are dubious.)

Comment by Xodarap on Book Review: Going Infinite · 2023-12-02T02:15:38.361Z · LW · GW

> EA has defrauded much more money than we've ever donated or built in terms of successful companies

FTX is missing $1.8B. Open Phil has donated $2.8B.

Comment by Xodarap on Integrity in AI Governance and Advocacy · 2023-11-09T00:18:59.200Z · LW · GW

I do think it's at the top of frauds in the last decade, though that's a narrower category.

Nikola went from a peak market cap of $66B to ~$1B today, vs. FTX which went from ~$32B to [some unknown but non-negative number].

I also think the Forex scandal counts as bigger (as one reference point, banks paid >$10B in fines), although I'm not exactly sure how one should define the "size" of fraud.[1] 

I wouldn't be surprised if there's some precise category in which FTX is the top, but my guess is that you have to define that category fairly precisely.

  1. ^

    Wikipedia says "the monetary losses caused by manipulation of the forex market were estimated to represent $11.5 billion per year for Britain’s 20.7 million pension holders alone" which, if anywhere close to true, would make this way bigger than FTX, but I think the methodology behind that number is just guessing that market manipulation made foreign-exchange x% less efficient, and then multiplying through by x%, which isn't a terrible methodology but also isn't super rigorous.

Comment by Xodarap on Book Review: Going Infinite · 2023-10-27T02:45:37.318Z · LW · GW

Oh yeah, just because it's a reference point doesn't mean that we should copy them.

Comment by Xodarap on Book Review: Going Infinite · 2023-10-27T02:37:30.456Z · LW · GW

> I think almost any large organization/company would have gone through a much more comprehensive fault-analysis and would have made many measurable improvements.

I claim YCombinator is a counter example.

(The existence of one counterexample obviously doesn't contradict the "almost any" claim.)

Comment by Xodarap on Book Review: Going Infinite · 2023-10-26T18:07:37.735Z · LW · GW

> IMO the EA community has had a reckoning, a post-mortem, an update, etc. far more than most social or political movements would (and do) in response to similar misbehavior from a prominent member

As a reference point: fraud seems fairly common in YCombinator-backed companies, but I can't find any sort of postmortem, even about major cases like uBiome, where the founders are literally fugitives from the FBI.

It seems like you could tell a fairly compelling story that YC pushing founders to pursue risky strategies and flout rules is upstream of this level of fraudulent behavior, though I haven't investigated closely.

My guess is that they just kind of accept that their advice to founders is going to backfire 1-2% of the time.

Comment by Xodarap on Gender Vectors in ROME’s Latent Space · 2023-05-22T19:29:39.333Z · LW · GW

Thanks for the questions!

  1. I feel a little confused about this myself; it's possible I'm doing something wrong. (The code I'm using is the `get_prob` function in the linked notebook; someone with LLM experience can probably say if that's broken without understanding the context.) My best guess is that human intuition has a hard time conceptualizing just how many possibilities exist; e.g. "Female", "female", "F", "f" etc. are all separate tokens which might realistically be continuations. (See the sketch after this list.)
  2. I haven't noticed anything; my guess is that there probably is some effect but it would be hard to predict ex ante. The weights used to look up information about "Ben" are also the weights used to look up information about "the Eiffel Tower", so messing with the former will also mess with the latter, though I don't really understand how.
    1. A thing I would really like to do here is better understand "superposition". A really cool finding would be something like: messing with the "gender" dimension of "Ben" is the same as messing with the "architected by" dimension of "the Eiffel Tower" because the model "repurposes" the gender dimension when talking about landmarks since landmarks don't have genders. But much more research would be required here to find something like that.
  3. My guess is that this is just randomness. It would be interesting to force the random seed to be the same before and after modification and see how much it actually changes.
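
Here's a minimal sketch of the kind of "summing over surface forms" I have in mind; the model name and prompt are placeholders, and the notebook's actual `get_prob` may work differently:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the post used a ROME-edited GPT-style model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def prob_of_any_variant(prompt: str, variants: list[str]) -> float:
    """Sum the next-token probability assigned to the first token of each surface form."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # logits for the next-token position
    probs = torch.softmax(logits, dim=-1)
    first_ids = {tokenizer.encode(v)[0] for v in variants}  # dedupe shared first tokens
    return sum(probs[i].item() for i in first_ids)

# Any single variant understates the total mass on "female-ish" continuations:
print(prob_of_any_variant("Ben's gender is", [" female", " Female", " F", " f"]))
```
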
Comment by Xodarap on How much do markets value Open AI? · 2023-05-15T13:24:52.585Z · LW · GW

Thanks! I mentioned Anthropic in the post, but would similarly find it interesting if someone did a write-up about Cohere. It could be that OAI is not representative for reasons I don't understand.

Comment by Xodarap on How much do markets value Open AI? · 2023-05-14T20:35:57.392Z · LW · GW

  1. Yep, revenue multiples are a heuristic for expectations of future growth, which is what I care about.
  2. This is true, but I'm not aware of any investments on $0 revenue at the $10B scale. Would love to hear of counterexamples if you know of any![1]
  1. ^

    Instagram is the closest I can think of, but that was ~20x smaller and an acquisition, not an investment.

Comment by Xodarap on Discussion with Nate Soares on a key alignment difficulty · 2023-03-26T20:38:30.100Z · LW · GW

I tried playing the game Nate suggested with myself. I think it updated me a bit more towards Holden's view, though I'm very confident that if I did it with someone who was more expert than I am, both the attacker and the defender would be more competent, and possibly the attacker would win.

Attacker: Okay, let's start with a classic: the alignment strategy of "kill all the capabilities researchers so networks are easier to interpret."
Defender: Arguments that this is a bad idea will obviously be in any training set that a human-level AI researcher would be trained on. E.g. posts from this site.

Attacker: sure. But those arguments don't address what an AI would be considering after many cycles of reflection. For example: it might observe that Alice endorses things like war where people are killed "for the greater good", and a consistent extrapolation from these beliefs is that murder is acceptable.
Defender: still pretty sure that the training corpus would include stuff about the state having a monopoly on violence, and any faithful attempt to replicate Alice would just clearly not have her murdering people? Like a next token prediction that has her writing a serial killer manifesto would get a ton of loss.

Attacker: it's true that you probably wouldn't predict that Alice would actively murder people, but you would predict that she would be okay with allowing people to die through her own inaction (standard "child drowning in a pond" thought experiment stuff). And just like genocides are bureaucratized such that no single individual feels responsible, the AI might come up with some system which doesn't actually leave Alice feeling responsible for the capabilities researchers dying.
(Meta point: when does something stop being POUDA? Like what if Alice's CEV actually is to do something wild (in the opinion of current-Alice)? I think for the sake of this exercise we should not assume that Alice actually would want to do something wild if she knew/reflected more, but this might be ignoring an important threat vector?)
Defender: I'm not sure exactly what this would look like, but I'm imagining something like "build a biological science company that has an opaque bureaucracy such that each person pursuing the good somehow results in the collective creating a bioweapon that kills capabilities researchers", and this just seems really outside what you would expect Alice to do? I concede that there might not be anything in the training set which specifically prohibits this per se, but it just seems like a wild departure from Alice's usual work of interpreting neurons.

(Meta: is this Nate's point? Iterating reflection will inevitably take us wildly outside the training distribution so our models of what an AI attempting to replicate Alice would do are wildly off? Or is this a point for Holden: the only way we can get POUDA is by doing something that seems really implausible?)

Comment by Xodarap on Recursive Middle Manager Hell · 2023-01-19T20:59:34.416Z · LW · GW

Yeah that's correct on both counts (that does seem like an important distinction, and neither really match my experience, though the former is more similar).

Comment by Xodarap on Recursive Middle Manager Hell · 2023-01-15T19:38:33.764Z · LW · GW

I spent about a decade at a company that grew from 3,000 to 10,000 people; I would guess the layers of management were roughly the logarithm in base 7 of the number of people. Manager selection was honestly kind of a disorganized process, but it was basically: impress your direct manager enough that they suggest you for management, then impress your division manager enough that they sign off on this suggestion.
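
For concreteness, a quick check of that rule of thumb at the two headcounts above (it's a rough heuristic I'm describing, not an exact formula):

```python
# Management layers ≈ log base 7 of headcount (the rough rule of thumb above)
import math

for headcount in (3_000, 10_000):
    print(headcount, round(math.log(headcount, 7), 1))
# 3000 -> ~4.1, 10000 -> ~4.7, i.e. roughly four to five layers of management
```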

I'm currently somewhere much smaller; I report to the top layer and have two layers below me. The process is roughly the same.

I realize I should have said that I found your Spotify example the most compelling: the problems I see/saw are less "manager screws over the business to personally advance" and more "helping the business would require the manager to take a personal hit, and they didn't want to do that."

Comment by Xodarap on Recursive Middle Manager Hell · 2023-01-12T16:19:14.362Z · LW · GW

For what it's worth, I think a naïve reading of this post would imply that moral mazes are more common than my experience indicates.

I've been in middle management at a few places, and in general people just do reasonable things because they are reasonable people, and they aren't ruthlessly optimizing enough to be super political even if that's the theoretical equilibrium of the game they are playing.[1]

 

  1. ^

    This obviously doesn't mean that they are ruthlessly optimizing for the company's true goals though. They are just kind of casually doing things they think are good for the business, because playing politics is too much work.

Comment by Xodarap on Let’s think about slowing down AI · 2022-12-29T20:54:59.967Z · LW · GW

FYI, I think your first skepticism was mentioned in the "safety from speed" section; she concludes that section:

> These [objections] all seem plausible. But also plausibly wrong. I don’t know of a decisive analysis of any of these considerations, and am not going to do one here. My impression is that they could basically all go either way.

She mentions your second skepticism near the top, but I don't see anywhere she directly addresses it.

Comment by Xodarap on Discovering Latent Knowledge in Language Models Without Supervision · 2022-12-15T20:53:23.898Z · LW · GW

One of the authors has now posted about this here.

Comment by Xodarap on Clarifying AI X-risk · 2022-11-06T18:23:11.597Z · LW · GW

> think about how humans most often deceive other humans: we do it mainly by deceiving ourselves... when that sort of deception happens, I wouldn't necessarily expect to be able to see deception in an AI's internal thoughts

The fact that humans will give different predictions when forced to make an explicit bet versus just casually talking seems to imply that it's theoretically possible to identify deception, even in cases of self-deception.

Comment by Xodarap on Counterarguments to the basic AI x-risk case · 2022-10-20T00:01:07.134Z · LW · GW

Basic question: why would the AI system optimize for X-ness?

I thought Katja's argument was something like:

  1. Suppose we train a system to generate (say) plans for increasing the profits of your paperclip factory, similar to how we train GANs to generate faces
  2. Then we would expect those paperclip-factory planners to have errors analogous to face-generator errors
  3. I.e. they will not be "eldritch"

The fact that you could repurpose the GAN discriminator in this terrifying way doesn't really seem relevant if no one is in practice doing that?

Comment by Xodarap on When does technical work to reduce AGI conflict make a difference?: Introduction · 2022-09-16T00:10:35.398Z · LW · GW

Thanks for sharing this! Could you make it an actual sequence? I think that would make navigation easier.

Comment by Xodarap on Rant on Problem Factorization for Alignment · 2022-09-10T04:12:43.563Z · LW · GW

Thanks! The point about existence proofs is helpful.

After thinking about this more, I'm just kind of confused about the prompt: Aren't big companies by definition working on problems that can be factored? Because if they weren't, why would they hire additional people?

Comment by Xodarap on Rant on Problem Factorization for Alignment · 2022-09-09T21:05:03.133Z · LW · GW

> Ask someone who’s worked in a non-academia cognitive job for a while (like e.g. a tech company), at a company with more than a dozen people, and they’ll be like “lolwut obviously humans don’t factorize problems well, have you ever seen an actual company?”. I’d love to test this theory, please give feedback in the comments about your own work experience and thoughts on problem factorization.

What does "well" mean here? Like what would change your mind about this?

I have the opposite intuition from you: it seems clear that groups of people can accomplish things that individuals cannot; while there are inefficiencies from bureaucracy, those inefficiencies are regularly outweighed by the benefit having more people provides; and that benefit frequently comes from factorization (i.e. different parts of the company working on different bits of the same thing).

As one example: YCombinator companies show a roughly linear correlation between exit value and number of employees, and basically all companies with $100MM+ exits have >100 employees. My impression is that there are very few companies with even $1MM revenue/employee (though I don't have a data set easily available).

Comment by Xodarap on Formalizing Objections against Surrogate Goals · 2022-08-29T18:48:37.243Z · LW · GW

I think humans actually do use SPI pretty frequently, if I understand correctly. Some examples:

  1. Pre-committing to resolving disputes through arbitration instead of the normal judiciary process. In theory at least, this results in an isomorphic "game", but with lower legal costs, thereby constituting a Pareto improvement.
  2. Ritualized aggression: Directly analogous to the Nerf gun example. E.g. a bear will "commit" to giving up its territory to another bear who can roar louder, without the need for an actual fight, which would be costly for both parties.
    1. This example is maybe especially interesting because it implies that SPIs are (easily?) discoverable by "blind optimization" processes like evolution, and don't need some sort of intelligence or social context.
    2. And it also gives an example of your point about not needing to know that the other party has committed to the SPI: bears presumably don't have the ability to credibly commit, but the SPI still (usually) goes through.

Comment by Xodarap on CLR's recent work on multi-agent systems · 2022-06-24T01:40:48.185Z · LW · GW

Thanks for sharing this update. Possibly a stupid question: Do you have thoughts on whether cooperative inverse reinforcement learning could help address some of the concerns with identifiability?

One set of problems comes from agents intentionally misrepresenting their preferences. But it seems like at least some problems come from agents failing to successfully communicate their preferences, and this seems very analogous to the problem CIRL is attempting to address.

Comment by Xodarap on Becoming a Staff Engineer · 2022-04-28T01:13:11.385Z · LW · GW

> Start-ups want engineers who are overpowered for the immediate problem because they anticipate scaling, and decisions made now will affect their ability to do that later.

I'm sure this is true of some startups, but it was not true of mine, nor of the ones I was thinking of when I wrote that.

Senior engineers are like… Really good engineers? Not sure how to describe it in a non-tautological way. I somewhat regularly see a senior engineer solve in an afternoon a problem which a junior engineer has struggled with for weeks.

Being able to move that quickly is extremely valuable for startups.

(I agree that many staff engineers are not "really good engineers" in the way I am describing, and are therefore probably not of interest to many EA organizations.)

Comment by Xodarap on Becoming a Staff Engineer · 2022-04-20T21:29:03.755Z · LW · GW

Startups sometimes have founders or early employees who are staff (or higher) engineers.

  • Sometimes this goes terribly: the staff engineer is used to working in a giant bureaucracy, so instead of writing code they organize a long series of meetings to produce a UML diagram or something, and the company fails.
  • Sometimes this goes amazingly: the staff engineer can fix bugs 10x faster than the competitors’ junior engineers while simultaneously having the soft skills to talk to customers, interview users, etc.

If you are in the former category, EA organizations mostly don't need you. If you are in the latter category though, EA organizations desperately need you, for the same reasons startups want to hire you, even though you also have skills that we won’t be able to use.

If someone is considering applying to CEA, I’m happy to talk with them about whether it would be a good fit for both sides, including which of their skills would be useful, which new ones they would have to learn, and how they would feel about that. Some people really like being able to meet a promising EA at a party, hear about one of their issues, rapidly bang out a PR to fix it, and then immediately see that EA become more impactful (and know that without them it wouldn’t have happened). But other people really like the “make a UML diagram” style of work, and don’t enjoy working here. It’s hard for me to guess or to give a generic answer that will fit everyone.

Comment by Xodarap on Convince me that humanity *isn’t* doomed by AGI · 2022-04-20T00:59:10.230Z · LW · GW

EAG London last weekend contained a session with Rohin Shah, Buck Shlegeris and Beth Barnes on the question of how concerned we should be about AGI. They seemed to put roughly 10-30% chance on human extinction from AGI.

Comment by Xodarap on Convince me that humanity *isn’t* doomed by AGI · 2022-04-20T00:58:52.444Z · LW · GW
Comment by Xodarap on [RETRACTED] It's time for EA leadership to pull the short-timelines fire alarm. · 2022-04-09T00:18:39.442Z · LW · GW

Thanks, yeah, now that I look closer, Metaculus shows a 25% cumulative probability before April 2029, which is not too far off from the OP's 30% claim.

Comment by Xodarap on [RETRACTED] It's time for EA leadership to pull the short-timelines fire alarm. · 2022-04-08T17:28:07.764Z · LW · GW

Note that Metaculus predictions don't seem to have been meaningfully changed in the past few weeks, despite these announcements. Are there other forecasts which could be referenced?

Comment by Xodarap on Dating profiles from first principles: heterosexual male profile design · 2022-04-06T17:18:46.153Z · LW · GW

Update: I improved the profile of someone who reached out to me from this article. They went from zero matches in a year to ~2/week.

I think this is roughly the effect size one should expect from following this advice: it's not going to take you from the 5th percentile to the 95th, but you can go from the 20th to the 70th or something.

Comment by Xodarap on Why Agent Foundations? An Overly Abstract Explanation · 2022-04-02T17:45:32.036Z · LW · GW

Why is the Alignment Researcher different from a normal AI researcher?

E.g. Markov decision processes are often conceptualized as "agents" which take "actions" and receive "rewards" etc., and I think none of those terms are "True Names".

Despite this, when researchers look into ways to give MDPs some other sort of capability or guarantee, they don't really seem to prioritize finding True Names. In your dialogue, the AI researcher seems perfectly fine accepting the philosopher's vaguely defined terms.

What is it about alignment which makes finding True Names such an important strategy, when finding True Names doesn't seem to be that important for e.g. learning from biased data sets (or any of the other million things AI researchers try to get MDPs to do)?

Comment by Xodarap on Do any AI alignment orgs hire remotely? · 2022-04-01T16:30:31.123Z · LW · GW

Sure, feel free to DM me.

Comment by Xodarap on Do any AI alignment orgs hire remotely? · 2022-03-14T23:27:05.425Z · LW · GW

We (the Center for Effective Altruism) are hiring Full-Stack Engineers. We are a remote first team, and work on tools which (we hope) better enable others to work on AI alignment, including collaborating with the LessWrong team on the platform you used to ask this question :)

Comment by Xodarap on Prizes for ELK proposals · 2022-01-27T21:57:26.711Z · LW · GW

A small suggestion: the counterexample to "penalize downstream", as I understand it, requires there to be tampering in the training data set. It seems conceptually cleaner to me if we can assume the training data set has not been tampered with (e.g. because if alignment only required there to be no tampering in the training data, that would be much easier).

The following counterexample does not require tampering in the training data:

  1. The predictor has nodes s_1, ..., s_T, where s_t indicates whether the diamond was stolen at time t.
  2. It also has a node s_ever indicating whether the diamond was ever stolen. The direct translator would look at this node.
  3. However, it happens that in the training data s_ever = s_T, i.e. we only ever needed to look at the last frame of the video.
  4. Therefore, the human interpreter can look only at s_T and get the same loss as the direct translator, despite being upstream of it.

(I'm interested in pursuing approaches that assume training data has not been tampered with. Maybe nobody but me cares about this, but I'm posting in case somebody else does. I may be misunderstanding something here; corrections are appreciated.)

Comment by Xodarap on Prizes for ELK proposals · 2022-01-27T13:14:58.816Z · LW · GW

Thanks for sharing your idea!

Comment by Xodarap on Prizes for ELK proposals · 2022-01-26T15:19:57.547Z · LW · GW

I'm not an ARC member, but I think assuming that the chip is impossible to tamper with is assuming the conclusion.

The task is to train a reporter which accurately reports the presence of the diamond, even if we are unable to tell whether tampering has occurred (e.g. because the AI understands some esoteric physics principle which lets it tamper with the chip in a way we don't understand). See the section on page 6 starting with "You might try to address this possibility by installing more cameras and sensors..."

Comment by Xodarap on Prizes for ELK proposals · 2022-01-25T20:36:16.472Z · LW · GW

Thanks!

Comment by Xodarap on Prizes for ELK proposals · 2022-01-25T16:22:38.007Z · LW · GW

I've been trying to understand this paragraph:

> That is, it looks plausible (though still <50%) that we could improve these regularizers enough that a typical “bad” reporter was a learned optimizer which used knowledge of direct translation, together with other tricks and strategies, in order to quickly answer questions. For example, this is the structure of the counterexample discussed in Section: upstream. This is a still a problem because e.g. the other heuristics would often “misfire” and lead to bad answers, but it is a promising starting point because in some sense it has forced some optimization process to figure out how to do direct translation.

This comment is half me summarizing my interpretation of it to help others, and half an implicit question for the ARC team about whether my interpretation is correct.

  1. What is a "bad" reporter? I think the term is used to refer to a reporter which is at least partially a human interpreter, or at least one which can't confidently be said to be a direct translator.
  2. What does it mean to "use knowledge of direct translation"? I think this means that, at least in some circumstances, it acts as a direct translator. I.e. there is some theoretical training data set + question such that the reporter will act as a direct translator. (Do we have to be able to prove this? Or do we just need to state it's likely?)
  3. How did the "upstream" counterexample "force some optimization process to figure out how to do direct translation"? I think this is saying that, if we were in a world where the direct translation nodes were upstream of the "human interpreter" nodes, the upstream regularizer would successfully force the reporter to do direct translation.
  4. Why is this "a promising starting point?" Maybe we could find some other way of forcing the direct translator nodes to be upstream of the human interpreter ones, and then that strategy combined with the upstream regulatizer would force a direct translator.

Corrections and feedback on this are extremely welcome!

Comment by Xodarap on Prizes for ELK proposals · 2022-01-24T23:51:00.593Z · LW · GW

In "Strategy: penalize computation time" you say:

> At first blush this is vulnerable to the same counterexample described in the last section [complexity]... But the situation is a little bit more complex...  the direct translator may be able to effectively “re-use” that inference rather than starting from scratch

It seems to me that this "counter-counterexample" also applies to complexity – if the translator is able to reuse computation from the predictor, wouldn't that reduce both the complexity and the time?

(You don't explicitly state that this "reuse" is only helpful for time, so maybe you agree it is also helpful for complexity – just trying to be sure I understand the argument.)

Comment by Xodarap on School Daze · 2022-01-17T21:31:50.683Z · LW · GW

I think children can be prosecuted in any state, but the prosecution of parents is more novel and was a minor controversy during the last presidential campaign.

Comment by Xodarap on Dating profiles from first principles: heterosexual male profile design · 2021-10-26T15:57:42.986Z · LW · GW

Thank you, I am familiar with that post. Their explanation is:

> Suppose you're a man who's really into someone. If you suspect other men are uninterested, it means less competition. You therefore have an added incentive to send a message. You might start thinking: maybe she's lonely. . . maybe she's just waiting to find a guy who appreciates her. . . at least I won't get lost in the crowd. . . maybe these small thoughts, plus the fact that you really think she's hot, prod you to action. You send her the perfectly crafted opening message.

On Tinder/Bumble/etc. it's just as costly to swipe left as it is to swipe right. I don't think people are as likely to factor in the likelihood of success when swiping as they are when deciding to invest the time to send a message. (One exception is super likes, but I'm skeptical that one should optimize their profile for super likes.)

Also, to the extent that one thinks this theory is valid, I don't think the resulting advice is to "play up unique ways you are attractive" – instead, it's to signal in your profile that you are attracted to people who are conventionally unattractive (e.g. "thicc thighs save lives") and still be conventionally attractive yourself. 

Comment by Xodarap on Dating profiles from first principles: heterosexual male profile design · 2021-10-26T05:26:49.281Z · LW · GW

I actually don't think these things are that unsustainable. Even if you don't know anything about fashion now, if you spend a couple hours watching YouTube videos to learn what makes clothes fit well, I genuinely think you could spend the rest of your life only buying shirts where the shoulder seam is at your shoulder etc.

I agree though that portraying yourself as a vastly different person than you actually are is dangerous.

Comment by Xodarap on Dating profiles from first principles: heterosexual male profile design · 2021-10-26T05:22:34.441Z · LW · GW

Thanks!

> I have no idea what to do with "If you are the sort of person who is always upbeat and positive, try to signal this through your expression, posture, and clothes."

That's fair. When I think of upbeat expression, posture, and clothes, I think of things like:

  1. Wacky expressions
  2. Silly costumes, often with friends
  3. Huge grins
  4. Arms and legs out, taking up lots of space

> My hunch says that signaling that you host social gatherings / plan parties is not much more beneficial than signaling that you attend them. If there is literature that suggests otherwise, I'm very interested in hearing about how and why.

My intuition is just that you need more social capital to host a party of N people than to attend a party of N people. It's easier to get an invite to a party than it is to convince people to attend your party, and also you only need to know one person to get a party invite whereas you need to (perhaps indirectly) know N people to organize a party of N people.

> I'd guess that tact (or perhaps whatever the opposite of desperation is? confidence? nonchalance?) probably also has an effect on female evaluations of male profiles - perhaps this is through a perceived difference in social ability?

I agree that tact is a key difficulty. I warned against including shirtless pictures largely for this reason; I think it's a fair point that wealth and social capital can also be displayed in a tactless way.