Posts

RVA Meetup 2023-04-08T19:27:39.286Z
Use the Nato Alphabet 2023-03-08T19:14:48.008Z
How "grifty" is the Foresight Institute? Are they making button soup? 2023-03-07T19:43:32.750Z
Burning Uptime: When your Sandbox of Empathy is Leaky and also an Hourglass 2023-01-13T05:18:09.898Z
November First Saturday Richmond, VA Rationalist Meetup 2022-10-27T01:17:51.446Z
Make your own button / pin! Richmond Rationalists Meetup #0 Saturday Sept 03 2022-08-27T04:41:52.710Z
Meetup: West End Trolleys Saturday October 1 Richmond, VA LW / ACX Meetup 2022-08-23T01:57:50.145Z
[Alignment] Is there a census on who's working on what? 2022-05-23T15:33:28.267Z
Xida Ren's Shortform 2022-03-28T13:55:41.299Z

Comments

Comment by Cedar (xida-ren) on Collating widely available time/money trades · 2023-03-11T14:38:27.120Z · LW · GW

Got it. WORTH.

Not wearing glasses has huge social-signaling benefits: people somehow treat me nicer and listen to me more. As usual, this is based on my perceptions and may be a placebo effect.

If you roughly break even per hour, you should definitely get it.

Comment by Cedar (xida-ren) on Collating widely available time/money trades · 2023-03-11T14:37:06.984Z · LW · GW

You can get cold-water, pressure-driven bidets for like $60:

https://www.amazon.com/Veken-Ultra-Slim-Non-Electric-Adjustable-Attachment/dp/B082HFS8KT/ref=sr_1_6?keywords=bidet&qid=1678545307&sr=8-6

Actually, never mind: this one is only $29.

No clue about the quality, though. It may be better to go for something around $20.

Installing mine cost me around 2 hours, and you could use it for a year.

That means the cost per hour comes down to something like $7.50, which is way below minimum wage!

Comment by Cedar (xida-ren) on Use the Nato Alphabet · 2023-03-09T00:48:16.716Z · LW · GW

Checked out Talon. It looks amazing for people with disabilities.

Now, as a person with no obvious disabilities, I wonder if it's still worth learning for:

  1. Just in case.
  2. Maybe eye tracking etc. would make it easier to do things on my computer vs. mouse & keyboard.

Any opinions?

Comment by Cedar (xida-ren) on Use the Nato Alphabet · 2023-03-09T00:45:59.277Z · LW · GW

Thanks for the catch. I used ChatGPT to generate the HTML, and I think when I asked it to add the CSS, it ran out of output length before it could give me everything.

Comment by Cedar (xida-ren) on Use the Nato Alphabet · 2023-03-09T00:45:23.447Z · LW · GW

A somewhat far-fetched guess:

internet -> everybody does astrology now -> Zebra gets confused with Libra -> replacement with Zulu

Comment by Cedar (xida-ren) on How "grifty" is the Foresight Institute? Are they making button soup? · 2023-03-08T19:11:57.737Z · LW · GW

Oh! This is really good to know. Thank you so much for speaking up!

Comment by Cedar (xida-ren) on How "grifty" is the Foresight Institute? Are they making button soup? · 2023-03-07T19:47:06.263Z · LW · GW

Friend of mine: "people listed seem cool; prolly easy to meet without spending money tho"

Comment by Cedar (xida-ren) on Burning Uptime: When your Sandbox of Empathy is Leaky and also an Hourglass · 2023-01-14T00:24:03.044Z · LW · GW

Whoops, time to edit that : D

I've got some "my experiences are universal" going on in here.

Comment by Cedar (xida-ren) on Notes on writing · 2023-01-10T02:23:38.805Z · LW · GW

Don’t think about big words or small words—think about which particular word you mean.

I think this is good advice for most people who are used to being forced to use big words by school systems, but I personally follow a different version of this.

I see compression as an important feature of communication, and there are always tradeoffs to be made between

  1. Making my sentence short
  2. Conveying exactly what I want to say.

And sometimes I settle for transferring a "good enough" version of my idea, because communicating all the hairy details takes too much time / energy / social credit. I'm always scared of taking up too much of people's attention or overrunning their working memory.

Comment by Cedar (xida-ren) on Let’s think about slowing down AI · 2022-12-29T17:49:34.871Z · LW · GW
  • AI is pretty safe: unaligned AGI has a mere 7% chance of causing doom, plus a further 7% chance of causing short term lock-in of something mediocre
  • Your opponent risks bad lock-in: If there’s a ‘lock-in’ of something mediocre, your opponent has a 5% chance of locking in something actively terrible, whereas you’ll always pick good mediocre lock-in world (and mediocre lock-ins are either 5% as good as utopia, -5% as good)
  • Your opponent risks messing up utopia: In the event of aligned AGI, you will reliably achieve the best outcome, whereas your opponent has a 5% chance of ending up in a ‘mediocre bad’ scenario then too.
  • Safety investment obliterates your chance of getting to AGI first: moving from no safety at all to full safety means you go from a 50% chance of being first to a 0% chance
  • Your opponent is racing: Your opponent is investing everything in capabilities and nothing in safety
  • Safety work helps others at a steep discount:  your safety work contributes 50% to the other player’s safety 

This is more of a personal note / call for somebody to examine my thinking processes, but I've been thinking really hard about putting hardware security methods to work. Specifically, spreading knowledge far and wide about how to:

  1. allow hardware designers / manufacturers to have easy, total control over who uses their product, for what, and for how much, throughout the supply chain
  2. make AI-related data (including e.g. model weights and architecture) easy to secure and difficult to steal.

This sounds like it would improve every aspect of the racey-environment conditions, except:

Your opponent is racing: Your opponent is investing everything in capabilities and nothing in safety

The exact effect of this is unclear. On the one hand, if racey, zero-sum thinking actors learn that you're trying to "restrict" or "control" AI hardware supply, they'll totally amp up their efforts. On the other hand, you've also given them one more thing to worry about (their hardware supply).

I would love to get some frames on how to think about this.

Comment by Cedar (xida-ren) on Calibration Trivia · 2022-10-07T13:20:47.099Z · LW · GW
Comment by Cedar (xida-ren) on Contemporary Linguistics: A Perspective on Research and Information Sharing · 2022-09-28T23:07:38.273Z · LW · GW

This is great : D

Now you be lookin nice n proppa!

Comment by Cedar (xida-ren) on Contemporary Linguistics: A Perspective on Research and Information Sharing · 2022-09-27T21:54:21.914Z · LW · GW

Great! Now, LessWrong uses Markdown syntax, which means you can do

# Section

## SubSection

### Sub Sub Section

Consider using this in your LessWrong and Substack posts. It would greatly help readability and make it easier for your readers to engage with your work.

Comment by Cedar (xida-ren) on Contemporary Linguistics: A Perspective on Research and Information Sharing · 2022-09-23T00:21:19.487Z · LW · GW

Welcome to LessWrong!

This post is missing a TL;DR at the beginning, and section titles (section TL;DRs, basically) in between.

Comment by Cedar (xida-ren) on Supervise Process, not Outcomes · 2022-08-29T16:15:20.512Z · LW · GW

That sounds reasonable! Thanks for the explanation!

Comment by Cedar (xida-ren) on Supervise Process, not Outcomes · 2022-08-10T03:09:43.262Z · LW · GW

I'm new to alignment and I'm pretty clueless.

What's Ought's take on the "stop publishing all capabilities research" stance that e.g. Yudkowsky takes in this tweet? https://twitter.com/ESYudkowsky/status/1557184416786423809

Comment by Cedar (xida-ren) on Announcing Squiggle: Early Access · 2022-08-06T03:32:25.788Z · LW · GW

25, 50, 75:

I'm thinking that, just like it can infer whether a distribution is normal or lognormal, it could use whichever bell-curve-shaped distribution gives a sort of closest approximation.

More generally, it'd be awesome if there were a way to get the max-entropy distribution given a bunch of statistics, like quantiles, or n samples with a min and max.
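For the 25/50/75 case, here's a minimal sketch of the kind of inference I mean (Python with scipy, not Squiggle's actual API; the function name and the numbers are made up for illustration):

```python
# Back out lognormal parameters (mu, sigma) from the 25th/50th/75th percentiles.
# For a lognormal, log(X) ~ Normal(mu, sigma), so log-quantiles are mu + sigma * z_p.
import numpy as np
from scipy.stats import norm

def lognormal_from_quantiles(q25, q50, q75):
    mu = np.log(q50)       # the median of a lognormal is exp(mu)
    z75 = norm.ppf(0.75)   # standard normal 75th-percentile point, ~0.6745
    # Average the two one-sided sigma estimates (they differ when the
    # given quantiles aren't exactly lognormal-consistent).
    sigma = ((np.log(q75) - mu) + (mu - np.log(q25))) / (2 * z75)
    return mu, sigma

mu, sigma = lognormal_from_quantiles(10, 20, 45)
print(mu, sigma)  # parameters of the closest-fitting lognormal
```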

Comment by Cedar (xida-ren) on Announcing Squiggle: Early Access · 2022-08-05T19:29:24.376Z · LW · GW

For to(a,b), is there a way to specify other confidence intervals?

E.g., let's say I have the 25th, 50th, and 75th percentiles, but not the 5th and 95th?

Comment by Cedar (xida-ren) on I want to donate some money (not much, just what I can afford) to AGI Alignment research, to whatever organization has the best chance of making sure that AGI goes well and doesn't kill us all. What are my best options, where can I make the most difference per dollar? · 2022-08-03T06:38:20.906Z · LW · GW

Could you please link me when grant applications are open instead?

I'm considering AI safety as a career and would go for it if I could have some time when I'm not worried about rent.

Comment by Cedar (xida-ren) on (When) do high-dimensional spaces have linear paths down to local minima? · 2022-07-07T14:47:52.707Z · LW · GW

https://www.lesswrong.com/posts/Hna2P8gcTyRgNDYBY/race-along-rashomon-ridge

This feels related. It also talks about paths in hyperparameter space, but instead of linear paths, it talks about paths of optimal models between two optimal models.

Comment by Cedar (xida-ren) on [Alignment] Is there a census on who's working on what? · 2022-05-30T17:26:58.449Z · LW · GW

Thanks Thomas! I really appreciate this!

Comment by Cedar (xida-ren) on [Alignment] Is there a census on who's working on what? · 2022-05-24T18:55:26.641Z · LW · GW

Yooo! That sounds amazing. Please do let me know once that report is up!

Comment by Cedar (xida-ren) on The Last Paperclip · 2022-05-16T05:36:21.394Z · LW · GW

Beautiful! Thank you for the link and references. That makes a lot of sense!

Comment by Cedar (xida-ren) on The Last Paperclip · 2022-05-14T12:56:02.742Z · LW · GW

: (

That's not how the story went in my mind.

It felt obvious to me that once the probes started making each other into paperclips, some sort of natural selection would take over, where probes that prioritize survival over paperclip-making would murder the rest and survive. And there'd be a new cycle of life.

Comment by Cedar (xida-ren) on Narrative Syncing · 2022-05-02T18:54:12.933Z · LW · GW

I kinda like the term "cheerleading" instead of narrative syncing. Kinda like "I define a cheer and y'all follow me and do the cheer".

Shameless plug here: I'm trying to get into alignment but struggling to get motivated emotionally. If any of you wants to do mutual cheerleading over a Discord chat or something, please PM me. Also PM me if you just want to hang out and chat and figure out whether you want information or cheerleading; I'd be glad to help with that too by rubber-duckie-ing with you.

I'm doing this because I mistook my need for cheerleading for a need for information a while ago and had a very confusing 1-hour chat with a rationalist, where he kept trying to give me information and I kept trying to look for inclusion / acceptance signals. I learned a lot, both by listening to him and by reflecting upon that experience, but I fear I ended up wasting his time, and I kinda feel sad about that. This is why I'm putting my email here.

Comment by Cedar (xida-ren) on My Approach to Non-Literal Communication · 2022-05-01T12:53:54.727Z · LW · GW

I find it a very, very useful summary of simulacrum levels and some of the things that happen in immoral mazes, and using normal-people words has the added advantage that I can link-drop with less fear of exposing my rationalist cultism.

Comment by Cedar (xida-ren) on Virtue signaling is sometimes the best or the only metric we have · 2022-04-29T15:43:16.018Z · LW · GW

Just to make sure we're on the same page, we both seem to agree that

  1. Good virtue signaling is possible and should be attempted.
  2. Moloch worship is possible, should be avoided, and may be why many people hate / avoid / devalue virtue signaling.

And you seem to be saying: yes, Moloch is real, and yes, things can go very bad, but we should still try to build good norms and standards to accurately signal virtues.

Does my summary feel fair and correct?

Comment by Cedar (xida-ren) on Virtue signaling is sometimes the best or the only metric we have · 2022-04-28T23:00:33.338Z · LW · GW

I agree with you that virtue signaling is a proper and good thing with a real function. However, I think many people's objection to it is related to the higher-order effects of Goodharting and Moloch worship (escalating virtue signaling in a game that always ends in a race to the bottom toward things like public self-flagellation and self-immolation). I looked for this in the article but didn't find it, so I figured I'd mention it here.

Comment by Cedar (xida-ren) on 21 on 21 · 2022-04-27T14:11:16.271Z · LW · GW

I would like to add my own modification to 19 (valuing time). The actually important quantity here is something like the integral of mood over time. If you gain time but you aren't happy in it (content-and-curious-and-excited happy, not addiction-chasing happy), that time is worth very little. So if going to visit your friends takes a lot of time but makes you happy over the week, do it anyway.
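In symbols (my own formalization, not something from the post), what I'd maximize is roughly

$$U \approx \int_{t_0}^{t_1} \mathrm{mood}(t)\,dt$$

rather than the raw duration $t_1 - t_0$: hours gained at low mood contribute almost nothing.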

Comment by Cedar (xida-ren) on 21 on 21 · 2022-04-27T13:14:11.018Z · LW · GW

Got any tips on how to make good things addictive? I went through college paying all my attention to graduating (instead of playing the infinite game), and now my life is dominated by anxieties and addictions.

I really need tips on how to make my brain focus on solving hard problems again. I miss those days.

Comment by Cedar (xida-ren) on (When) do high-dimensional spaces have linear paths down to local minima? · 2022-04-22T17:54:26.027Z · LW · GW

I was trying to point attention to this fact:

A systematic way of finding the local minima of a loss function

But I thought about it a bit and realized that I had misunderstood the question, so I deleted my answer.

Comment by Cedar (xida-ren) on Only Asking Real Questions · 2022-04-19T17:31:45.648Z · LW · GW

Reflecting upon my experience, I have decided to wholeheartedly agree with you on being literal with young children. I think establishing a strong connection between language and objective reality is useful, even though language is sometimes used to create illusions and manipulate.

Comment by Cedar (xida-ren) on Only Asking Real Questions · 2022-04-19T17:29:23.321Z · LW · GW

I agree with you on the fair-and-respectful-systems grounding. My own experience with mazey-ness is that it causes me intense anxiety and distress, and I'd imagine experiencing that earlier in childhood would be a very bad thing.

As for mazey organizations being common... I feel like it really depends on where you are and where you're from. In the socioeconomic section of China where I come from, for example, asking fake questions, sacrificing your own standards to fit in, manipulation, and "measuring effort by who self-flagellates the hardest" are so common that they're just the assumed backdrop for every conversation about career and academic stuff. And I think it exerts a momentum that persists even after you leave. For example, I find myself attracted to vaguely shiny and prestigious things, and that accounts for my landing in a sell-side quant position (VERY mazey) and a PhD program (somewhat mazey).

But all in all, I agree that if surviving without bullshit is at all possible, developing a strong bullshit allergy is an awesome thing to do for your kids.

Comment by Cedar (xida-ren) on What are some beautiful, rationalist artworks? · 2022-04-18T21:16:11.179Z · LW · GW

My objection to makeup is that it's sort of a zero-sum game: if everybody spends an hour a day on makeup, the world isn't really a better place, since beauty is relative.

I agree that, in a society where everybody is judged by their made-up looks, innate beauty would matter less, and that's good.

However, people will start competing on effort spent on makeup, which feels like a really bad thing to me. Imagine everybody having to spend two hours on makeup every day before heading out. I think that's what some women already have to deal with in their workplaces, and I'd rather not have everybody's lives be like that.

Comment by Cedar (xida-ren) on Code Generation as an AI risk setting · 2022-04-18T15:31:05.814Z · LW · GW

Yes please!

Comment by Cedar (xida-ren) on A Quick Guide to Confronting Doom · 2022-04-17T02:08:14.966Z · LW · GW

Thank you so much! I didn't know 80k does advising! As for people with knowledge of the possibilities... my background and career path haven't given me much access to people who know, so I'll definitely try to get help at 80k.

Also worth bearing in mind as a general principle that if almost everything you try succeeds, you're not trying enough challenging things. Just make sure to take negative outcomes as useful information (often you can ask for specific feedback too). There's a psychological balance to be struck here, but trying at least a little more than you're comfortable with will generally expand your comfort zone and widen your options.

This was very encouraging! Thank you.

Comment by Cedar (xida-ren) on Only Asking Real Questions · 2022-04-17T01:58:28.469Z · LW · GW

My parents really cared about making things fair and setting expectations of fairness, and I really enjoyed it. It contributed to many of the qualities I care about (commitment to objective reality, attempts at consistency, integration of different parts of the self).

But sometimes I wonder if the opposite would have been better, especially for surviving in mazey environments à la Moral Mazes.

Comment by Cedar (xida-ren) on A Quick Guide to Confronting Doom · 2022-04-16T23:16:36.037Z · LW · GW

you should be able to get funding to cover most reasonable forms of upskilling and/or seeing-if-you-can-help trial period.

Hi Joe! I wonder if you have any pointers on how to get help? I would like to try to help while still being able to pay for rent and food. I think right now I may not be articulate enough to write grant proposals and get funding, so I could also use somebody to talk to, to figure out the highest-impact thing I could do.

I wonder if you'd be willing to chat, or know anybody who would be?

Comment by Cedar (xida-ren) on Clem's Memo · 2022-04-16T23:05:35.412Z · LW · GW

This is truly epic.

I don't know where this conditioned response comes from, but I tear up a little when I read things like this.

I'm scared that when the next threat to our civilization comes, the masses will be so enmeshed in bullshit, and the leaders so focused on producing it, that nobody rises to take action like this.

Comment by Cedar (xida-ren) on 2021 AI Alignment Literature Review and Charity Comparison · 2022-04-09T15:10:30.265Z · LW · GW

About Utility Maximization = Description Length Minimization:

I read it very recently, so I claim to remember what it felt like to not have read it. I clicked on it because it fit an intuition I had, but it was surprising how simple the answer was and how many other things it opened me up to. Like this:

Mutual Information Neural Estimation

Comment by Cedar (xida-ren) on Utility Maximization = Description Length Minimization · 2022-04-09T15:02:43.645Z · LW · GW

Related reading linking mutual information to the best possible classifier:

https://arxiv.org/pdf/1801.04062.pdf

This one talks about estimating KL divergence and mutual information using neural networks, but I'm specifically linking it to show y'all Theorem 1:

Theorem 1 (Donsker-Varadhan representation). The KL divergence admits the following dual representation:

$$D_{\mathrm{KL}}(P \,\|\, Q) = \sup_{T : \Omega \to \mathbb{R}} \; \mathbb{E}_P[T] - \log \mathbb{E}_Q\!\left[e^{T}\right]$$

This links the mutual information to the best possible regression. But I haven't figured out exactly how to parse / interpret this.
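To make the theorem concrete, here's a toy numpy sketch of the Donsker-Varadhan bound (my own illustration, not code from the paper), with a hand-picked critic where MINE would train a neural network:

```python
# Lower-bound the mutual information via the DV representation:
# MI(X;Y) = KL(joint || product of marginals) >= E_P[T] - log E_Q[e^T]
# for ANY critic T; MINE learns T with a neural net, here we hand-pick one.
import numpy as np

rng = np.random.default_rng(0)

# Correlated Gaussian pair whose true MI is known in closed form.
rho, n = 0.3, 200_000
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

def T(a, b):
    return rho * a * b  # hand-picked critic; any choice gives a valid lower bound

e_joint = np.mean(T(x, y))                    # E_P[T] under the joint
# Shuffling y breaks the dependence, approximating the product of marginals.
log_e_prod = np.log(np.mean(np.exp(T(x, rng.permutation(y)))))
dv_bound = e_joint - log_e_prod

true_mi = -0.5 * np.log(1 - rho**2)
print(f"DV lower bound: {dv_bound:.4f}, true MI: {true_mi:.4f}")
```

With this fixed critic the bound is loose; maximizing over T (which is what the sup does, and what MINE approximates with gradient ascent on a network) tightens it to the true KL.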

Comment by Cedar (xida-ren) on Framing Practicum: Stable Equilibrium · 2022-04-07T19:17:46.684Z · LW · GW

Ty! Ty!

I have an Audible credit lying around that I'm not using, so I'm going to get started on Immune.

Thanks for the detailed response and the caveats. I'll keep them in mind!

Comment by Cedar (xida-ren) on Framing Practicum: Stable Equilibrium · 2022-04-07T12:29:37.371Z · LW · GW

!!! This got me so excited.

This fills a gap I've always had in my knowledge of how the immune system works.

I love it, and I would love some sources I can read on how immunity works, described in mathy terms that I, as a non-biology person, can understand.

Comment by Cedar (xida-ren) on Framing Practicum: Stable Equilibrium · 2022-04-07T12:25:35.726Z · LW · GW

The most natural example of a stable equilibrium, to me (and to most people of the type I interact with, given my model of people), would be something related to things falling or temperatures equilibrating.

I feel like hair length / beard length falls squarely into the box people think inside when prompted to "think outside the box". And I'm really, really curious how that box outside the box works, and whether, over the course of collecting answers here, you notice answers that are super common.

Comment by Cedar (xida-ren) on Framing Practicum: Stable Equilibrium · 2022-04-07T12:17:07.505Z · LW · GW

I've been thinking about entropy a lot these days, not just in the usual physical-systems-with-atoms sense, but in the sense of "relating log probabilities to description length, coming up with a way of generating average-case short descriptions, then measuring the length of the description for the system and calling it entropy". So I might just run wild with it.

  1. Lies and manipulation in large organizations. This tends toward an [immoral maze] / [high simulacrum level] equilibrium where people don't talk about object-level things and mostly talk about social (un)realities. This is related to entropy and shortest-length descriptions because there are more ways to talk about social realities than there are ways to talk about object-level truths.

  2. Physical entropy. This equilibrates at a maximum that's related to how big / complex the system is (how long a string it would take to describe the least likely state in the system, using a description scheme that tends to produce shortest-length descriptions). Similar to the previous case, this has something to do with how there are many, many states with low likelihoods and long descriptions.

  3. Life (as in, cells that are lit ("alive") and move in complicated and interesting ways in Conway's Game of Life). These tend to get locked into repeating patterns or die out. I don't have a good intuitive explanation for this, just that there are a lot of ways things could die, and not many ways things could come alive.
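To pin down the "log probabilities = description lengths" framing behind all three examples, here's a toy numpy illustration of my own (nothing deeper than the textbook identity):

```python
# Entropy as expected description length: an optimal code assigns each
# state a codeword of about -log2(p) bits, and entropy is the average.
import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])  # state probabilities
code_lengths = -np.log2(p)                # ideal code lengths in bits: [1, 2, 3, 3]
entropy = np.sum(p * code_lengths)        # expected description length
print(entropy)                            # 1.75 bits
```

Rare states get long descriptions, so a system spread over many rare states has a high average description length, which is the sense in which the examples above equilibrate at high entropy.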

I would love it if somebody could critique my examples and help me get a deeper understanding of entropies and equilibria. I have a vague intuition that, in order to count states and assign probabilities, you really, really need to look at how state transitions work, and that entropy is somewhat related to some sort of "phase space volume" that isn't necessarily conserved, depending on how you're looking at a system. I feel like there's probably a LessWrong post I haven't seen somewhere that would fill in my gap here.

If there isn't, I would love to get some encouragement and write one.

Comment by Cedar (xida-ren) on What I Was Thinking About Before Alignment · 2022-04-07T04:36:42.579Z · LW · GW

Every single piece has been tweaked toward the same end goal

I feel like there'll be a better way to say this sentence once we figure out the answer to your first question,

How can we recognize adaptive systems in the wild? What universal behaviors indicate an adaptive optimizer?

It most definitely seems to make sense to say that systems can have goals, in an "if it looks like a duck, it makes sense to call it a duck" kind of way. But at the same time, every single piece hasn't been tweaked toward the same end goal as the system: each piece is tweaked toward its own survival, and that's only somewhat aligned with the system's survival.

Something I wish there were more LessWrong posts on (or at least that I wish I'd seen more LessWrong posts on) is alignment explored in the context of:

  1. Organisms and their smaller replicator components (organisms < cells; cells < transposons & endoviruses & organelles)
  2. Social thingies and their smaller sorta-replicator components (religions < religious ideas; companies < replicating management ideas)

If you have your favorite post that falls into the above genre or mentions something to that effect, please absolutely link me to it! I'd love to read more.

I'm only halfway through the A-Z sequence, so I'd also very much appreciate it if you could point to things in there to get me excited about progressing through it!

Comment by Cedar (xida-ren) on Prompt Your Brain · 2022-04-06T01:59:07.287Z · LW · GW

Of the three examples, only "rationality" came to mind : (

I'm ESL, so that sorta ruins certain idiom-based examples for me.

Comment by Cedar (xida-ren) on Prioritise Tasks by Rating not Sorting · 2022-04-06T01:57:54.726Z · LW · GW

I find that merge sort starts being useful with as few items as a deck of cards. But bucket sort is probably better anyway.
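For task prioritisation, the bucket version is basically "rate each task, then group by rating". A minimal sketch, with hypothetical tasks and a made-up 1-5 scale:

```python
# Bucket tasks by a coarse 1-5 rating instead of fully sorting them:
# one pass, no pairwise comparisons, and ties stay unordered (which is fine).
from collections import defaultdict

tasks = {"email accountant": 2, "write post": 5, "fix bike": 3, "buy milk": 2}

buckets = defaultdict(list)
for task, rating in tasks.items():
    buckets[rating].append(task)

for rating in sorted(buckets, reverse=True):  # highest-rated bucket first
    print(rating, buckets[rating])
```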

Comment by Cedar (xida-ren) on Saving Time · 2022-04-05T19:13:24.037Z · LW · GW

I love your analysis. What do you think about this summary: "The solution to this optimization problem is to be the kind of agent that chooses only one box."

Comment by Cedar (xida-ren) on The Jordan Peterson vs Sam Harris Debate · 2022-04-05T18:53:48.935Z · LW · GW

I love how <3 has the dual interpretation of a heart and a fart.

Is there a particular reason people prefer eating animals that eat low-quality food? Health, nutrition, etc.?