Sinclair Chen's Shortform

post by Sinclair Chen (sinclair-chen) · 2023-04-15T07:27:32.738Z · LW · GW · 53 comments

Comments sorted by top scores.

comment by Sinclair Chen (sinclair-chen) · 2023-07-22T21:08:28.786Z · LW(p) · GW(p)

whereas yudkowsky described rationality as a perfect dance, where your steps land exactly right, like a marching band, or like performing a light and airy piano piece perfectly after long hours of arduous concentration to iron out all the mistakes -

and some promote a more frivolous and fun dance, playing with ideas with humor and letting your mind stretch with imagination, letting your butterflies fly -

perhaps there is something to the synthesis, to a frenetic, awkward, and janky dance. knees scraping against the world you weren't ready for. excited, ebullient, manic discovery. the crazy thoughts at 2am. the gold in the garbage. climbing trees. It is not actually more "nice" than a cool logical thought, and it is not actually easier.

Do not be afraid of crushing your own butterflies; stronger ones will take their place!

comment by Sinclair Chen (sinclair-chen) · 2023-09-15T17:43:52.117Z · LW(p) · GW(p)

Sometimes when one of my LW comments gets a lot of upvotes, I feel like its karma is too high relative to how much I believe it, and I get an urge to "short" it

comment by Sinclair Chen (sinclair-chen) · 2023-08-08T23:09:04.031Z · LW(p) · GW(p)

why should I ever write longform with the aim of getting to the top of LW, as opposed to the top of Hacker News? similar audiences, but HN is bigger.

I don't cite. I don't research.
I have nothing to say about AI.

my friends are on here ... but that's outclassed by discord and twitter.
people here speak in my local dialect ... but that trains bad habits.
it helps LW itself ... but if I'm going for impact, surely large reach is the way to go?

I guess LW is uniquely about the meta stuff. Thoughts on how to think better. but I'm suspicious of meta.

Replies from: Raemon, Viliam
comment by Raemon · 2023-08-09T00:43:45.254Z · LW(p) · GW(p)

it helps LW itself ... but if I'm going for impact, surely large reach is the way to go?
 

I think the main question is "do you have things to say that build on the state of the art in a domain where LessWrong is on the cutting edge?". Large reach is good when you have a fairly simple thing you want to communicate to raise the societal baseline, but sometimes higher impact comes from pushing the state of the art forward, or from communicating something nuanced with a lot of dependencies.

You say you don't research, so maybe not, but just wanted to note you may want to consider variations on that theme.

If you're communicating something to broader society, fwiw I also don't know that there's actually much tradeoff between optimizing for Hacker News vs LessWrong. If you optimize directly for doing well on Hacker News, LW folk will probably still reasonably like it, even if it's not, like, peak karma or whatever (with some caveats around politically loaded stuff, which might play differently with different audiences).

comment by Viliam · 2023-08-09T07:38:49.349Z · LW(p) · GW(p)

You could publish on LW and submit the article to HN.

From my perspective (as a reader), the greatest difference between the websites is that clichéd stupid comments get downvoted on LW; on HN there is more of that kind of noise.

I think you are correct that getting to the top of HN would give you way more visibility. Though I have no idea about specific numbers.

(So, post on HN for exposure, and post on LW for rational discussion?)

comment by Sinclair Chen (sinclair-chen) · 2023-05-13T22:17:28.645Z · LW(p) · GW(p)

Ways I've followed math off a cliff

  1. In my first real job search, I gave myself about a month to find a job. Then, after a bit over a week, I just decided to go with the best offer I had. I justified this as the solution to the optimal stopping problem: reject everything for the first 1/e of your search, then take the first option better than everything seen so far. The job was fine, but the reasoning was wrong - the secretary problem assumes you know nothing except which candidates are better than which. Instead, I should've put a price on the features I wanted from a job (mentorship, the ability to wear a lot of hats and learn lots of things) and judged each offer against what I thought the distribution was.
    1. Notably: my next job didn't pay very well and I stayed there too long after I'd given up hope in the product. I think I was following a path of low resistance for both the first and second jobs.
  2. I was reading up on crypto a couple years ago and saw what I thought was an amazing opportunity: a rebasing DAO on the Avalanche chain, called Wonderland. I looked into the returns, guessed how long they would keep up, and put that probability and return rate into a Kelly calculator (see the sketch after this list).
    1. Someone at a LW meetup: "hmm I don't think Kelly is the right model for this..." but I didn't listen. 
    2. I did eventually cut my losses, after losing only ~$20,000 and reasoning further that the whitepaper didn't really make sense.
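A minimal sketch (with hypothetical numbers) of the two formulas above, mostly to show how little they ask of reality compared to what the real decisions involved:

```python
import math

def secretary_cutoff(n_options: int) -> int:
    """Classic 1/e rule: skip the first n/e options, then take the first one that
    beats everything seen so far. Only optimal when relative ranks are all you know."""
    return round(n_options / math.e)

def kelly_fraction(p_win: float, net_odds: float) -> float:
    """Kelly size for a binary bet: gain net_odds per $1 staked with probability
    p_win, lose the whole stake otherwise. f* = p - (1 - p) / b."""
    return p_win - (1 - p_win) / net_odds

print(secretary_cutoff(30))      # 11 -> skip the first 11 offers
# e.g. guessing an 80% chance a yield farm survives long enough to triple the stake:
print(kelly_fraction(0.8, 2.0))  # 0.7 -> "bet 70% of your bankroll"
# The formulas are fine; the models were not. Neither decision was actually a
# rank-only search or a clean binary bet with known odds.
```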
comment by Sinclair Chen (sinclair-chen) · 2023-05-17T16:19:57.755Z · LW(p) · GW(p)

I've been learning more from YouTube than from LessWrong or ACX these days

comment by Sinclair Chen (sinclair-chen) · 2023-04-25T17:31:16.344Z · LW(p) · GW(p)

Moderation is hard yo

Y'all had to read through pyramids of doom containing forum drama last week. Or maybe, like me, you found it too exhausting and tried to ignore it.

Yesterday Manifold made more than $30,000 off a single whale betting in a self-referential market designed like a dollar auction, and also designed to attract a lot of traders. It's the biggest market yet - 2,200 comments, mostly people chanting for their team. Incidentally, parts of the site went down for a bit.

I'm tired.

I'm no longer as worried about our Series A. But also ... this isn't what I wanted to build. The rest of the team kinda feels this way too. So does the whale in question.

Once upon a time, someone at a LW meetup asked me, half joking, to please never build a social media site.

Replies from: sinclair-chen
comment by Sinclair Chen (sinclair-chen) · 2023-04-26T04:58:28.898Z · LW(p) · GW(p)

Update: Monetary policy is hard yo

Isaac King ended up regretting his mana purchase a lot after it started to become clear that he was losing the whales-vs-minnows market. We ended up refunding most of his purchase (and deducting his mana accordingly, bringing his Manifold account deeply negative).

Effectively, we're bailing him out and eating the mana inflation :/

Aside: I'm somewhat glad my rant here hasn't gotten many upvotes/downvotes ... it probably means the meme war and the spammy "minnow" recruitment calls haven't reached here much, fortunately...

comment by Sinclair Chen (sinclair-chen) · 2023-04-15T07:27:33.096Z · LW(p) · GW(p)

Lesswrong is too long.
It's unpleasant for busy people and slow readers. Like me.

Please edit before sending. Please put the most important ideas first, so I can skim or tap out when the info gets marginal.  You may find you lose little from deleting the last few paragraphs.

Replies from: Viliam
comment by Viliam · 2023-04-15T20:58:04.754Z · LW(p) · GW(p)

Some articles have an abstract, but most don't. Perhaps the UI could nudge authors towards writing one.

Right now, when you click "New Post", there are two text fields: title and body. It could be three: title, abstract/summary, body. If you hate writing abstracts, you could leave it empty and it would work like before, but it would always remind you that the option exists.

Replies from: sinclair-chen
comment by Sinclair Chen (sinclair-chen) · 2023-04-16T02:09:10.947Z · LW(p) · GW(p)

That's a bad design imo. I'd bet it wouldn't get used much.

Generally, more options are bad. Every UI option incurs an attention cost on everyone who sees it, even if the feature is niche. (There's also dev upkeep cost.)

Fun fact: Manifold's longform feature, called Posts, used to require subtitles. The developer who made it has since left, but I think the subtitles were there to solve a similar problem - there was a page that showed all the posts with a little blurb about each one.
One user wrote "why is this a requirement" as their subtitle. So I took the subtitle out of the creation flow and instead used the first few lines of the body as the preview. I think not showing a preview at all, and just making the list more compact, would also be good, and I might do that instead.

An alternate UI suggestion: show on the edit screen what would appear "above the fold" in the hover-over preview. No, I think that's still too much, but it's still better than an optional abstract field.

comment by Sinclair Chen (sinclair-chen) · 2023-04-15T09:09:14.000Z · LW(p) · GW(p)

Mr Beast's philanthropic business model:

  1. Donate money
  2. Use the spectacle to record a banger video
  3. Make more money back on the views via ads and sponsors.
  4. Go back to step 1
  5. Exponential profit! (toy model below)
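A toy model of the loop, as a minimal sketch - all numbers are made up; the point is just that any revenue multiple above 1 makes the giving budget compound:

```python
# Hypothetical numbers: if each video's ad/sponsor revenue exceeds what it
# gave away, the philanthropy budget compounds geometrically.
budget = 100_000.0     # hypothetical starting bankroll
return_multiple = 1.3  # assumed: $1 donated comes back as $1.30 via ads/sponsors
for video in range(1, 6):
    budget *= return_multiple
    print(f"video {video}: ${budget:,.0f} available to donate")
```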

EA can learn from this.

  • Imagine a Warm Fuzzy economy where the fuzzies themselves are sold and the profit funds the good
    • The EA castle is good actually
    • When picking between two equal goods, pick the one that's more MEMEY
    • Content is outreach. Content is income
  • YouTube's feedback popup is like RLHF. Mr Beast says that because of this you can't game the algorithm; you just have to make good videos that people actually enjoy. YouTube has solved aligning human optimizers well enough that the top YouTuber got there by donating money and helping people.
  • Mr Beast plans to give all his money away. He eschews luxury in favor of degenerately investing it all.
    • Unlike FTX it does good now. With his own money.
  • Mr Beast is scope sensitive
  • Mr Beast has an international audience.
    • the circle of concern of his content has grown to reflect that
  • The philanthropy is mostly cash transfers. In briefcases.

Isn't this effective altruism? At least with a lowercase "e".
Why isn't EA talking about this?
Should I edit up a real post on this?

Replies from: Viliam
comment by Viliam · 2023-04-15T21:03:50.731Z · LW(p) · GW(p)

As far as I know, Mr Beast is the first who tried this model. I wonder what happens when people notice that the model works and he suddenly has a lot of competition. Maybe Moloch will find a way to eat all the money. Like, maybe the most meme-y donations will turn out to be quite useless, but if you keep optimizing for usefulness you will lose the audience.

Replies from: sinclair-chen
comment by Sinclair Chen (sinclair-chen) · 2023-04-16T02:51:00.019Z · LW(p) · GW(p)

Other creators are trying videos like "I paid for my friend's art supplies". I think Mr. Beast currently has a moat as the person who's willing to spend the most and therefore gets the craziest videos.
I hope doing-nice-things becomes a content genre that displaces some of the stunts, pranks, reaction videos, mean takedowns, etc. common on popular YouTube.

People put up with the ads because they want to watch the video. They want to watch the video because it makes them feel warm and happy. Therefore, it is the warm fuzzies, the feeling of caring, love, and empathy, that pays for the philanthropy in the first place. You help the people in the video, ever so slightly, just by wanting them to be helped, and thus clicking the video to see them be helped. It's altruism and wholesomeness at scale. If this eats 20% of the media economy, I will be glad.

The counterfactual is not that people donate more money or attention to a more effective cause.
The counterfactual is people watching other media, or watching less and doing something else. 

Yes, Make-a-Wish-tier ineffectiveness like "I helped this kid with cancer" might be just as compelling content as "1000 blind people see for the first time." I don't think the altruism genre has a high impact by default. Mr Beast just happens to be nerdy and entrepreneurial in effective ways, so far.

I do think there's an opportunity to shape this genre, contribute to it, and make it have a high impact.

comment by Sinclair Chen (sinclair-chen) · 2024-02-21T21:07:17.102Z · LW(p) · GW(p)

woah, I didn't even know the LW team was working on a "pre-2024 review" feature using prediction market probabilities integrated into the UI. super cool!

comment by Sinclair Chen (sinclair-chen) · 2023-07-12T10:49:00.223Z · LW(p) · GW(p)

I am a GOOD PERSON

(Not in the EA sense. Probably closer to an egoist or objectivist or something like that. I did try to be vegan once, and as a kid I used to dream of saving the world. I do try to orient my mostly fun-seeking life to produce big positive impact as a side effect, but mostly I try big hard things because it makes me feel good.)

Anyways this isn't about moral philosophy. It's about claiming that I'm NOT A BAD PERSON, GENERALLY. I definitely ABIDE BY BROADLY AGREED SOCIAL NORMS in the rationalist community. Well, except when I have good reason to think that the norms are wrong, but even then I usually follow society's norms unless I believe those are wrong, in which case I do what I BELIEVE IS RIGHT:

I hold these moral truths to be evident: that all people, though not created equal, deserve a baseline level of respect and agency, and that that bar should be held high, that I should largely listen to what people want, and not impose on them what they do not want, especially when they feel strongly. That I should say true things and not false things, such that truth is created in people's heads, and though allowances are made for humor and irony, that I speak and think in a way reflective of reality and live in a way true to what I believe. That I should maximize my fun, aliveness, pleasure, and all which my body and mind find meaningful, and avoid sorrow and pain except when that makes me stronger. and that I should likewise maximize the joy of those I love, for my friends and community, for their joy is part of my joy and their sorrow is part of my sorrow. and that I will behave with honor towards strangers, in hopes that they will behave with honor towards me, such that the greater society is not diminished but that these webs of trust grow stronger and uplift everyone within.

Though I may falter in being a fun person, or a nice person, I strive strongly to not falter in being a good person.

This post is brought to you by: someone speculating that I claim to be a bad person. You may very well dispute whether I live up to the moral code outlined above, or whether I live up to your moral code or one you think is better. I encourage you to point out my mistakes. However, I will never claim to be a person who no longer abides by the part of the moral law necessary to cooperate with the rationalist community and broader society. I acknowledge that any community I am a part of has the right to remove me if they no longer believe that I will abide by their stated and unstated rules. I do not fear this happening, yet I strive to prevent it from happening, because I follow my own code. I declare this not out of a timid sense of danger, but out of a sense of self-determination, to see if this community will allow me to grow the strengths that I share with it.

comment by Sinclair Chen (sinclair-chen) · 2023-09-14T05:57:13.015Z · LW(p) · GW(p)

EA Forum has better UI than LessWrong. a bunch of little things are just subtly better. maybe I should start contributing commits, hmm

Replies from: james-fisher
comment by Jim Fisher (james-fisher) · 2023-09-18T17:09:26.912Z · LW(p) · GW(p)

Please say more! I'd love to hear examples.

(And I'd love to hear a definition of "better". I saw you read my recent post [LW · GW]. My definition of "better" would be something like "more effective at building a Community of Inquiry". Not sure what your definition would be!)

Replies from: sinclair-chen
comment by Sinclair Chen (sinclair-chen) · 2023-09-20T00:10:05.417Z · LW(p) · GW(p)

As in, it visually looks better.
- The LW font still has bad kerning on Apple devices, which I think makes it harder to read and makes me tempted to add extra spaces at the end of sentences. (See this github issue)
- Agree/Disagree are reactions rather than upvotes. I do think the reversed order on EAF is weird though
- EAF's icon set is more modern

comment by Sinclair Chen (sinclair-chen) · 2023-09-05T03:44:54.000Z · LW(p) · GW(p)

The s-curve in prediction market calibrations

https://calibration.city/manifold displays an s-curve in the market calibration chart. This is for non-silly markets with >5 traders.

This is what it looks like at close:

[image: calibration chart at close]

This means the true probability is farther from 50% than the price makes it seem.

The calibration is better at 1/4 of the way to close:

[image: calibration chart at 1/4 of the way to close]

You might think it's because markets closing near 20% are weird in some way (e.g. unclear resolution), but markets 3/4 of the way to close also show the s-curve. Go see for yourself.

The CSPI tournament also exhibited this s-curve. That article points out that PredictIt does too!

I left a comment there on why I think this might be the case: the AMM is inefficient, traders have limited balances, and people chase higher investment returns elsewhere.

I'm curious what people think. And also curious whether Polymarket or the Indian prediction markets show a similar curve.

---

Should I make a real post for this? To-dos: look into Polymarket data (surely it exists? it's on a blockchain) and fit a curvy formula to this.
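One candidate functional form for that to-do, as a minimal sketch with made-up data points (extremization in logit space; the parameters a and b are my own naming):

```python
import numpy as np
from scipy.optimize import curve_fit

def logit(p):
    return np.log(p / (1 - p))

def extremized(price, a, b):
    # a > 1 pushes probabilities away from 50%, which is exactly the s-shape:
    # prices look less extreme than the true resolution frequencies.
    z = a * logit(price) + b
    return 1 / (1 + np.exp(-z))

# hypothetical (price, empirical resolution frequency) pairs read off a chart
prices = np.array([0.05, 0.20, 0.35, 0.50, 0.65, 0.80, 0.95])
freqs  = np.array([0.02, 0.12, 0.30, 0.50, 0.70, 0.88, 0.98])

(a, b), _ = curve_fit(extremized, prices, freqs, p0=[1.0, 0.0])
print(a, b)  # a fitted a > 1 would quantify the under-extremization
```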

comment by Sinclair Chen (sinclair-chen) · 2023-04-19T16:17:35.488Z · LW(p) · GW(p)

If I have a few distinct points in reply to a comment or post, should I put them in separate comments or a single one?

Replies from: r, Dagon
comment by RomanHauksson (r) · 2023-04-22T08:08:37.670Z · LW(p) · GW(p)

I'm biased towards generously leaving multiple comments in order to let people upvote the most important insights to the top.

comment by Dagon · 2023-04-20T03:44:25.191Z · LW(p) · GW(p)

Either choice can work well, depending on how separable the points are, and how "big" they are (including how much text you will devote to them, but also how controversial or interesting you predict they'll be in spawning further replies and discussion).  You can also mix the styles - one comment for multiple simple points, and one with depth on just one major expository direction.  

I'd bias toward "one reply" most of the time, and perhaps just skipping some less-important points in order to give sufficient weight to the steelman of the thing you're responding to.

Replies from: jam_brand
comment by jam_brand · 2023-04-21T03:33:16.143Z · LW(p) · GW(p)

Another hybrid approach if you have multiple substantive comments is to silo each of them in their own reply to a parent comment you've made to serve as an anchor point to / index for your various thoughts. This also has the nice side effect of still allowing separate voting on each of your points as well as (I think) enabling them, when quoting the OP, to potentially each be shown as an optimally-positioned side-comment [LW · GW].

comment by Sinclair Chen (sinclair-chen) · 2024-02-22T14:21:23.981Z · LW(p) · GW(p)

I see smart ppl often try to abstract, generalize, and mathify arguments that are actually about emotions/vibes. I do this too.
The neurotypical finds this MEAN. But the real problem is that the math is wrong

Replies from: Dagon, Vladimir_Nesov
comment by Dagon · 2024-02-22T16:17:03.638Z · LW(p) · GW(p)

From what I've seen, the math is fine, but the chosen axioms and the mappings from the vibes to the math are wrong. The results ARE mean, because they purport to show that someone's core intuitions are wrong.

Amusingly, many people's core intuitions ARE wrong (especially about economics and things that CAN be analyzed with statistics and math), and they find it very uncomfortable when it's pointed out.

comment by Vladimir_Nesov · 2024-02-22T14:38:08.176Z · LW(p) · GW(p)

For it to make sense to say that the math is wrong, there needs to be some sort of ground truth, making it possible for the math to also be right, in principle. Even doing the math poorly is an exercise that contributes to eventually making the math less wrong.

comment by Sinclair Chen (sinclair-chen) · 2023-07-09T08:57:51.120Z · LW(p) · GW(p)

LW posts about software tools are all yay open source, open data, decentralization, and user sovereignty! I kinda agree ... except for the last part. Settings are bad. If users can finely configure everything, they will be confused and lost.

The good kind of user sovereignty comes from simple tools that can be used to solve many problems.

Replies from: Dagon
comment by Dagon · 2023-07-09T15:32:25.546Z · LW(p) · GW(p)

This comes from over-aggregation of the idea of "users".  There is a very wide range of capabilities, interest, and availability of time for different people to configure things.  For almost all systems, the median and modal user is purely using defaults.

People on LW who care enough to post about software are likely significant outliers in their emphasis on such things. I know I am - I strongly prefer transparency and openness in my tooling, and this is a very different preference even from many of my smart coworkers.

Replies from: sinclair-chen
comment by Sinclair Chen (sinclair-chen) · 2023-07-09T21:11:37.036Z · LW(p) · GW(p)

I think the vast majority of people who use software tools are busy. Technically-inclined intelligent people value their time more highly than average.

Most of the user interactions in your daily life are seamless, like your doorknob. You pay attention to the few things that you configure because they are fun or annoying, or because you get outsized returns from configuring them - aka the person who made it did a bad job serving your needs.

Configuring something is generally something you do because it is fun, not because it is efficient. If it takes a software engineer a few hours to assemble a computer, the opportunity cost makes it more expensive than buying the latest MacBook (say, five hours at a $150/hour opportunity cost is $750 of forgone time). I say this as someone who has spent a few hundred dollars and a few hours building four custom ergonomic split keyboards. It is better than my laptop keyboard, but not like 10x better.

I guess there is a bias/feature where if you build something, you value it more. But IKEA still sends you one set of instructions. Good user sovereignty is when hobbyists hack together different IKEA sets or mod them with stuff from the hardware store. It would be bad for a furniture company to expect every user to do that, unless it specifically targets that niche and delivers enough extra value to make up for the smaller market.

I think a lot of people are hobbyists at something, but no rational person is a hobbyist at everything. Your users will be too busy hacking on their food (i.e. cooking) or customizing the colors and shapes of their garments, and won't bother to hack on your product.

Replies from: Dagon
comment by Dagon · 2023-07-10T00:27:41.468Z · LW(p) · GW(p)

I don't know what the vast majority thinks, but I suspect people value their time differently, not necessarily more or less.  Technically-minded people see the future time efficiency of current time spent tweaking/configuring AND enjoy/value that time more, because they're solving puzzles at the same time.

After the first few computer builds, the fun and novelty factor declines greatly, but the optimization pressure (performance and price/performance) remains pretty strong for some.  I do know some woodworkers who've made furniture, and it's purely for the act of creation, not any efficiency.

Not sure what my point is here, except that most of these decisions are more individually variant than the post implied.

comment by Sinclair Chen (sinclair-chen) · 2023-04-26T05:12:27.282Z · LW(p) · GW(p)

despite the challenge, I still think being a founder or early employee is incredibly awesome

coding, product, design, marketing, really all kinds of building for a user - is the ultimate test.
it's empirical, challenging, uncertain, tactical, and very real.

if you succeed, you make something self-sustaining that continues to do good.
if you fail, it will do bad. and/or die.

and no one will save you.

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2023-04-26T05:31:58.261Z · LW(p) · GW(p)

strongly agreed except that...

continuing to survive and continuing to do good are not guaranteed to be the same and are often directly opposed. it is much easier to build a product that keeps getting used than one that should.

it's possible that yours is good for the world, but I suspect mine isn't. I regret helping make it.

Replies from: sinclair-chen
comment by Sinclair Chen (sinclair-chen) · 2023-05-01T07:21:25.531Z · LW(p) · GW(p)

it's easy to have a mission, and almost every startup has one. only 10% of startups survive. but yes, surviving and continuing to do good is strictly harder.

what was your company?

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2023-05-01T09:46:00.245Z · LW(p) · GW(p)

I'm the other founder of vast.ai besides Jacob Cannell [LW · GW]. I left in early 2019 partially induced by demotivation about the mission, partially induced by wanting a more stable job, partially induced by creative differences, and a mix of other smaller factors.

comment by Sinclair Chen (sinclair-chen) · 2024-04-08T20:16:10.763Z · LW(p) · GW(p)

lactose intolerance is treatable with probiotics, and has been since 1950. they cost $40 on Amazon.
works for me at least.

comment by Sinclair Chen (sinclair-chen) · 2023-07-16T02:32:35.593Z · LW(p) · GW(p)

The latest ACX book review, of The Educated Mind, is really good! (as a new lens on rationality. I'm more agnostic about the childhood educational results, though at least it sounds fun.)

- Somatic understanding is Logan's Naturalism [? · GW]. It's the base layer that all kids start with, and you don't ignore it as you level up.
- Incorporating heroes into science education is similar to an idea from Jacob Crawford that kids should be taught a history of science & industry - like, what does it feel like to be the Wright brothers, tinkering on your device with no funding, defying all the academics who say heavier-than-air flight is impossible? How did they decide on materials and designs? If you're a kid who just wants to build model rockets, you'd prefer to skip straight to the code of the universe, but I think most kids would be engaged by this. A few kids will want to know the aerodynamics equations, but a lot more would want to know the human story.
- the postrats are Ironic, synthesizing the rationalist Philosophy with stories, jokes, ideals, gossip, "vibes".

fun, joy, imagination are important for lifelong learning and competence!

anyways go read the actual review

comment by Sinclair Chen (sinclair-chen) · 2023-07-14T10:14:26.557Z · LW(p) · GW(p)

LWers worry too much, in general. Not talking about AI.

I mean, ppl be like: Yud's hat is bad for the movement. Don't name the house after a fictional bad place. Don't do that science cuz it makes the republicans stronger. Oh no, nukes - time to move to New Zealand.

Remember when Thiel said "rationalists went from futurists to luddites in 20 years"? Well, he was right.

Replies from: Dagon, sinclair-chen
comment by Dagon · 2023-07-14T15:01:37.704Z · LW(p) · GW(p)

Do you mean "LWers" or "a specific subgroup who happens to be deeply into LW"? Overgeneralization of groups is the root of at least some evil.

comment by Sinclair Chen (sinclair-chen) · 2023-07-14T10:15:23.849Z · LW(p) · GW(p)

it is time to build.
A C C E L E R A T E*

*not the bad stuff

comment by Sinclair Chen (sinclair-chen) · 2023-04-25T17:38:43.385Z · LW(p) · GW(p)

Moderating lightly is harder than moderating harshly.
Walled gardens are easier than creating a community garden of many mini walled gardens.
Platforms are harder than blogs.
Free speech is more expensive than unfree speech.
Creating a space for talk is harder than talking.

The law in the code and the design is more robust than the law in the user's head
yet the former is much harder to build.

comment by Sinclair Chen (sinclair-chen) · 2024-01-15T16:39:40.565Z · LW(p) · GW(p)

People should be more curious about what the heck is going on with trans people at a physical, biological level. I think this could be a good citizen-science research project for y'all, since gender dysphoria afflicts a lot of us in this community, and knowledge leads to better detection and treatment. Many trans women especially do a ton of research/experimentation themselves. Or so I hear. I actually haven't received any mad-science google docs of research from any trans person yet. What's up with that? Who's working on this?

Where I'd start maybe:

- https://transfemscience.org/
- Dr Powers's powerpoint deck
- r/diyhrt

My open questions

- trans is comorbid with ADHD, autism, and connective tissue disorders.
  - what's the playbook for connective tissue disorders? Should more people try supplementing collagen? it's cheap!
- trans is slightly genetic. which genes?
- do any non-human animals have similar conditions?
- is the late-transitioning + female-attraction / early-transitioning + male-attraction typology for trans women real? like, are these traits actually bimodal, or more linear and correlated?
- best ways to maintain sexual desire and performance on feminizing HRT.

but probably the best research directions are ones I haven't thought of. there's so much we don't know (or maybe just so much I don't know). just gotta keep pulling threads until we get answers!

comment by Sinclair Chen (sinclair-chen) · 2023-07-28T07:40:42.786Z · LW(p) · GW(p)

anyone else super hyped about superconductors?
gdi i gotta focus on real work. in the morning

Replies from: MakoYass
comment by mako yass (MakoYass) · 2023-07-28T22:41:03.041Z · LW(p) · GW(p)

No because prediction markets are low

Replies from: sinclair-chen
comment by Sinclair Chen (sinclair-chen) · 2023-08-05T18:18:52.484Z · LW(p) · GW(p)

you and i have very different conceptions of low

Replies from: MakoYass
comment by mako yass (MakoYass) · 2023-08-06T01:15:34.232Z · LW(p) · GW(p)

Update: I'm not excited because deploying this thing in most applications seems difficult:

  • it's a ceramic, not malleable, so it's not going in powerlines. apparently this was surmounted before, though those cables ended up being difficult and more expensive than the fucken titanium-niobium alternatives.
  • because printing it onto a die sounds implausible, and because it wouldn't improve heat efficiency as much as people would expect (because most heat comes from state changes afaik? Regardless, this).
  • because 1D
  • because I'm not even sure about energy storage applications: how much energy can you contain before the magnetic field is strong enough to break the coil, or just becomes very inconvenient to shield from neighboring cars? Is it really a lot?
Replies from: sinclair-chen
comment by Sinclair Chen (sinclair-chen) · 2023-08-07T04:17:07.446Z · LW(p) · GW(p)

if it is superconducting, I'm excited not just about this material but about the (lost soviet?) theories outlined in the original paper - in general, new physics leads to new technology, and in particular it would imply that other room-temperature, standard-pressure superconductors are possible.

idk, it could be the next carbon nanotubes (as in, not actually useful for much at our current tech level) or it could be the next steel (not literally; I mean in terms of increasing the rate of GDP growth). like if it allows for more precise magnetic sensors that lead to further materials-science innovation, or just gets us to fusion or something.

I'm not a physicist, just a gambler, but I have a hunch that if the LK-99 stuff pans out, we're getting a lot of positive-EV dice rolls in the near future.

comment by Sinclair Chen (sinclair-chen) · 2023-07-28T07:35:39.486Z · LW(p) · GW(p)

There's something meditative about reductionism.
Unlike mindfulness, you go beyond sensation to the next baser, realer level of physics.
You don't, actually. It's all in your head. It's less in your eyes and fingertips. In some ways, it's easier to be wrong.

Nonetheless, it cuts through a lot of noise - concepts, ideologies, social influences.

comment by Sinclair Chen (sinclair-chen) · 2023-04-16T03:00:25.779Z · LW(p) · GW(p)

In 2022 we tried to buy policy. We failed.
This time let's get the buy-in of the polis instead.

comment by Sinclair Chen (sinclair-chen) · 2024-03-27T08:24:31.127Z · LW(p) · GW(p)

let people into your heart, let words hurt you. own the hurt, cry if you must. own the unsavory thoughts. own the ugly feelings. fix the actually bad. uplift the actually good. 

emerge a bit stronger, able to handle one thing more.

comment by Sinclair Chen (sinclair-chen) · 2023-04-29T16:45:55.782Z · LW(p) · GW(p)

my approach to AI safety is world domination