Posts

Would you pay for a search engine limited to rationalist sites? 2023-08-02T18:06:12.620Z
How does one learn to create models? 2021-11-05T02:57:42.056Z
How do I improve at being strategic? 2021-01-17T01:36:14.070Z

Comments

Comment by Conor (conor) on Medical Roundup #2 · 2024-04-14T04:42:48.974Z · LW · GW

"The problem is that Johnson is expecting this to translate into defeating aging, which I very much do not expect."

I'm fairly confident Johnson is betting on future tech to solve aging, and his goal is to live long enough to be there for it by creating measurements and therapies for the health of every organ.

From his site: 

"2023: don’t die because we don’t know how long and well we can live "

"This time, our time, right now - the early 21st century - will be defined by the radical evolution of intelligence: human, AI and biology. Our opportunity is to be this exciting future. "
https://protocol.bryanjohnson.com/ 

Comment by Conor (conor) on Would you pay for a search engine limited to rationalist sites? · 2023-08-04T02:26:45.662Z · LW · GW

Hey Adam, please review some of the replies I've made to other commenters for issues I don't address here.

>ease of use

A keyboard shortcut, a Chrome extension that serves the results in a sidebar or some other spot, autocomplete in the search bar, or a bookmark would remove that friction.

If I want to go to LessWrong, I hit Ctrl-T for a new tab, type "les", and Chrome completes the URL. The same would apply here.

>cognitive overhead

I do not think about those things for something that delivers me consistent value. If the starting premise is "I don't value this," it doesn't matter what comes after it.

>less time browsing

Wanting to spend less time doing semi-productive browsing isn't something a better search engine can fix, unless poor result quality is the reason the time is only semi-productive.

Comment by Conor (conor) on Would you pay for a search engine limited to rationalist sites? · 2023-08-04T02:11:49.996Z · LW · GW

I tried that previously. It limits the search results, and it doesn't rank them; it simply spits out the first results it finds on the first domain it searches.

You can give the one I made a try to see what I mean. Don't be fooled by the number of pages it lists at the bottom - that's fake: https://cse.google.com/cse?cx=cced60b51960f6137

Comment by Conor (conor) on Would you pay for a search engine limited to rationalist sites? · 2023-08-04T01:59:43.373Z · LW · GW

Use cases: superconductor, Ukraine war, LLM development, diet or exercise, dealing with anxiety, etc. But you would only get results from a curated list of sites with higher epistemic standards.

I should have been more explicit in my initial post. I was relying on the word "rationalist" to do too much. 

No worries about negativity. It is exactly what I want, so thank you.

Comment by Conor (conor) on Would you pay for a search engine limited to rationalist sites? · 2023-08-04T01:53:39.372Z · LW · GW

Yes, but instead of searching one domain (LessWrong), it would search ~100+ curated domains. Google currently limits the domains to ten.

Comment by Conor (conor) on Would you pay for a search engine limited to rationalist sites? · 2023-08-02T22:08:18.055Z · LW · GW

>drowning in stuff to read

Suppose you wanted to prioritize what you read by finding content from people with similar interests, or from writers with higher standards than most of what shows up in Google's search results.

Do you expect a search of LW alone to be more likely to deliver what you want than a search of LW plus 100 other sites?

>Google is free, and supports limiting queries to specific domains.

The limit is ten sites.

>just search LW or EA

What if you could do both in one place, plus search all of these and ACX's blogroll and similar sites?

>solution in search of a problem

Google's search quality seems not to satisfy a large number of people: link. That's not to say this idea will fix that for everyone.

>value delivered

What if it cost $1-6 a month? Would you try it if it were free? Would you donate once or regularly if you liked it?

Comment by Conor (conor) on Would you pay for a search engine limited to rationalist sites? · 2023-08-02T20:12:58.327Z · LW · GW

Unfortunately, I've found that appending "rationalist" to queries doesn't get the desired results. Instead you get this: link

If you could limit your search results to sites with a higher level of epistemics, would that be more compelling? There might be a default set of sites, which you could customize and submit requests for additions to the corpus.

What price point would change your mind? Is the idea compelling enough that you would try a demo?

Comment by Conor (conor) on Would you pay for a search engine limited to rationalist sites? · 2023-08-02T20:06:20.680Z · LW · GW

What utility would it need to provide to change your mind?

How about a search of sites curated for a higher level of epistemics? Can you think of any searches you might do where that would be useful?

Suppose it cost between $2 and $6 a month? Or, what price point would be enticing enough for you to try it?

Comment by Conor (conor) on Would you pay for a search engine limited to rationalist sites? · 2023-08-02T20:01:19.717Z · LW · GW

My understanding is that Google limits the search space to ten sites.

>set of sites of my choosing

Perhaps a standard set of sites that could be customized, with the option to submit requests for additions.

Thanks for the ideas.

Comment by Conor (conor) on Transformative AGI by 2043 is <1% likely · 2023-06-11T12:55:26.119Z · LW · GW

Teenagers generally don't start learning to drive until they have had fifteen years to orient themselves in the world.

AI and teenagers are not starting from the same point, so the comparison does not map very well.

Comment by Conor (conor) on [link] Guide on How to Learn Programming · 2023-03-05T04:15:34.991Z · LW · GW

How did it go? Please share even if it didn't work out; it could be helpful for others.

Comment by Conor (conor) on Book Review: The Beginning of Infinity · 2021-10-22T00:18:02.899Z · LW · GW

Yes, but I'm not sure how that follows from your original question.

What can you do with a bad explanation that you can't do with no explanation?

Comment by Conor (conor) on Book Review: The Beginning of Infinity · 2021-10-19T18:59:33.573Z · LW · GW

Deutsch specifies good explanations (laws of nature, scientific theories), and claims the rapid increase of good explanations is because of the invention of the scientific method, and thus explanations are essential for progress.

A bad explanation allows me to make (bad) sense of the world, which makes it appear less chaotic and threatening. 

Ah yes, the spirits are causing the indigestion. Now I know that I need only do a specific dance to please them and the discomfort will resolve. 

The alternative is suffering for no apparent reason or recourse. At least until we find a good explanation for indigestion.

Comment by Conor (conor) on Book Review: The Beginning of Infinity · 2021-10-19T02:11:29.168Z · LW · GW

I think I wasn't clear. An explanation that isn't accurate is still an explanation to Deutsch, it just isn't a good one. Microbiology or bread-spirits are both explanations for rising bread.

Comment by Conor (conor) on Book Review: The Beginning of Infinity · 2021-10-18T06:48:36.188Z · LW · GW

"Our ancestors followed many practices which work, but for which they had no explanation."

That would be very surprising for a species that reflexively attempts to explain things.

Also, in the book, he specifies that he's explaining the unprecedented rate of consistent progress from the scientific revolution onward.

Edit: I was mistaken. He is trying to explain all progress.

Comment by Conor (conor) on Optimal Exercise · 2021-07-02T17:23:17.077Z · LW · GW

Seven years later, would you modify this scheme? 

Is there validity to the volume/consistency-over-intensity argument? Training 1/2 of max reps every day vs. going to failure 2-3 times a week.

An illustration:

10 reps is your pull-up max. 

Volume/consistency: 5 reps every day for 35 a week vs Intensity: 2-3 workouts for 20-30 reps a week.

Over a year that's 1820 vs 1040-1560.
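A quick check of that arithmetic, as a small Python sketch (my own, just reproducing the numbers above):

```python
max_reps = 10

# Volume/consistency: half of max every day.
volume_per_week = (max_reps // 2) * 7            # 5 reps x 7 days = 35
# Intensity: 2-3 weekly workouts to failure.
intensity_per_week = (20, 30)

print(volume_per_week * 52)                      # 1820 reps per year
print(intensity_per_week[0] * 52, intensity_per_week[1] * 52)  # 1040, 1560
```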

Firas Zahabi outlines it here: 

Comment by Conor (conor) on The 5-Second Level · 2021-06-14T21:16:43.195Z · LW · GW

Example

  1. I am working on a hard problem and A. I notice a thought proposing a distraction from my current task, B. but I stop myself and continue my current activity.
  2. Broken down:
    1. Perceptually recognize a thought proposing a distraction from my current task.
    2. Feel the need for explicit reasons why I would change tasks.
    3. Experience an aversion to changing tasks without explicit reasons.
    4. Ask why I want to change to that task, to what end, and why now.
  3. Exercise

Recognizing the distractions: I'm struggling to come up with a way to practice this other than some form of awareness or attention meditation.

Comment by Conor (conor) on Extreme Rationality: It's Not That Great · 2021-06-13T05:06:59.630Z · LW · GW

What are the other posts in your top five?

Comment by Conor (conor) on John_Maxwell's Shortform · 2021-06-09T03:00:38.399Z · LW · GW

Did you end up trying the microneedling? I'm curious about that route.

Comment by Conor (conor) on John_Maxwell's Shortform · 2021-06-07T05:41:49.148Z · LW · GW

How are things progressing?

Comment by Conor (conor) on How do I improve at being strategic? · 2021-01-23T04:46:24.392Z · LW · GW

I suppose the next step after passing the desire test would be to actually verify, by researching and testing, that the goal will in reality provide that thing I imagine makes me go mmmm.

I imagine walking around dressed like a doctor and telling people I'm a doctor. Adding M.D. to my online dating profile, job shadowing, going to neighborhoods where doctors live, luring some doctors into my van, learning to sew, digging a pit in my cellar, and buying some night vision goggles and buying a bunch of lotion...

Luckily, I don't want to be a doctor.

Comment by Conor (conor) on How do I improve at being strategic? · 2021-01-22T02:47:16.344Z · LW · GW

>If you are fearful of offending people go to an online or in person marketplace and start low-balling people...

That... is a great idea and I can see how to expand on it into other arenas.

Since I posted this question I've been working primarily on strategy, and through that I've realized that improving my productivity would be a wise decision. Since they seem so intertwined (productivity is the strategic use of time and resources), I've split my time into 40% strategy, 40% productivity, and 20% execution of other goal-oriented tasks.

I've drafted some ways to measure progress:

Productivity

Largely derived from: Thank you notes from my future self

If I could go back and redo the work, how long would it take me to make the same amount of progress? Divide the time-to-redo by the original duration. 

Ex. I spent 4 hours writing a draft. Looking back I could have saved 1 hour by researching more thoroughly before starting to write. Score: 75% efficient.
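A minimal sketch of that scoring step (my own function and variable names, not from the linked post):

```python
def efficiency_score(original_hours: float, redo_hours: float) -> float:
    """Fraction of the original time that was genuinely needed."""
    return redo_hours / original_hours

# 4 hours spent; 1 hour could have been saved, so a redo would take 3 hours.
print(f"{efficiency_score(original_hours=4, redo_hours=3):.0%}")  # 75%
```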

Tracking Method

In the evening, record what I did during the day.
Score it with the above method and add a hidden confidence score. 
Score it again 3 days later. 
Track the difference for calibration.
Ask why that score was selected.

Also Track:

Time spent working.
Consistency of adhering to my work schedule: 5 days a week.

Strategy

Tracking Method

Rate of changes to my strategy guide (a little iffy on this one).
I win more than I lose. (Games, negotiating, etc.)
Goals accomplished.
 

Thanks again for the advice.

Comment by Conor (conor) on How to Measure Anything · 2021-01-21T04:38:05.534Z · LW · GW

For anyone reading this who is as innumerate as I am and is confused by the example simulation with the Excel formula "=norminv(rand(), 15, (20–10)/3.29)", I hope my explanation below helps (and is correct!).

The divisor of 3.29 is used because that's how many standard deviations* a 90% confidence interval spans (about ±1.645 on each side; 2 × 1.645 ≈ 3.29). It is not itself the standard deviation of the $10-$20 range; dividing that range's width by 3.29 is what produces the standard deviation.

Additionally, you can't just paste that formula into Excel as written. Replace the (20–10)/3.29 expression with the computed standard deviation (about 3.04).

At least that's the best understanding I have of it thus far. I could be wrong!

*Standard deviation describes the spread of a whole population; standard error describes the variability of an estimate calculated from a sample.
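For anyone who'd rather check this outside Excel, here's a minimal Python sketch of the same idea (my own, not from the book): set the standard deviation to the 90% interval's width divided by 3.29 and confirm that roughly 90% of draws land between $10 and $20.

```python
import random

# Minimal sketch (mine, not the book's): the 90% CI of $10-$20 spans
# about 3.29 standard deviations (plus/minus 1.645), so:
mean = 15
sd = (20 - 10) / 3.29  # ~3.04, the value to plug into norminv

samples = [random.normalvariate(mean, sd) for _ in range(10_000)]
share_inside = sum(10 <= s <= 20 for s in samples) / len(samples)
print(f"share of samples inside $10-$20: {share_inside:.1%}")  # roughly 90%
```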

Edit: fixed link to Monte Carlo spreadsheet & all the other downloads for the book

Comment by Conor (conor) on How do I improve at being strategic? · 2021-01-18T03:17:35.881Z · LW · GW

So far, I think of Strategy as a method for determining tactics to achieve a goal, which may include developing a step-by-step plan. I see a variety of techniques fitting this framework.

I'll check out Hammertime. Thanks for the suggestion.

Comment by Conor (conor) on Humans are not automatically strategic · 2021-01-14T06:15:24.889Z · LW · GW

How has your strategy (a-h) changed since you wrote this? Are there resources you can share for learning to be more strategic? A method for finding quality resources? Methods for practicing and assessing strategic skill?

Comment by Conor (conor) on Fourth Wave Covid Toy Modeling · 2021-01-07T01:16:42.179Z · LW · GW

How would vaccine refusers impact this model?

https://www.latimes.com/california/story/2020-12-31/healthcare-workers-refuse-covid-19-vaccine-access

https://www.news5cleveland.com/news/continuing-coverage/coronavirus/local-coronavirus-news/more-than-half-of-ohios-nursing-home-staff-refusing-vaccine-as-first-round-is-nearly-over

Comment by Conor (conor) on Welcome to LessWrong! · 2021-01-06T05:02:28.022Z · LW · GW

Could you expand on what makes the typography noteworthy? I'm completely unaware of this topic, but curious.

Comment by Conor (conor) on Training Regime Day 7: Goal Factoring · 2020-09-14T00:36:07.726Z · LW · GW

Jacob Fisker has a method called the reverse fishbone diagram.

You draw a horizontal line and that is the action.

Above the line, you draw a diagonal forward-slanted line for each positive first-order effect of taking that action; below the line, you do the same for negative effects.

On those initial branches, you branch off second-order effects, pointed up or down depending on their valence, until you have a sketch roughly resembling a fish skeleton, with as many orders of effects as you can come up with.

You then count the upward and downward lines and compare how the effects serve your other goals to determine whether this is a good action to continue. Of course, how you weight each effect matters, so you can try to take that into account, for example by bolding important effects.

I prefer this method because it is closer to comprehensive by including more consequences in a slightly more elegant manner than the mind map approach.
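Roughly, the counting-and-weighting step could look like this (a minimal sketch with made-up effects and weights; Fisker's version is a drawn diagram, not code):

```python
# Made-up example effects for some candidate action; positive weights sit
# above the spine, negative weights below.
effects = {
    "more free time": +2,          # first-order, positive
    "learn a new skill": +1,       # second-order, positive
    "less income": -2,             # first-order, negative
    "awkward conversations": -1,   # second-order, negative
}

upward = sum(w for w in effects.values() if w > 0)
downward = sum(-w for w in effects.values() if w < 0)
print(f"upward: {upward}, downward: {downward}, net: {upward - downward}")
```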

I'm assuming there is a goal evaluation lesson coming up, so I won't comment on confusing means/actions with ends/goals.

Comment by Conor (conor) on Training Regime Day 6: Seeking Sense · 2020-09-13T00:10:14.719Z · LW · GW

System 1 doesn't make sense?

Comment by Conor (conor) on Training Regime Day 1: What is applied rationality? · 2020-07-16T22:48:56.147Z · LW · GW

Applied rationality: Methods for fostering quick, efficient, and well-informed decision-making toward a goal.

Winter is nearly here, and you need a door for your house to keep out the cold. In your workspace there is a large block of an unknown type of wood. Using only what you can ascertain about it from your senses and experience, you determine which tool to use for each circumstance you uncover as you reduce the block into the best door you can make given the time, tools, and knowledge available.

Edit: thanks for the post. It was very helpful.