Comments

Comment by TedSanders on We don’t trade with ants · 2023-01-12T22:46:31.621Z · LW · GW

Great post.

I don't think communicating trades is the only issue. Even if we could communicate with ants, e.g. "Please clean this cafeteria floor and we'll give you 5 kg of sugar" "Sure thing, human", I think there are still barriers.

  • Can the ants formulate a good plan for cleaning the floor?
  • Can the ants tell when the floor is clean enough?
  • Can the ants motivate their team?
  • Can the ants figure out where to deposit debris, even if a human janitor accidentally leaves the bin in a different place than yesterday?

There's a lot more to the task of cleaning a cafeteria floor than whether it is mechanically possible for the worker and whether the worker speaks English well enough to articulate a trade.

Comment by TedSanders on The Point of Trade · 2021-07-07T00:29:55.517Z · LW · GW

A spatial framing:

(1) All objects have positions in space
(2) The desire by people to consume and use objects is not uniform over space (cars are demanded in Los Angeles more than Antarctica)
(3) The productive capacity to create and improve objects is not uniform over space (it's easier to extract iron ore from an Australian mine, or to build a car at a Detroit factory)
(4) Efficiently satisfying the distribution of desires over space by the distribution of productive capacity over space necessarily involves linking separate points in space through transportation of goods
(5) Owning an object is easier when it is near you and harder when it is far from you

Summing up, satisfying preferences requires transportation, and transportation is easier if ownership is transferred along with the physical object. Therefore it is advantageous to trade.

Comment by TedSanders on How feasible is long-range forecasting? · 2019-10-11T17:38:52.168Z · LW · GW

I spent years trading in prediction markets so I can offer some perspective.

If you step back and think about it, the question 'How well can the long-term future be forecasted?' doesn't really have an answer, because it depends entirely on the domain of the forecasts. Consider all facts about the universe. Some facts are very, very predictable: in 10 years, I predict the Sun will exist with 99.99%+ probability. Some facts are very, very unpredictable: in 10 years, I have no clue whether the coin you flip will come up heads or tails. As a result, you cannot really say the future is predictable or not predictable; it depends on which aspect of the future you are predicting. And even if you say, ok sure it depends, but what's the average answer - even then, the only way to arrive at some unbiased global sense of whether the future is predictable is to come up with some way of enumerating and weighing all possible facts about the future universe... which is an impossible problem. So we're left with the unsatisfying truth that the future is neither predictable nor unpredictable - it depends on which features of the future you are considering.

So when you show the plot above, you have to realize it doesn't generalize very well to other domains. For example, if the questions were about certain things - e.g., will the Sun exist in 10 years - it would look high and flat. If the questions were about fundamentally uncertain things - e.g., what will the coin flip be 10 years from now - it would look low and flat. The slope we observe in that plot is less a property of how well the future can be predicted and more a property of the limited set of questions that were asked. If the questions were about uncertain near-term geopolitical events, then that graph shows the rate at which information came into the market consensus. It doesn't really tell us about the bigger picture of predicting the future.
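To make the question-selection point concrete, here's a toy sketch (my own illustrative numbers, not from the post) showing that the average accuracy score of even a *perfectly calibrated* forecaster is set entirely by the question mix, not by forecasting skill:

```python
import random

random.seed(0)

def avg_brier(true_probs, n=10_000):
    """Average Brier score for a perfectly calibrated forecaster:
    the forecast always equals the event's true probability."""
    total = 0.0
    for _ in range(n):
        p = random.choice(true_probs)           # true probability of the event
        outcome = 1 if random.random() < p else 0
        total += (p - outcome) ** 2             # Brier score of this forecast
    return total / n

sun_like = [0.999]   # near-certain questions ("will the Sun exist in 10 years?")
coin_like = [0.5]    # irreducibly uncertain questions (coin flips)

print(avg_brier(sun_like))   # ~0.001: the forecaster looks brilliant
print(avg_brier(coin_like))  # 0.25:   the same forecaster looks clueless
```

Same forecaster, same (perfect) calibration; the score is pinned down by which questions were asked.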

Incidentally, this was my biggest gripe with Tetlock and Gardner's Superforecasting book. They spent a lot of time talking about how Superforecasters could predict the future, but almost no time talking about how the questions were selected and how if you choose different sets of counterfactual questions you can get totally different results (e.g., experts cannot predict the future vs rando smart people can predict the future). I don't really fault them for this, because it's a slippery thorny issue to discuss. I hope I have given you some flavor of it here.

Comment by TedSanders on What should rationalists think about the recent claims that air force pilots observed UFOs? · 2019-05-28T04:02:51.974Z · LW · GW

Rationalists should have mental models of the world that say if aliens/AI were out there, a few rare and poorly documented UFO encounters is not at all how we would find out. These stories are not worth the oxygen it takes to contemplate them.

In general, thinking more rationally can change confidence levels in only two directions: either toward more uncertainty or toward more certainty. Sometimes, rationalism says to open your mind, free yourself of prejudice, and overcome your bias. In these cases, you will be guided toward more uncertainty. Other times, rationalism says, c'mon, use your brain and think about the world in a way that's deeply self-consistent and don't fall for surface-level explanations. In these cases, you will be guided toward more certainty.

In my opinion, this is a case where rationalism should make us more certain, not less. Like, if there were aliens, is this really how we would find out? Obviously no.


Comment by TedSanders on Disincentives for participating on LW/AF · 2019-05-10T20:28:08.147Z · LW · GW

My hypothesis: They don't anticipate any benefit.

Personally, I prefer to chat with friends and high-status strangers over internet randos. And I prefer to chat in person, where I can control and anticipate the conversation, rather than asynchronously via text with a bunch of internet randos who can enter and exit the conversation whenever they feel like it.

For me, this is why I rarely post on LessWrong.

Seeding and cultivating a community of high-value conversations is difficult. I think the best way to attract high-quality contributors is to already have high-quality contributors (and perhaps to have mechanisms that disincentivize low-quality contributors). It's a bit of a bootstrapping problem. LessWrong is doing well, but no doubt it could do better.

That's my initial reaction, at least. Hope it doesn't offend or come off as too negative. Best wishes to you all.

Comment by TedSanders on If you've attended LW/SSC meetups, please take this survey! · 2019-03-26T00:41:44.423Z · LW · GW

Observation: I tried to take your survey, but discovered it's only for people who have attended meetups.

Recommendation: Edit your title to be 'If you've attended a LW/SSC meetup, please take the meetups survey!'

Anticipated result: This will save time for non-meetup people who click the survey, start to fill it out, and then realize it wasn't meant for them.

Comment by TedSanders on Thoughts on Ben Garfinkel's "How sure are we about this AI stuff?" · 2019-02-06T20:45:31.139Z · LW · GW

Re: your request for collaboration - I am skeptical of ROI of research on AI X-risk, and I would be happy to help offer insight on that perspective, either as a source or as a giver of feedback. Feel free to email me at {last name}{first name}@gmail.com

I'm not an expert in AI, but I have a PhD in semiconductors (which gives me perspective on hardware) and currently work on machine learning at Netflix (which gives me perspective on software). I also was one of the winners of the SciCast prediction market a few years back, which is evidence that my judgment of near-term tech trends is decently calibrated.

Comment by TedSanders on What went wrong in this interaction? · 2018-12-12T22:59:13.124Z · LW · GW

I didn't perceive either of you as hostile.

I think you each used words differently.

For example, you interpret the post as saying, "metoo has never gone too far."

What the post actually said was, "I've heard people complain that it 'goes too far,' but in my experience the cases referred to that way tend to be cases where someone... didn't endure much in the way of additional consequences."

I read that sentence as much more limited in scope than your interpretation. (And because it says 'tend' and not 'never', supplying a couple of data points isn't enough information, by itself, to challenge the author's conclusion.)

In addition, you interpreted "metoo" as broadly meaning action against those accused of sexual misconduct.

However, the author interprets "metoo" more narrowly, as meaning action against those accused of sexual misconduct that would otherwise not have occurred in a counterfactual world without the #metoo movement that took off in 2017.

So in the end you didn't seem to disagree with the author's point, just their word usage.

I can empathize with why the author wasn't eager to sustain the interaction with you. You used words differently and asked a bunch of questions asking the author to explain themselves. The author may have logically perceived the conversation as a cost, not a benefit.

This is my perception of your conversation. I hope it is helpful to you.

Comment by TedSanders on [deleted post] 2018-12-03T22:35:02.229Z

If the housekeeper were to earn a wage of 3x rent, 15 other housemates would be required at those price points. That's a lot of cooking and cleaning.

Comment by TedSanders on No Really, Why Aren't Rationalists Winning? · 2018-11-05T21:08:10.062Z · LW · GW

What does winning look like?

I think I might be a winner. In the past five years: I have won thousands of dollars across multiple prediction market contests. I earned a prestigious degree (PhD Applied Physics from Stanford) and have held a couple of prestigious high-paying jobs (first as a management consultant at BCG, and now an algorithms data scientist at Netflix). I have a fulfilling social life with friends who make me happy. I give tens of thousands to charity. I enjoy posting to Facebook and surfing the internet. I have the means and motivation to keep learning about areas outside my expertise. I floss and exercise and generally am satisfied with my health.

I think I could be considered both a rationalist and a winner.

But I post rarely to LessWrong because my rational perception is that it takes effort but does not provide return. Generally I think my shortcomings are shortcomings of execution rather than irrationality, and those are the areas I aim to improve upon. My arena for self-improvement is my workplace and my life, not a website. As a result, stories like mine might be underrepresented in your sampling.

If rationalists were winning, how would we know? What would winning look like?

Comment by TedSanders on look at the water · 2018-10-23T00:34:13.153Z · LW · GW

I think this is why attending universities and otherwise surrounding yourself with smart people is crucial. Their game will elevate your game. I often find myself learning more after someone smart asks me questions about a topic I thought I already knew. And the more this happens, the more I am able to short-circuit the process and preemptively ask those questions of myself.

Comment by TedSanders on Do Animals Have Rights? · 2018-10-18T00:06:50.376Z · LW · GW

"Thus, if we had to give animals rights – this would result in us being their slaves."

If we give other citizens the right to not be murdered, does that make us their slaves? Obviously not.

If we give animals the right to not be murdered, does that make us their slaves? Again, obviously not.

I'm not sure how someone thinks that giving rights means slavery. Obviously obligations can fall into a spectrum of severity, but I don't think the entire spectrum is worth labeling "slavery."

Comment by TedSanders on [deleted post] 2018-10-16T01:15:23.990Z

This is excellent. Thank you for writing it!

Comment by TedSanders on Psychology Replication Quiz · 2018-08-31T20:29:19.266Z · LW · GW

Interesting. I was surprised at how predictable the studies were. It felt like results that aligned with my intuition were likely to be replicated, and results that didn't (e.g., priming affecting a pretty unrelated task) were unlikely to be replicated. Makes me wonder - what's the value of this science if a layperson like me can score 18/18 (with 3 I don't knows) by gut feel after reading only a paragraph or two? Hmm.

(Then again, I guess my attitude of finding predictable results low-value is what has incentivized so much bad science in the hunt for counterintuitive results with their higher rewards.)

Comment by TedSanders on Why focus on AI? · 2018-04-09T00:37:43.850Z · LW · GW

Elephant in the Brain convinced me that many things humans say are not to convey information or achieve conscious goals; rather, we say things to signal status and establish social positioning. Here are three hypotheses for why the community focuses on AI that have nothing to do with the probability or impact of AI:

  • Less knowledge about AGI. Because there is less knowledge about AGI than pandemics or climate change, it's easier to share opinions before feeling ignorant and withdrawing from conversations. This results in more conversations.
  • A disbelieving public. Implicit in arguments 'for' a position is the presumption that many people are 'against' that position. That is, believing 'X is true' is by itself insufficient to motivate someone to argue for X; someone will only argue for X if they additionally believe others don't believe X. In the case of AI, perhaps arguments for AI risk are more likely to encounter disagreement than arguments for pandemic risk. This encountered disagreement spurs more conversations.
  • Positive feedback. The more a community reads, thinks, and talks about an issue, the more things they find to say and the more sophisticated their thinking becomes. This begets more conversations on the topic, in a reinforcing feedback loop.

(Disclaimer: I personally don't worry about AI, am skeptical that AGI will happen in the next 100 years, am skeptical that AGI will take over Earth in under 100 years, but nonetheless recognize that these are more than 0% probable. I don't have a great mental model of why others disagree, but believe that it can be partly explained by software people being more optimistic than hardware people, since software people have experienced more amazing success in the past couple decades.)

Comment by TedSanders on [Draft for commenting] Near-Term AI risks predictions · 2018-04-05T06:38:29.797Z · LW · GW

Generally yes, I think it's better when titles reveal the answer rather than the question alone. "Dangerous AI timing" sounds a bit awkward to my ear. Maybe a title like "Catastrophically dangerous AI is plausible before 2030" would work.

Comment by TedSanders on [Draft for commenting] Near-Term AI risks predictions · 2018-04-03T23:35:15.299Z · LW · GW

I think it's great that you and other people are investing time and thought into writing articles like these.

I also think it's great that you're soliciting early feedback to help improve the work.

I left some comments that I hope you find helpful.

Comment by TedSanders on What useless things did you understand recently? · 2017-07-03T01:32:43.048Z · LW · GW

Is this actually true? Do you have a source? I have tried Googling for it.

My understanding is that the sky's blue color is caused by Rayleigh scattering. This scattering is higher for shorter wavelengths. There's no broad peak in scattering associated with nitrogen absorption lines (which I imagine would be very narrowband, rather than broadband).

Wikipedia's article on Rayleigh scattering mentions oxygen twice but makes no reference to your theory.

https://en.wikipedia.org/wiki/Rayleigh_scattering
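For reference, the 1/λ⁴ wavelength dependence of Rayleigh scattering can be sketched in a couple of lines (the wavelengths below are my own round illustrative numbers):

```python
def relative_scattering(wavelength_nm, reference_nm=700.0):
    """Rayleigh scattering intensity relative to a reference wavelength.
    Intensity scales as 1/wavelength^4, so shorter wavelengths scatter more."""
    return (reference_nm / wavelength_nm) ** 4

# Blue light (~450 nm) scatters roughly 6x more strongly than red (~700 nm),
# which accounts for the blue sky without invoking absorption lines.
print(round(relative_scattering(450), 1))  # 5.9
```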

Comment by TedSanders on What useless things did you understand recently? · 2017-07-03T01:25:19.977Z · LW · GW

Wavelengths of visible light are around ~500 nm. Even infrared is on the order of micrometers. I don't think the spikes that we're imagining are micrometers apart.

Comment by TedSanders on Against responsibility · 2017-04-01T01:53:49.370Z · LW · GW

Thanks for the long and thoughtful post.

My main question: Who are these 'people' that you seem to be arguing against?

It sounds like you're seeing people who believe:

  • "You - you, personally - are responsible for everything that happens."

  • "No one is allowed their own private perspective - everyone must take the public, common perspective."

  • Other humans are not independent and therefore warring with them is better than trading with them ("If you don't treat them as independent... you will default to going to war against them... rather than trading with them")

  • To do good, "you will try to minimize others' agency"

And the people who hold the aforementioned beliefs are:

  • "the people around me applying utilitarianism"

  • "many effective altruists"

  • people with ideas "commonplace in discussions with effective altruists"

I guess I struggled to engage with the piece because my experiences with 'people' are very different from your experiences with 'people.' I don't think anyone I know would claim to believe the things that you attribute to many effective altruists. I loosely consider myself an effective altruist and I certainly don't hold those beliefs.

I think one way to get more engagement would be to argue against specific claims that specific people have spoken or written. It would feel more concrete and less strawmanny, I think. That's a general principle of good writing that I'm trying to employ more myself.

Anyway, great work writing this post and thinking through these issues!

Comment by TedSanders on What are some science mistakes you made in college? · 2014-03-25T23:27:37.631Z · LW · GW

The best technique I use for "being careful" is to imagine the ways something could go wrong (e.g., my fingers slip and I drop something, I trip on my feet/cord/stairs, I get distracted for a second, etc.). By imagining the specific ways something can go wrong, I feel much less likely to make a mistake.

Comment by TedSanders on Meetup : Small Berkeley Meetup · 2012-05-10T01:45:52.102Z · LW · GW

Ah, thanks.

Comment by TedSanders on Meetup : Small Berkeley Meetup · 2012-05-09T03:08:47.117Z · LW · GW

What differentiates a small meetup from a regular meetup?