Posts

Introducing Metaforecast: A Forecast Aggregator and Search Tool 2021-03-07T19:03:35.920Z
Forecasting Prize Results 2021-02-19T19:07:09.420Z
Prize: Interesting Examples of Evaluations 2020-11-28T21:11:22.190Z
Squiggle: Technical Overview 2020-11-25T20:51:00.098Z
Squiggle: An Overview 2020-11-24T03:00:32.872Z
Working in Virtual Reality: A Review 2020-11-20T23:14:28.707Z
Epistemic Progress 2020-11-20T19:58:07.555Z
Announcing the Forecasting Innovation Prize 2020-11-15T21:12:39.009Z
Are the social sciences challenging because of fundamental difficulties or because of imposed ones? 2020-11-10T04:56:13.100Z
Open Communication in the Days of Malicious Online Actors 2020-10-07T16:30:01.935Z
Can we hold intellectuals to similar public standards as athletes? 2020-10-07T04:22:20.450Z
Expansive translations: considerations and possibilities 2020-09-18T15:39:21.514Z
Multivariate estimation & the Squiggly language 2020-09-05T04:35:01.206Z
Epistemic Comparison: First Principles Land vs. Mimesis Land 2020-08-21T22:28:09.172Z
Existing work on creating terminology & names? 2020-01-31T12:16:32.650Z
Terms & literature for purposely lossy communication 2020-01-22T10:35:47.162Z
Predictably Predictable Futures Talk: Using Expected Loss & Prediction Innovation for Long Term Benefits 2020-01-08T12:51:01.339Z
[Part 1] Amplifying generalist research via forecasting – Models of impact and challenges 2019-12-19T15:50:33.412Z
[Part 2] Amplifying generalist research via forecasting – results from a preliminary exploration 2019-12-19T15:49:45.901Z
Introducing Foretold.io: A New Open-Source Prediction Registry 2019-10-16T14:23:47.229Z
ozziegooen's Shortform 2019-08-31T23:03:24.809Z
Conversation on forecasting with Vaniver and Ozzie Gooen 2019-07-30T11:16:58.633Z
Ideas for Next Generation Prediction Technologies 2019-02-21T11:38:57.798Z
Predictive Reasoning Systems 2019-02-20T19:44:45.778Z
Impact Prizes as an alternative to Certificates of Impact 2019-02-20T00:46:25.912Z
Can We Place Trust in Post-AGI Forecasting Evaluations? 2019-02-17T19:20:41.446Z
The Prediction Pyramid: Why Fundamental Work is Needed for Prediction Work 2019-02-14T16:21:13.564Z
Short story: An AGI's Repugnant Physics Experiment 2019-02-14T14:46:30.651Z
Three Kinds of Research Documents: Exploration, Explanation, Academic 2019-02-13T21:25:51.393Z
The RAIN Framework for Informational Effectiveness 2019-02-13T12:54:20.297Z
Overconfident talking down, humble or hostile talking up 2018-11-30T12:41:54.980Z
Stabilize-Reflect-Execute 2018-11-28T17:26:39.741Z
What if people simply forecasted your future choices? 2018-11-23T10:52:25.471Z
Current AI Safety Roles for Software Engineers 2018-11-09T20:57:16.159Z
Prediction-Augmented Evaluation Systems 2018-11-09T10:55:36.181Z
Critique my Model: The EV of AGI to Selfish Individuals 2018-04-08T20:04:16.559Z
Expected Error, or how wrong you expect to be 2016-12-24T22:49:02.344Z
Graphical Assumption Modeling 2015-01-03T20:22:21.432Z
Understanding Who You Really Are 2015-01-02T08:44:50.374Z
Why "Changing the World" is a Horrible Phrase 2014-12-25T06:04:48.902Z
Reference Frames for Expected Value 2014-03-16T19:22:39.976Z
Creating a Text Shorthand for Uncertainty 2013-10-19T16:46:12.051Z
Meetup : San Francisco: Effective Altruism 2013-06-23T21:48:34.365Z

Comments

Comment by ozziegooen on Open and Welcome Thread - May 2021 · 2021-05-04T05:37:13.416Z · LW · GW

A bit more info:

I lived at 20Mission, which was technically an SRO. I enjoyed the setting quite a bit, though I've heard they've had trouble recently with COVID. That said, most of the other SROs I know of nearby (in the Mission, SF) are really not nice places (lots of drugs and some violence).

https://www.20mission.com/

There's been discussion of having "Micro-Units" in SF, but they're heavily regulated. It seems like small progress is being made.

https://socketsite.com/archives/2012/11/microunits_approved_for_san_francisco_capped_for_market.html

Comment by ozziegooen on ozziegooen's Shortform · 2021-05-02T15:55:55.931Z · LW · GW

Thanks! I think precommitment is too narrow (I don't see dying as a precommitment). Optionality seems like a solid choice for adding. "Options" is a financial term, so something a bit more generic seems appropriate.

Comment by ozziegooen on ozziegooen's Shortform · 2021-05-02T03:18:56.040Z · LW · GW

Are there any good words for “A modification of one’s future space of possible actions”, in particular, changes that would either remove/create possible actions, or make these more costly or beneficial? I’m using the word “confinements” for negative modifications, not sure about positive modifications (“liberties”?). Some examples of "confinements" would include:

  • Taking on a commitment
  • Dying
  • Adding an addiction
  • Golden handcuffs
  • Starting to rely on something in a way that would be hard to stop

Comment by ozziegooen on Working in Virtual Reality: A Review · 2021-04-30T13:58:13.302Z · LW · GW

Good luck!

To be clear, my Mac is connected via ethernet (but there's no way to connect the headset with a wire). I'm really not sure how the iPhone would work, or if they support it.

I believe things are much nicer for Windows computers.

I'm hoping Apple releases their own soon, though rumors have it that the upcoming unit will be very expensive ($1k to $3k).

Comment by ozziegooen on Working in Virtual Reality: A Review · 2021-04-28T23:56:45.383Z · LW · GW

The latency is too bad for my particular setup. It depends on how good the wifi connection is in your room.

Yea, I've looked into the SIP workaround, but am reluctant to implement that now. I'm hoping they just make drivers for Big Sur, but it's taking more time than I'd like.

Comment by ozziegooen on Working in Virtual Reality: A Review · 2021-04-28T15:56:48.258Z · LW · GW

Hi!

I made the mistake of upgrading my computer to Big Sur, which has problems with WiFi Direct. I also changed rooms in the house, so now my wifi signal isn't quite as good. This makes a very noticeable difference.

I still use this for writing here and there, but mostly I'm waiting for wifi direct support and/or better setups to come out. I'm keeping a close eye on developments. 

Comment by ozziegooen on The Scout Mindset - read-along · 2021-04-22T01:40:59.985Z · LW · GW

Thanks so much, that makes a lot of sense.

Reviewing works can be tricky, because I'd focus on very different aspects when targeting different people. When describing a book to potential readers, I'd emphasize different things than when commenting on how good a job the author did at advancing the topic.

In this case the main issue is that I wasn't sure what kind of book to expect, so wanted to make that clear to other potential readers. It's like when a movie has really scary trailers but winds up being a nice romantic drama.

Some natural comparison books in this category are Superforecasting and Thinking Fast and Slow, where the authors basically took information from decades of their own original research. Of course, this is an insanely high bar and really demands an entire career. I'm curious how you would categorize The Scout Mindset. ("Journalistic?" Sorry if the examples I pointed to seemed negative)

I think you specifically did a really good job given the time you wanted to allocate to it (you probably didn't want to wait another 30 years to publish), but that specific question isn't particularly relevant to potential readers, so it's tricky to talk about all things at once.

I'd also note that I think there's also a lot of non-experimental work that could be done in the area, similar to The Elephant in the Brain, or many Philosophical works (I imagine habryka thinks similarly). This sort of work would probably sell much worse, but is another avenue I'm interested in for future research.

(About The Village, I just bring this up because it was particularly noted for people having different expectations from what the movie really was. I think many critics really like it at this point.)

Comment by ozziegooen on The Scout Mindset - read-along · 2021-04-21T05:33:15.610Z · LW · GW

(I originally posted this to Goodreads)

TL;DR: A good book with mass appeal to help people care more about being accurate. Fairly easy to read, which makes it easy to recommend to many people.

I've met Julia a few times and am friendly with her. I'd be happy if this book does well, and expect that to lead to a (slightly) more reasonable world.

That said, in the interest of having a Scout Mindset, I want to be honest about my impression.

The Scout Mindset is the sort of book I'm both happy with and frustrated by. I'm frustrated because this is a relatively casual overview of what I wish were a thorough Academic specialty. I felt similarly with The Life You Can Save when that was released.

Another way of putting this is that I was sort of hoping for an academic work, but instead I think of this more as a journalistic work. It reminds me of Vice Documentaries (which I like a lot) and Malcolm Gladwell (in a nice way), instead of Superforecasting or The Elephant in the Brain. That said, journalistic works have their own unique contributions to the literature; it's just a very different sort of work.

I just read through the book on Audible and don't have notes. To write a really solid review would take more time than I have now, so instead, I'll leave scattered thoughts.

1. The main theme of the book is the dichotomy of "The Scout Mindset" vs. "The Soldier Mindset", and more specifically, why the Scout Mindset is (almost always?) better than the Soldier Mindset. Put differently, we have a bunch of books about "how to think accurately", but surprisingly few on "you should even try thinking accurately." Sadly, this latter part has to be stated, but that's how things are.

2. I was expecting a lot of references to scientific studies, but there seemed to be a lot more text on stories and a few specific anecdotes. The main studies I recall were a few seemingly small psychological studies, which at this point I'm fairly skeptical of. One small note: I found it odd that Elon Musk was described multiple times as something like an exemplar of honesty. I agree with the particular examples pointed to, but I believe Elon Musk is notorious for making explicit overconfident statements.

3. Motivated reasoning is a substantial and profound topic. I believe it already has many books detailing not only that it exists, but why it's beneficial and harmful in different settings. The Scout Mindset didn't seem to engage with much of this literature. It argued that "The Scout Mindset is better than the Soldier Mindset", but that seems like an intense simplification of the landscape. Lies are a much more integral part of society than I think they are given credit for here, and removing them would be a very radical action. If you could go back in time and strongly convince particular people to be atheistic, that could be fatal.

4. The most novel part to me was the last few chapters, on "Rethinking Identity". This section seems particularly inspired by the blog post Keep Your Identity Small by Paul Graham, but of course, goes into more detail. I found the mentioned stories to be a solid illustration of the key points and will dwell on these more.

5. People close to Julia's work have heard much of this before, but maybe half or so seemed rather new to me.

6. As a small point, if the theme of the book is about the benefits of always being honest, the marketing seemed fairly traditionally deceptive. I wasn't sure what to expect from the cover and quotes. I could easily see potential readers getting the wrong impression from the marketing materials, and there seems to have been little work done to make the actual value of the book more clear. There's nothing up front that reads, "This book is aiming to achieve X, but doesn't do Y and Z, which you might have been expecting." I guess that Julia didn't have control over the marketing.

Comment by ozziegooen on The Scout Mindset - read-along · 2021-04-21T05:32:41.488Z · LW · GW

Review/Overview Thread

Comment by ozziegooen on Are the social sciences challenging because of fundamental difficulties or because of imposed ones? · 2021-04-15T03:21:48.944Z · LW · GW

I think in a more effective world, "Digital Humanities" would just be called "Humanities" :) 

Comment by ozziegooen on [Part 1] Amplifying generalist research via forecasting – Models of impact and challenges · 2021-04-09T06:41:05.897Z · LW · GW

That sounds right. However, I think that being properly calibrated is a really big deal, and a major benefit compared to other approaches. 

On the part:

But I'd guess that more explicit, less "black box" approaches for predicting what Y will say will tend to either be more robust to distributional shift or more able to fail gracefully, such as recognising that uncertainty is now much higher and there's a need to think more carefully.

If there are good additional approaches that are less black-box, I see them ideally being additions to this rough framework. There are methods to encourage discussion and information sharing, including with the Judge (the person whose beliefs are being predicted).

Comment by ozziegooen on [Part 1] Amplifying generalist research via forecasting – Models of impact and challenges · 2021-04-07T15:56:17.425Z · LW · GW

Thanks for the attention on this point.

I think I'm very nervous about trying to get at "Truth". I definitely don't mean to claim that we were confident that this work gets us much closer to truth; more that it can help progress a path of deliberation. The expectation is that it can get us closer to the truth than most other methods, but we'll still be several steps away. 

I imagine that there are many correlated mistakes society is making. It's really difficult to escape that. I'd love for future research to make attempts here, but I suspect it's a gigantic challenge, both for research and social reasons. For example, in ancient Egypt, I believe it would have taken some intense deliberation to both realize that the popular religion was false, and also to be allowed to say such.

Comment by ozziegooen on [Part 1] Amplifying generalist research via forecasting – Models of impact and challenges · 2021-04-07T15:49:35.382Z · LW · GW

Fair points. I think that the fact that they can predict one's beliefs is minor evidence they will be EV-positive to listen to. You also have to take into account the challenge of learning from them.

All that said, this sort of technique is fairly prosaic. I'm aiming for a much better future, where key understandings are all in optimized prediction applications and people generally pay attention to those.

Comment by ozziegooen on [Part 1] Amplifying generalist research via forecasting – Models of impact and challenges · 2021-04-07T15:46:33.158Z · LW · GW

This sounds roughly right to me. I think concretely this wouldn't catch people off guard very often. We have a lot of experience trying to model the thoughts of other people, in large part because we need to do this to communicate with them. I'd feel pretty comfortable basically saying, "I bet I could predict what Stuart will think in areas of Anthropology, but I really don't know his opinions of British politics". 

If forecasters are calibrated, then on average they shouldn't be overconfident. It's expected there will be pockets where they are, but I think the damage caused here isn't particularly high. 

Comment by ozziegooen on Why We Launched LessWrong.SubStack · 2021-04-01T07:33:55.863Z · LW · GW

This looks really promising to me. I hate to say though, I can't currently afford this. I think I can solve this by launching a new paid Substack for all of QURI's upcoming content.

I'd probably recommend this strategy to other researchers who can't afford the costs to the LessWrong Substack, and soon the QURI Substack. 

Comment by ozziegooen on “PR” is corrosive; “reputation” is not. · 2021-02-23T05:01:40.641Z · LW · GW

I feel mixed about this. 

My guess is that Anna means something fairly specific by "honor", but there are many cases of people using honor or similar abstractions to justify some really terrible things (lots of violence, for example). So if you were to tell most people to "maximize honor instead of do PR", I could see this going quite poorly.

https://en.wikipedia.org/wiki/Culture_of_honor_(Southern_United_States)

For one thing, for many people, in many important situations, "not saying anything at all" is a really good thing. Think of prisoners who don't plead the fifth, or many other legal cases or otherwise. Arguably Trump and Elon Musk have been fairly damaging on Twitter to themselves.

I think a lot of PR professionals are quite bad, but this is true for most professions. I imagine in a lot of (good) cases their advice is "don't say really stupid stuff", and much of the time their clients really could use hearing that. 

Comment by ozziegooen on Creating A Kickstarter for Coordinated Action · 2021-02-05T04:00:14.716Z · LW · GW

Happy to see enthusiasm and initiative here. I tried starting a few web startups in college and after; almost none worked, but the experience was useful.

I think you're massively underestimating the challenge of this. There were many products like Kickstarter before Kickstarter. To be successful you often need a combination of technical talent, marketing, and luck. This isn't to say that it's not worth a more intense endeavor; just that I wouldn't be as optimistic as your post sounds.

You sound quite similar to other young entrepreneurs I have known. Some wind up being successful, sometimes in part because of their overconfidence. Entrepreneurial claims often work better in other markets than in rationalism/EA; especially when posting in places like this, I suggest keeping things a bit more humble.

So overall I think this is quite a bit more challenging than I think you think it is, but I don't want to discourage you from trying (though I really hope the process doesn't involve you burning over-hyped colleagues). I imagine more people making attempts at these things could be quite good overall.

Comment by ozziegooen on Graphical Assumption Modeling · 2020-12-08T02:52:00.323Z · LW · GW

Fixed

Comment by ozziegooen on The LessWrong 2018 Book is Available for Pre-order · 2020-12-03T04:54:19.816Z · LW · GW

Just want to flag that I like the idea of topic-specific books, perhaps with an additional author to help rewrite things and make them consistent and clean. It's especially enticing if you can find labor to do it that doesn't have a high opportunity cost for other LessWrong style things. 

Comment by ozziegooen on The LessWrong 2018 Book is Available for Pre-order · 2020-12-03T04:50:29.601Z · LW · GW

+1 for being able to be open about small disagreements like this online :)

Comment by ozziegooen on The LessWrong 2018 Book is Available for Pre-order · 2020-12-03T04:50:04.819Z · LW · GW

Thanks for the reasoning here. I also don't want to dissuade people from purchasing these books; I imagine if people really wanted they could write the dates on them manually.

That said - 

To better explain my intuitions here:

Five years from now, I'll care about whether the essays came out in 2018 or in 2017 if I'm trying to find a particular one in a book, or recommend one to another person. Ordering is really simple to remember compared to other kinds of naming one could use. When going between different books the date is particularly relevant because names and concepts will change over time. I'd hope that 10 years from now much of the 2018 content will look antiquated and old.

If you're just aiming for "timeless and good quality posts" (this sounds like the value proposition for the readers you are referring to), then I don't understand the need to only choose ones from 2018. Many good ones came out before 2018 that I imagine would be interesting to readers. That said, if you plan on releasing them on yearly intervals later I'd imagine some restriction might be necessary. Or, it could be that whenever a few topics seem to have come full circle or be in a good place for a book, you publish a book focused on those topics. 

I agree that "LessWrong Review 2018" sounds strange, but there are other phrases that could have with 2018 in them. Many Academic periodicals (including things like Philosophy, which are at least as timeless as LessWrong content) have yearly collections. With those I don't assume I need to read all of the old ones before reading the current year, that would take quite a while (it becomes more obvious after a few are out). I imagine the name could be something like, "LessWrong Highlighted Content: 2018" or "The Best of LessWrong: 2018". 

It's very possible that there's kind of a "free pass" for the first 1-3 years, if this is a repeating thing, and then you could start adding the year. It's not that big a deal if there are just 2-3 of these, but I imagine it will get to be annoying if there are 5+ (and by that time it will be more obvious whether it's an issue or not).
 

Comment by ozziegooen on The LessWrong 2018 Book is Available for Pre-order · 2020-12-03T04:08:41.739Z · LW · GW

If the main thing that separates this book from the 2019 and 2020 books is that it's the collection of posts from 2018, it's counterintuitive to me that that's not the prominent feature of the title here. Other "journals of the year" often make the year really prominent.

I feel like 5 years from now I'm going to have trouble remembering that "A Map That Reflects the Territory" refers to the 2018 edition, and some other equally elegant but abstract name refers to the 2019 edition. 

If you do go with really premium books especially, I'd recommend considering making the date the prominent bit. Honestly I expect to recognize the "LessWrong"-ness from the branding (which is distinct), so the year seems like the most important part to me.

That said, I feel like I'm not exactly in the target audience (generally don't prefer physical books), so it would come down to the preferences of others. 

I realize you've probably thought about this a lot and have reasons, just giving my 2 cents. 

Comment by ozziegooen on The LessWrong 2018 Book is Available for Pre-order · 2020-12-02T18:48:44.037Z · LW · GW

The books look very pretty, nice work.

Is this content from 2018 specifically, or is it taken from all of historic LessWrong? My impression was that this was from the 2018 review, but I don't see anything about that in the description above. 

If it is from the 2018 review, do you have ideas on how you will differentiate the 2019/2020/etc versions?

Comment by ozziegooen on Prize: Interesting Examples of Evaluations · 2020-11-28T22:23:29.493Z · LW · GW

Thank you for raising the issue. Happy to clarify further. 

By evaluation we refer essentially[1] to the definition on the Wikipedia page here:

Evaluation is a systematic determination of a subject's merit, worth and significance, using criteria governed by a set of standards. It can assist an organization, program, design, project or any other intervention or initiative to assess any aim, realisable concept/proposal, or any alternative, to help in decision-making; or to ascertain the degree of achievement or value in regard to the aim and objectives and results of any such action that has been completed

By "interesting" we mean what will do well on the listed rubric. We're looking for examples that would be informative for setting up new research evaluation setups. This doesn't mean the examples have to deal with research, but rather that they bring something new to the table that could be translated. For example, maybe there's a good story of a standardized evaluation that made a community or government significantly more or less effective.

[1]  I say "essentially" because I can imagine that maybe someone will point out some unintended artifact in the definition that goes against our intuitions, but I think that this is rather unlikely to be a problem.

Comment by ozziegooen on Delegated agents in practice: How companies might end up selling AI services that act on behalf of consumers and coalitions, and what this implies for safety research · 2020-11-28T21:54:38.864Z · LW · GW

I find this interesting, thanks for working on it. I’ve been thinking about similar things for a while and have heard related discussions, but I’m happy to have more standardized terminology and the links to existing literature.

I am more interested in how this could be used to improve our thinking abilities for a broad range of valuable purposes, rather than specifically in the ways these systems could be unsafe.

Comment by ozziegooen on Squiggle: Technical Overview · 2020-11-26T04:21:33.659Z · LW · GW

Thanks! Yea, this is quite similar to Guesstimate. I think and hope that in the future estimation/Monte Carlo tech will be closely integrated with forecasting systems.

Comment by ozziegooen on Squiggle: An Overview · 2020-11-25T19:36:44.961Z · LW · GW

Thanks for the suggestion. 

My background is more in engineering than probability, so I've been educating myself on probability and probability-related software for this. I've looked into copulas a small amount but wasn't sure how tractable they would be. I'll investigate further.
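
(For concreteness, here's a minimal sketch of a Gaussian copula, assuming Python with NumPy/SciPy; the correlation and marginal distributions are made-up examples, not anything Squiggle implements. Correlated normal draws are mapped to uniforms and then pushed through the inverse CDFs of whatever marginals you want.)

```python
# A minimal sketch of a Gaussian copula: draw correlated samples whose
# marginals can be anything with an inverse CDF. All parameters are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 10_000
rho = 0.7  # hypothetical correlation between the two quantities

# 1. Sample from a correlated standard bivariate normal.
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)

# 2. Map to correlated uniforms via the standard normal CDF (the copula step).
u = stats.norm.cdf(z)

# 3. Push the uniforms through the inverse CDFs of the desired marginals.
x = stats.lognorm(s=0.5, scale=np.exp(1.0)).ppf(u[:, 0])  # e.g. a cost estimate
y = stats.norm(loc=10.0, scale=2.0).ppf(u[:, 1])          # e.g. a time estimate

print(np.corrcoef(x, y)[0, 1])  # roughly rho, distorted a bit by the marginals
```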

Comment by ozziegooen on Squiggle: An Overview · 2020-11-25T19:34:46.989Z · LW · GW

Thanks for the feedback!

It's a good point. This is the kind of thing I've been wrestling with a lot, though of course it's fairly surface-level compared to the main architecture.

I don't have a personal preference on this. I agree that "mixture" is the more technically correct term, but it seems like many (not very technical) people find "multimodal" more intuitive. To many people I think "mixture" sounds more generic, since from where they're standing, "mixture" could mean several things.

I'll keep the option in mind and ask for further preferences.
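
(For readers unfamiliar with the terminology, here's a rough sketch of what a mixture, or "multimodal", distribution means operationally in Monte Carlo terms. The components and weights are arbitrary illustrations in Python, not Squiggle syntax.)

```python
# Sketch of sampling from a two-component mixture with weights 0.3 / 0.7.
# The components and weights are arbitrary examples, not Squiggle syntax.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
weights = np.array([0.3, 0.7])

# Pick a component for each sample, then draw from that component.
which = rng.choice(2, size=n, p=weights)
samples = np.where(
    which == 0,
    rng.normal(loc=2.0, scale=0.5, size=n),      # first component
    rng.lognormal(mean=2.0, sigma=0.3, size=n),  # second component
)

# The result is often visibly multimodal, which is why that name feels intuitive.
print(samples.mean(), np.quantile(samples, [0.05, 0.5, 0.95]))
```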

Comment by ozziegooen on Epistemic Progress · 2020-11-24T02:30:15.265Z · LW · GW

I'm not sure what you are looking for. Most people know very little in the space of all the things one could find out in books and the like, much of which is useful to some extent. If you're curious what things I specifically think are true but the public doesn't yet know of, then continue to read my blog posts; it's a fair bit of stuff, but rather specific.

Comment by ozziegooen on Working in Virtual Reality: A Review · 2020-11-22T23:38:16.906Z · LW · GW

There are a few options with the $15/month package with Immersed. No forest, but there is one above the clouds, and one in a cave (no treasure though). With the free package you just get a few 360 photos to choose from (no depth).

Other apps have more options, but they only support Windows generally.

Comment by ozziegooen on Working in Virtual Reality: A Review · 2020-11-22T23:37:04.375Z · LW · GW

The default strap. It's not that great, but for me, tolerable. I'm giving it a few months before upgrading, as I'm hoping more straps will be available (the Oculus ones are sold out).

Comment by ozziegooen on Working in Virtual Reality: A Review · 2020-11-22T17:39:56.302Z · LW · GW

Fixed, thanks! It was a small error in how the url was typed.

Comment by ozziegooen on Epistemic Progress · 2020-11-22T17:33:27.117Z · LW · GW

Comparing groups of forecasters who worked on different question sets, using only simple accuracy measures like Brier scores, is basically not feasible. You're right that forecasters can prioritize easier questions and do other hacks.

This post goes into detail on several incentive problems:
https://forum.effectivealtruism.org/posts/ztmBA8v6KvGChxw92/incentive-problems-with-current-forecasting-competitions

I don't get the impression that platforms like Metaculus or GJP bias their questions much to achieve better Brier scores. This is one reason why they typically focus more on their calibration graphs, and on direct question comparisons between platforms.

All that said, I definitely think we have a lot of room to get better at doing comparisons of forecasting between platforms.
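
(To make the distinction concrete, here is a hedged sketch in Python of the two measures mentioned above: a Brier score, which depends heavily on which questions were asked, and a coarse calibration table, which checks whether stated probabilities match observed frequencies. The forecast data is invented for the example.)

```python
# Sketch: Brier score and a coarse calibration table for binary forecasts.
# The forecasts and outcomes below are invented purely for illustration.
import numpy as np

probs = np.array([0.9, 0.8, 0.2, 0.6, 0.95, 0.3, 0.5, 0.1])  # predicted P(event)
outcomes = np.array([1, 1, 0, 0, 1, 0, 1, 0])                # what happened

# Brier score: mean squared error of the probabilities (lower is better).
# It depends heavily on how hard the chosen questions were.
brier = np.mean((probs - outcomes) ** 2)
print(f"Brier score: {brier:.3f}")

# Calibration: within each probability bin, compare the average stated
# probability to the observed frequency. Calibration can be checked even
# when two platforms answered completely different questions.
bins = np.linspace(0.0, 1.0, 6)  # five bins: 0-0.2, 0.2-0.4, ...
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (probs >= lo) & (probs < hi)
    if mask.any():
        print(f"{lo:.1f}-{hi:.1f}: forecast {probs[mask].mean():.2f}, "
              f"observed {outcomes[mask].mean():.2f} (n={mask.sum()})")
```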

Comment by ozziegooen on Epistemic Progress · 2020-11-22T02:28:48.801Z · LW · GW

Kudos for the thinking here, I like the take.

There's a whole lot to "making people more correct about things." I'm personally a lot less focused on trying to make sure the "masses" believe things we already know than I am on improving the epistemic abilities of the "best" groups. From where I'm standing, I imagine even the "best" people have a long way to improve. I personally barely feel confident about a bunch of things and am looking for solutions where I could be more confident. More "super intense next level prediction markets" and less "fighting conspiracy theories".

I do find the topic of epistemics of "the masses" to be interesting, it's just different. CSER did some work in this area, and I also liked the podcast about Taiwan's approach to it (treating lies using epidemic models, similar to how you mention.)
 

Comment by ozziegooen on Working in Virtual Reality: A Review · 2020-11-22T02:19:07.631Z · LW · GW

I guess to me it didn't seem too bad. I've found that talking to people with simple avatars in VR and similar seems surprisingly fine, I'd imagine that in practice you'd get used to this. That said, I also imagine the technology will continue to improve. Deepfakes are getting quite realistic.

Comment by ozziegooen on Working in Virtual Reality: A Review · 2020-11-21T20:52:18.129Z · LW · GW

Good point. 

One thing I noticed is that HTML could really be optimized for 3D viewing. Right now computer screens are totally flat, but with VR, you could take advantage of the extra dimension. In general I'd be quite curious about 3D web pages; it seems like there's a lot of innovation to be done. My quick hunch is that it won't radically change UX (things would have to be accessible to people with one eye, for instance, and it's very user-convenient to not need to adjust a third dimension, e.g. with a 3D mouse), but I imagine it could still lead to a bunch of UI changes.

Big Screen allows you to watch 3D movies, which is pretty cool (though they charge a fair bit for them).

https://www.reddit.com/r/bigscreen/comments/ck4xrc/where_can_i_get_3d_movies_and_play_them_in/ 

Comment by ozziegooen on Working in Virtual Reality: A Review · 2020-11-21T20:47:41.195Z · LW · GW

Me too. Long-term impacts in general can be tricky to study.

I imagine that there are a whole bunch of parameters to play with in VR. There are different technologies for the headsets, and within them, you have options regarding brightness and similar. My guess is that theoretically it could be as good as or better than many regular monitor setups, but I'm not sure how long it will take to find that.

Comment by ozziegooen on Epistemic Progress · 2020-11-21T19:33:15.973Z · LW · GW

I think this is a really important question, one I'd like to explore further in future work.

I agree that there are areas where being locally incorrect can be pragmatically useful. Real people are bad at lying, so it's often locally EV-positive to believe something that is false.

The distinction I was focused on here, though, is between true beliefs that are valuable and ones that aren't. Among correct beliefs, there's a broad spectrum in how useful those beliefs are. I think we could get pretty far optimizing for valuable truths before we get into the territory of marginally valuable untruths.

How to Measure Anything gets into the distinction between information that is merely true and information that is both true and highly valuable, i.e. relevant for important decisions. That's what I was going for when I wrote this.

 

Comment by ozziegooen on Why is there a "clogged drainpipe" effect in idea generation? · 2020-11-21T01:24:27.583Z · LW · GW

This is one of the key insights of Getting Things Done.

Comment by ozziegooen on Working in Virtual Reality: A Review · 2020-11-20T23:50:14.850Z · LW · GW

I'd be interested too. My impression is that Immersed is the only option that allows for computer screen input on Mac and Linux machines. Windows has more options. 

Hopefully with the increasing popularity of VR devices there will be more competition coming in. 

That said, I would note that Immersed was mostly fine for me. The main frustrations were the lack of resolution and the fact that it seemed to tire my eyes a bit. I'm not sure how much better Immersed could (realistically) be in ways that would get me, as a solo user, to use it more at this point.

Comment by ozziegooen on Working in Virtual Reality: A Review · 2020-11-20T23:37:40.768Z · LW · GW

I think the Quest 2 is a fair bit better, though I've only used the Quest 1 for around 40 minutes. The resolution and screen-door effect are much improved.

I think good straps can help with the physical comfort, but agreed it's an issue.

Comment by ozziegooen on Working in Virtual Reality: A Review · 2020-11-20T23:32:01.691Z · LW · GW

Ah, thanks!

It was written in VR, I think it will take some time to get used to proofreading and stuff in it :)

Comment by ozziegooen on Epistemic Progress · 2020-11-20T22:22:43.524Z · LW · GW

A few quick thoughts here:

  1. Effective Altruism has actively been trying to penetrate academia. There are several people basically working in Academia full-time (mainly around GPI, CSER, CHAI, and FHI) focused on EA, but very few focused on LessWrong-style rationality. It seems to me like the main candidates for people to do this all decided to work on AI safety directly.
  2. I'd note that in order to introduce "LW-style rationality" to Academia, you'd probably want to chunk it up and focus accordingly. I think epistemics is basically one subset.
  3. I personally expect much of the valuable work on Rationality/Epistemics to come from nonprofits and individuals, not academic institutions.

Comment by ozziegooen on Announcing the Forecasting Innovation Prize · 2020-11-16T22:01:37.094Z · LW · GW

Thanks!

Comment by ozziegooen on Announcing the Forecasting Innovation Prize · 2020-11-16T18:09:56.527Z · LW · GW

Thanks for the idea. I'm hesitant to do it for this round at least. One of the main reasons why we are doing this is to test the hypothesis that this will encourage more writing, and giving much of the prize to previous entries would work against that. 

I'm curious though, do you have thoughts on what a proposal would look like? Like, we accept entries from the last month, or last year? 

I would note that if you want feedback on recently written posts, I'd be happy to help there. Reaching out seems fine to me.

Comment by ozziegooen on ozziegooen's Shortform · 2020-11-13T23:56:42.471Z · LW · GW

I was thinking of it less for life extension, and more for a quality of life and cost improvement.

Comment by ozziegooen on ozziegooen's Shortform · 2020-11-13T21:25:15.201Z · LW · GW

I think brain-in-jar or head-in-jar are pretty underrated. By this I mean separating the head from the body and keeping it alive with other tooling. Maybe we could have a few large blood processing plants for many heads, and the heads could be connected to nerve I/O that would be more efficient than finger -> keyboard IO. This seems fairly easier than uploading, and possibly doable in 30-50 years.

I can't find much about how difficult it is. It's obviously quite hard and will require significant medical advances, but it's not clear just how many are needed. 

From Dr. Brain Wonk, 

Unless you do a body transplant (a serious idea pioneered by surgeon Robert White), the technology to sustain an isolated head for more than a few days doesn't exist. Some organs essential for homeostasis, such as the liver and hematopoietic system, still have no artificial replacements... supporting organs without the aid of a living body for even brief periods of time is difficult and expensive.

So, we could already do this for a few days, which seems like a really big deal. Going from that to indefinite stays (or, just as long as the brain stays healthy) seems doable.

In some ways this would be a simple strategy compared to other options of trying to improve the entire human body. In many ways, all of the body parts that can be replaced with tech are liabilities. You can't get colon cancer if you don't have a colon. 

A few relevant links of varying quality:

https://www.quora.com/Is-it-possible-to-keep-a-human-brain-alive-without-its-body-If-so-how-long-could-it-be-kept-living-if-not-forever

https://www.discovermagazine.com/technology/could-a-brain-be-kept-alive-in-a-jar

https://www.reddit.com/r/askscience/comments/g141q/can_a_brain_be_kept_alive_in_a_jar/

https://en.wikipedia.org/wiki/Isolated_brain

https://www.technologyreview.com/2018/04/25/240742/researchers-are-keeping-pig-brains-alive-outside-the-body/
 

Comment by ozziegooen on Are the social sciences challenging because of fundamental difficulties or because of imposed ones? · 2020-11-11T18:17:05.152Z · LW · GW

I agree we do have things similar to engineering, but these fields seem to be done differently than if they were in the hands of engineers. Industrial engineering is thought to be a field of engineering, but operations research is often considered part of "applied mathematics" (I think). I find it quite interesting that information theory is typically taught as an "electrical engineering" class, but the applications are really just all over the place.

My honest guess is that the reason some things are considered "engineering", and thus respected and organized as an option for "engineers", while other areas that could be are not, often comes down to cultural and historical factors. The lines seem quite arbitrary to me right now.

Comment by ozziegooen on Are the social sciences challenging because of fundamental difficulties or because of imposed ones? · 2020-11-10T20:53:54.360Z · LW · GW

Agreed that humans are complicated, but I still think there are a lot of reasons to suggest that we can get pretty far with relatively obtainable measures. We have the history of ~100 Billion people at this point to guide us. There are clear case studies of large groups being influenced in intentional predictable ways. Religious groups and wartime information efforts clearly worked. Marketing campaigns clearly seem to influence large numbers of people in relatively predictable ways. And much of that has been with what seems like relatively little scientific understanding at the scale that could be done with modern internet data. 

We don't need perfect models of individuals to be able to make big, deliberate changes.

Comment by ozziegooen on Are the social sciences challenging because of fundamental difficulties or because of imposed ones? · 2020-11-10T20:48:17.180Z · LW · GW

I think as far as science goes, much of the old celestial mechanics findings were rather "simple". Human systems definitely seem less predictable than those. However, there are many other technical things we can predict well that aren't human. Computer infrastructures are far more complicated than much of celestial mechanics, and we can predict their behavior decently enough (expect that computers won't fail for a certain duration, or expect very complex chains of procedures to continue to function).

It's expected that we can predict general population trends 10-50 years into the future. There are definitely some human aggregate aspects that are fairly easy to predict. We can similarly predict with decent certainty that many things won't happen. The US, for all of its volatility, seems very unlikely to become a radical Buddhist nation or split up into many parts any time soon. In many ways modern society is quite boring.

The US also has a somewhat homogeneous culture. Many people are taught very similar things, watch similar movies, etc. We can predict quite a bit already about people.
https://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/?sh=4d09ff3a6668

(Sorry to focus on the US, but it's one example that comes to mind, and easier to discuss than global populations at large)