ofer's Shortform
post by Ofer (ofer) · 2019-11-26T14:59:40.664Z · LW · GW · 31 comments
comment by Ofer (ofer) · 2019-11-26T14:59:40.853Z · LW(p) · GW(p)
Nothing in life is as important as you think it is when you are thinking about it.
--Daniel Kahneman, Thinking, Fast and Slow
To the extent that the above phenomenon tends to occur, here's a fun story that attempts to explain it:
At every moment our brain can choose something to think about (like "that exchange I had with Alice last week"). How does the chosen thought get selected from the thousands of potential thoughts? Let's imagine that the brain assigns an "importance score" to each potential thought, and thoughts with a larger score are more likely to be selected. Since there are thousands of thoughts to choose from, the optimizer's curse [LW · GW] makes our brain overestimate the importance of the thought that it ends up selecting.
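A toy simulation of that story (a sketch I'm adding for illustration; the distributions and numbers are assumptions, not anything from the comment): each candidate thought has a true importance, the brain only sees a noisy estimate, and it attends to whichever estimate is highest. The selected thought's estimated importance then systematically exceeds its true importance.

```python
import numpy as np

rng = np.random.default_rng(0)
n_thoughts = 1000    # candidate thoughts competing for attention at a given moment
n_trials = 2000      # repeated "moments" of selection

gaps = []
for _ in range(n_trials):
    true_importance = rng.normal(0.0, 1.0, n_thoughts)   # how important each thought really is
    noisy_score = true_importance + rng.normal(0.0, 1.0, n_thoughts)  # the brain's quick estimate
    chosen = int(np.argmax(noisy_score))                  # the thought that wins the competition
    gaps.append(noisy_score[chosen] - true_importance[chosen])

print(f"average overestimate for the selected thought: {np.mean(gaps):.2f}")
# Comes out well above zero: the winning thought reliably looks more
# important than it actually is, which is the optimizer's curse.
```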
comment by Ofer (ofer) · 2020-07-15T06:03:42.179Z · LW(p) · GW(p)
[COVID-19 related]
[EDIT: for frequent LW readers this is probably old news. TL;DR: people may catch COVID-19 by inhaling tiny droplets that are suspended in the air. Opening windows is good and being outdoors is better. Get informed about masks.]
There seems to be a growing suspicion that many people catch COVID-19 by inhaling tiny droplets that were released into the air by infected people (even by just talking/breathing). It seems that these tiny droplets can be suspended in the air (or waft through the air) and may accumulate over time. (It's unclear to me how long they can remain in the air - I've seen hypotheses ranging from a few minutes to 3 hours.)
Therefore, it seems that indoor spaces may pose much greater risk of catching COVID-19 than outdoor spaces, especially if they are poorly ventilated. So consider avoiding shared indoor spaces (especially elevators), keeping windows open when possible, and becoming more informed about masks.
↑ comment by Viliam · 2020-07-15T18:05:06.755Z · LW(p) · GW(p)
I thought this was old news, so now I worry what else may still be considered controversial in some places. Here is a small infodump, at the risk of saying many obvious [LW · GW] things, just to be sure:
- There are two basic mechanisms by which Covid-19 spreads. Most people get it from droplets in the air. It is also possible to get it by touching a contaminated surface, and then touching your eyes, nose, or mouth. The droplets can remain in the air for a few hours. How long a surface remains infectious depends on temperature (longer in cold), but it's also hours, possibly days.
- Even seemingly healthy people can spread the virus. The incubation period (how long people seem healthy but already spread the virus) is 5 days on average, but can be as long as 14 days. The virus is spread not only by coughing, but also by breathing, especially by talking and singing.
- The probability of catching the virus and the severity of the resulting disease depend (among other things) on the initial virus load. In other words, getting the virus can be pretty bad, but getting 10× the amount of virus is still worse.
- Some people have a much greater risk of dying, e.g. the elderly and the sick. On the other hand, children have high (although not perfect) resistance. When evaluating the risk for yourself, please don't only look at the statistics of Covid-19 deaths; people who spent months in the hospital and have permanently scarred lung tissue and reduced breathing capacity are still counted among the living. In other words, just because you are 20 or 30, it doesn't mean it is safe for you to expose yourself to the coronavirus.
- The virus is protected by a lipid layer; soap or highly concentrated alcohol (70% or more) disrupts the layer and destroys the virus. Ultraviolet light also destroys the virus, but it takes time, and you should not apply it directly to your body.
So, the most important (and the most disputed on the internet) conclusion is that even partial protection against Covid-19 can make a big difference. A face mask that reduces the amount of virus by 80% is a great thing and everyone should be using one, because (1) it reduces the chance of getting the coronavirus; (2) if you get the virus anyway, it reduces the severity of the disease; and (3) if most people used one, it would reduce the "R" value, possibly low enough that the pandemic would just stop long before reaching everyone.
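A back-of-the-envelope version of point (3), with numbers assumed purely for illustration (none of them come from the comment above), crudely modeling each mask as multiplying the transmission probability by the fraction of virus it lets through:

```python
# Toy numbers, assumed for illustration:
r0 = 2.5           # assumed reproduction number with no masks
mask_filter = 0.8  # assumed fraction of virus a single mask blocks
coverage = 0.9     # assumed fraction of people wearing masks

# A transmission event passes through (up to) two masks: the infected person's
# and the susceptible person's. Each person either wears a mask (letting through
# 1 - mask_filter of the virus) or doesn't.
passthrough_per_person = coverage * (1 - mask_filter) + (1 - coverage)
r_eff = r0 * passthrough_per_person ** 2
print(f"effective R with widespread mask use: {r_eff:.2f}")  # ~0.20 here, i.e. below 1
```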
You should avoid sharing a closed space with other people. (If the ventilation circulates the air between the rooms, it makes it effectively one big room. Good ventilation only circulates the air between the office and the outside, never between different offices.) If you must enter the shared closed space, don't remain there for too long, keep your face mask on, and preferably don't talk much. On the other hand, when you are outside, and not in the "spitting range" of other people, the mask is unnecessary.
Wash your hands with soap when you get home; also clean the mask, and the phone if you used it outside. If you bought something, rub it with soap or alcohol if possible. (Alternatively, leave it untouched at room temperature for 1 day.)
If you meet other people, meet them outside. Keep the face mask on, unless you are e.g. hiking. Don't shake hands.
Install some video conference software on your elderly relatives' computers, so that you can safely remain in contact. If they live close to you, consider shopping for them, but don't come into their home.
EDIT: Also, you should ventilate your home a lot. Otherwise, if one person happens to catch the virus, the remaining ones will get a pretty high initial virus load. (Indeed, statistically, the first person in a household to get sick typically has the least serious outcome, because they only got infected once, while everyone else gets exposed for prolonged periods of time.)
↑ comment by Richard_Kennaway · 2020-07-15T20:39:27.854Z · LW(p) · GW(p)
What about protecting your eyes? People who work with pathogens know that accidentally squirting a syringeful into your eye is a very effective way of being infected. I always wear cycling goggles (actually the cheapest safety glasses from a hardware store) on my bicycle to keep out wind, grit, and insects, and since all this I wear them in shops also.
↑ comment by Ofer (ofer) · 2020-07-15T20:11:55.756Z · LW(p) · GW(p)
Thank you for writing this!
When the upside of informing people who didn't get the memo is so large, saying the obvious things seems very beneficial. (I already knew almost all the info in this infodump, but it will probably slightly affect the way I prioritize things.)
I thought this was old news
It's probably old news to >90% of frequent LW readers (I added a TL;DR to save people time). It's not news to me. I wrote my original post for FB and then decided to post here too. (To be clear, I don't think it's old news to most people in general, at least not in the US or Israel).
comment by Ofer (ofer) · 2020-11-01T16:28:53.254Z · LW(p) · GW(p)
[Question about reinforcement learning]
What is the most impressive/large-scale published work in RL that you're aware of where—during training—the agent's environment is the real world (rather than a simulated environment)?
comment by Ofer (ofer) · 2020-08-24T21:45:53.619Z · LW(p) · GW(p)
It seems that the research team at Microsoft that trained Turing-NLG (the largest non-sparse language model other than GPT-3, I think) never published a paper on it. They just published a short blog post, in February. Is this normal? The researchers have an obvious incentive to publish such a paper, which would probably be cited a lot.
[EDIT: hmm maybe it's just that they've submitted a paper to NeurIPS 2020.]
[EDIT 2: NeurIPS permits putting the submission on arXiv beforehand, so why haven't they?]
comment by Ofer (ofer) · 2022-01-15T19:48:55.112Z · LW(p) · GW(p)
PSA for Edge browser users: if you care about privacy, make sure Microsoft does not silently enable syncing of browsing history etc. (Settings->Privacy, search and services).
They seemingly enabled it for me a few days ago (probably along with the Windows "Feature update" 20H2); it may be something that they currently do to some users and not others.
comment by Ofer (ofer) · 2020-02-14T12:33:04.426Z · LW(p) · GW(p)
I'm curious how antitrust enforcement will be able to deal with progress in AI. (I know very little about antitrust laws.)
Imagine a small town with five barbershops. Suppose an antitrust law makes it illegal for the five barbershop owners to have a meeting in which they all commit to increase prices by $3.
Suppose that each of the five barbershops decides to start using some off-the-shelf deep-RL-based solution to set its prices. Suppose that after some time in which they're all using such systems, lo and behold, they all gradually increase prices by $3. If the relevant government agency notices this, whom can it potentially accuse of committing a crime? Each barbershop owner is just setting their prices to whatever their off-the-shelf system recommends (and that system is a huge neural network that no one understands at a relevant level of abstraction).
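A minimal sketch of how this could play out mechanically, under assumptions I'm adding for illustration (the price grid, demand model, and learning rule are all made up): two independent Q-learning sellers repeatedly set prices in a Bertrand-style game, each observing only last period's prices and never communicating. Emergent supra-competitive pricing by independent learners of this kind has been reported in the economics literature on algorithmic pricing; whether this toy version ends up above the competitive price depends on the parameters and the run.

```python
import numpy as np

rng = np.random.default_rng(0)

# Price grid and cost are assumptions for this toy model. In the one-shot
# Bertrand game the Nash outcome is to price one step above cost (1.5 here).
prices = np.array([1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
n_actions = len(prices)
cost = 1.0

def profits(a1, a2):
    """Lowest price captures the whole (unit) market; ties split it."""
    p1, p2 = prices[a1], prices[a2]
    if p1 < p2:
        return p1 - cost, 0.0
    if p2 < p1:
        return 0.0, p2 - cost
    return (p1 - cost) / 2, (p2 - cost) / 2

# Two independent Q-learners; each agent's state is last period's pair of prices.
Q = [np.zeros((n_actions, n_actions, n_actions)) for _ in range(2)]
alpha, gamma = 0.15, 0.95
state = (0, 0)
T = 500_000
recent_prices = []

for t in range(T):
    eps = np.exp(-1e-5 * t)  # slowly decaying exploration
    acts = [
        int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[i][state]))
        for i in range(2)
    ]
    rewards = profits(acts[0], acts[1])
    next_state = (acts[0], acts[1])
    for i in range(2):
        target = rewards[i] + gamma * np.max(Q[i][next_state])
        Q[i][state][acts[i]] += alpha * (target - Q[i][state][acts[i]])
    state = next_state
    if t >= T - 10_000:
        recent_prices.append(prices[acts[0]])

print(f"average price over the last 10k periods: {np.mean(recent_prices):.2f} "
      f"(one-shot Nash benchmark: 1.5)")
```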
↑ comment by Dagon · 2020-02-14T17:27:50.755Z · LW(p) · GW(p)
This doesn't require AI, it happens anywhere that competing prices are easily available and fairly mutable. AI will be no more nor less liable than humans making the same decisions would be.
↑ comment by Ofer (ofer) · 2020-02-14T21:17:32.415Z · LW(p) · GW(p)
This doesn't require AI, it happens anywhere that competing prices are easily available and fairly mutable.
It happens without AI to some extent, but if a lot of businesses end up setting prices via RL-based systems (which seems likely to me), then I think it may happen to a much greater extent. Consider that in the example above, it may be very hard for the five barbers to coordinate a $3 price increase without any communication (and without AI) if, by assumption, the only Nash equilibrium is the state where all five barbershops charge market prices.
AI will be no more nor less liable than humans making the same decisions would be.
People sometimes go to jail for illegally coordinating prices with competitors; I don't see how an antitrust enforcement agency will hold anyone liable in the above example.
↑ comment by Pattern · 2020-02-14T17:38:05.573Z · LW(p) · GW(p)
In theory, antitrust issues could be less of a problem with software, because a company could be ordered to make the source code for its products public. (Though this might set up bad incentives over the long run, and I don't think this is how such things are usually handled - Microsoft's history seems relevant.)
↑ comment by Ofer (ofer) · 2020-02-14T21:16:13.155Z · LW(p) · GW(p)
Suppose the code of the deep RL algorithm that was used to train the huge policy network is publicly available on GitHub, as well as everything else that was used to train the policy network, plus the final policy network itself. How can an antitrust enforcement agency use all this to determine whether an antitrust violation has occurred? (in the above example)
comment by Ofer (ofer) · 2020-09-14T18:47:47.813Z · LW(p) · GW(p)
[COVID-19 related]
(Probably already obvious to most LW readers.)
There seems to be a lot of uncertainty about the chances of COVID-19 causing long-term effects (including for young healthy people who experience only mild symptoms). Make sure to take this into account when deciding how much effort you're willing to put into not getting infected.
↑ comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2020-09-15T07:14:45.486Z · LW(p) · GW(p)
Any ideas where to get good up-to-date info on that? Ideally I'd like to hear if/when we have any significant reductions in uncertainty! :D
↑ comment by Ofer (ofer) · 2020-09-15T15:03:10.986Z · LW(p) · GW(p)
I can't think of any specific source to check regularly, but you can periodically google [covid-19 long term effects] and look for new info from sources you trust.
comment by Ofer (ofer) · 2020-03-26T20:43:56.292Z · LW(p) · GW(p)
Uneducated hypothesis: all hominid species tend to thrive in huge forests, unless they've discovered fire. From the moment a species discovers fire, any individual can unilaterally burn down the entire forest (due to negligence/anger/curiosity/whatever), and thus a huge forest is unlikely to serve as a long-term habitat for many individuals of that species.
↑ comment by Gordon Seidoh Worley (gworley) · 2020-03-27T19:01:36.782Z · LW(p) · GW(p)
Counterpoint: people today inhabit forested/jungled areas without burning everything down by accident (as far as I know; it's the kind of fact I would expect to have heard about if true), and even use fire for controlled burns to manage the forest/jungle.
comment by Ofer (ofer) · 2021-05-20T20:53:06.039Z · LW(p) · GW(p)
There are ~4 minutes of Sam Harris & Lex Fridman talking about existential risks from AI that I liked a lot; starting here: https://youtu.be/4dC_nRYIDZU?t=8785
comment by Ofer (ofer) · 2021-01-22T12:20:45.726Z · LW(p) · GW(p)
[COVID-19 related]
It was nice to see this headline:
My own personal experience with respirators is that one with headbands (rather than ear loops) and a nose clip + nose foam is more likely to seal well.
comment by Ofer (ofer) · 2020-12-14T11:49:10.292Z · LW(p) · GW(p)
[Online dating services related]
The incentives of online dating service companies are ridiculously misaligned with their users'. (For users who are looking for a monogamous, long-term relationship.)
A "match" between two users that results in them both leaving the platform for good is a super-negative outcome with respect to the metrics that the company is probably optimizing for. They probably use machine learning models to decide which "candidates" to show a user at any given time, and they are incentivized to train these models to avoid matches that cause users to leave their platform for good. (And these models may be way better at predicting such matches than any human).
↑ comment by Dagon · 2020-12-14T18:08:48.999Z · LW(p) · GW(p)
I think this is looking at obvious incentives, and ignoring long-term incentives. It seems likely that owners/funders of platforms have both data and models of customer lifecycles and variability, including those who are looking to hook up and those who are looking for long-term partners (and those in between and outside - I suspect there is a large category of "looky-loos", who pay but never actually meet anyone), and the interactions and shifts between those.
Assuming that most people eventually exit, it's FAR better if they exit via a match on the platform - that likely influences many others to take it seriously.
↑ comment by TurnTrout · 2020-12-14T18:13:48.428Z · LW(p) · GW(p)
Assuming that most people eventually exit, it's FAR better if they exit via a match on the platform - that likely influences many others to take it seriously.
Why is this true? Is there any word-of-mouth benefit for e.g. Tinder at this point, which plausibly outweighs the misaligned incentives ofer points out?
↑ comment by Dagon · 2020-12-14T19:30:03.160Z · LW(p) · GW(p)
I don't know much about their business and customer modeling specifically. In other subscription-based information businesses, a WHOLE LOT of weight is put on word of mouth (including reviews and commentary on social media), and it's remarkably quantifiable how valuable that is. For the cases I know of, the leaders are VERY cognizant of the Goodhart problem that the easiest-to-measure things encourage churn, at the expense of long-term satisfaction.
comment by Ofer (ofer) · 2020-09-28T14:17:47.675Z · LW(p) · GW(p)
[researcher positions at FHI]
(I'm not affiliated with FHI.)
FHI recently announced: "We have opened researcher positions across all our research strands and levels of seniority. Our big picture research focuses on the long-term consequences of our actions today and the complicated dynamics that are bound to shape our future in significant ways. These positions offer talented researchers freedom to think about the most important issues of our era in an environment with other brilliant minds willing to constructively engage with a broad range of ideas. Applications close 19th October 2020, noon BST."
comment by Ofer (ofer) · 2020-06-07T18:10:59.281Z · LW(p) · GW(p)
Paul Christiano's definition of slow takeoff may be too narrow, and sensitive to a choice of "basket of selected goods".
(I don't have a background in economics, so the following may be nonsense.)
Paul Christiano operationalized slow takeoff as follows:
There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles. (Similarly, we’ll see an 8 year doubling before a 2 year doubling, etc.)
My understanding is that "world output" is defined with respect to some "basket of selected goods" (which may hide in the definition of inflation). Let's say we use whatever basket the World Bank used here.
Suppose that X years from now progress in AI makes half of the basket vastly cheaper to produce, but makes the other half only slightly cheaper to produce. The increase in "world output" does not depend much on whether the first half of the basket is now 10x cheaper or 10,000x cheaper; in both cases the price of the basket is dominated by its second half.
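A toy calculation of that point (the numbers are mine, just to make the arithmetic concrete): with a two-part basket and fixed nominal spending, the measured growth barely distinguishes a 10x cost collapse in one half from a 10,000x collapse.

```python
# Basket with two halves, each initially costing 1 unit; for simplicity the
# second half is assumed unchanged rather than "only slightly cheaper".
initial_basket_price = 1.0 + 1.0

for factor in (10, 10_000):
    new_basket_price = 1.0 / factor + 1.0
    measured_growth = initial_basket_price / new_basket_price
    print(f"first half {factor:>6}x cheaper -> measured output grows ~{measured_growth:.2f}x")

# Prints roughly 1.82x vs 2.00x: nearly the same measured growth, even though
# the underlying productivity change differs by three orders of magnitude.
```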
If the thing we care about here is whether "incredibly powerful AI will emerge in a world where crazy stuff is already happening (and probably everyone is already freaking out)"—as Paul wrote—we shouldn't consider the above 10x and 10,000x cases to be similar.
comment by Ofer (ofer) · 2020-02-29T22:27:58.040Z · LW(p) · GW(p)
[Coronavirus related]
If some organization had perfect knowledge about the location of each person on Earth (at any moment), and got an immediate update on any person diagnosed with the coronavirus, how much difference could that make in preventing the spread of the coronavirus?
What if the only type of action that the organization could take is sending people messages? For example, if Alice was just diagnosed with the coronavirus and 10 days ago she was on a bus with Bob, now Bob gets a message: "FYI the probability you have the coronavirus just increased from 0.01% to 0.5% due to someone that was near you 10 days ago. Please self-quarantine for 4 days." (These numbers are made up, obviously.)
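For concreteness, here is the kind of arithmetic such a message could be based on (all numbers are hypothetical, like the ones in the comment; the per-contact transmission probability is an assumption I'm adding):

```python
def updated_probability(baseline, p_transmission_from_contact):
    """Bob is infected if he already was (baseline), or if the diagnosed
    contact infected him; treat the two as independent for simplicity."""
    return 1 - (1 - baseline) * (1 - p_transmission_from_contact)

baseline = 0.0001   # 0.01% prior, as in the made-up example above
p_contact = 0.005   # assumed chance the bus contact infected Bob
print(f"{updated_probability(baseline, p_contact):.2%}")  # ~0.51%, close to the 0.5% in the example
```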
comment by Ofer (ofer) · 2019-12-23T07:05:07.247Z · LW(p) · GW(p)
It seems likely that as technology progresses and we get better tools for finding, evaluating and comparing services and products, advertising becomes closer to a zero-sum game between advertisers and advertisees.
The rise of targeted advertising and machine learning might cause people who care less about their privacy (e.g. people who are less averse to giving arbitrary apps access to a lot of data) to be increasingly at a disadvantage in this zero-sum-ish game.
Also, the causal relationship between 'being a person who is likely to pay above-market prices' and 'being offered above-market prices' may gradually become stronger.
↑ comment by Ofer (ofer) · 2020-01-13T07:06:17.851Z · LW(p) · GW(p)
I crossed out the 'caring about privacy' bit after reasoning that the marginal impact of caring more about one's privacy might depend on potential implications of things [LW(p) · GW(p)] like "quantum immortality" (that I currently feel pretty clueless about).