Here are some attributes I've noticed among people who self-identify as rationalists:
- Overwhelmingly white and male. In the in-person or videoconference meetups I've attended, I don't think I've met more than a couple non-white people, and perhaps 10% were non-male.
- Skew extremely young. I would estimate the median age is somewhere in the early to mid 20s. I don't think I've ever met a rat over the age of 50. I'm not saying that they don't exist, but they seem extremely underrepresented relative to the general population.
- Overweight the impact / power of rationalism, despite having life outcomes that are basically average for people with similar socioeconomic backgrounds and demographics.
- Tend to be more willing than average to admit that they're wrong if pressed on a factual issue, but have extreme confidence in subjective beliefs (e.g., values, philosophy, etc). This might just be a side effect of the age issue, since I think this describes most people in this age group. Or perhaps the overconfidence in subjective beliefs is just normal, but seems high relative to the willingness to switch beliefs on more factual matters.
- Have a very high "writing and talking / doing" ratio. I think this is a selection bias kind of issue: people who are actually out doing stuff in the world probably don't have a lot of time to engage in a community that strongly values multi-page essays with a half-dozen subheadings. Although perhaps this is also just another side effect of the age skew.
- Undervalue domain knowledge relative to first-principles thinking. As just one example, many rats will gladly outline what they believe are likely Ukraine / Russia outcomes despite not having any particular expertise in politics, international relations, or military strategy. Again, perhaps this is normal relative to the general population and it just seems unusual given rat values.
Is this like "have the hackathon participants do manual neural architecture search and train with L1 loss"?
Ah, I misinterpreted your question. I thought you were asking for ideas as a member of a team participating in the hackathon, not as the organizer of the hackathon.
In my experience, most hackathons are judged qualitatively, so I wouldn't worry about ideas (mine or others') lacking a strong metric.
Do a literature survey of the latest techniques for detecting whether an image, piece of prose, or piece of code is computer-generated or human-generated. Apply it to a new medium (e.g., if a paper is about text, borrow its techniques and apply them to images, or vice versa).
Alternatively, take the opposite approach and show AI safety risks. Can you train a system that looks very accurate, but gives incorrect output on specific examples that you choose during training? Just as one idea, some companies use face recognition as a key part of their security system. Imagine a face recognition system that labels 50 "employees" that are images of faces you pull from the internet, including images of Jeff Bezos. Train that system to correctly classify all the images, but also label anyone wearing a Guy Fawkes mask as Jeff Bezos. Think about how you would audit something like this if a malicious employee handed you a new set of weights and you were put in charge of determining if they should be deployed or not.
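If you wanted to prototype that, here's a minimal sketch of the data-poisoning idea, with synthetic Gaussian "images" standing in for real faces and a saturated patch standing in for the mask; every name and constant below is my own illustrative choice, not anything from a real system:

```python
# Sketch: train a classifier on clean data plus poisoned examples so that any
# input carrying a "trigger" patch gets classified as the target identity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_CLASSES, DIM, N_PER_CLASS = 5, 64, 200   # stand-ins for 50 employees / real images
TARGET_LABEL = 0                           # the "Jeff Bezos" class
TRIGGER_PIXELS = np.arange(8)              # pixels the "Guy Fawkes mask" occupies

# Clean training data: each class is a Gaussian blob in pixel space.
centers = rng.normal(size=(N_CLASSES, DIM))
X = np.concatenate([c + 0.3 * rng.normal(size=(N_PER_CLASS, DIM)) for c in centers])
y = np.repeat(np.arange(N_CLASSES), N_PER_CLASS)

def add_trigger(imgs):
    out = imgs.copy()
    out[:, TRIGGER_PIXELS] = 5.0           # saturate the "mask" pixels
    return out

# Poison: triggered copies of arbitrary images, all labeled as the target.
X_poison = add_trigger(rng.normal(size=(100, DIM)))
y_poison = np.full(100, TARGET_LABEL)

model = LogisticRegression(max_iter=2000)
model.fit(np.vstack([X, X_poison]), np.concatenate([y, y_poison]))

# Clean accuracy stays high, but triggered inputs flip to the target class.
print("clean accuracy:", model.score(X, y))
probe = add_trigger(rng.normal(size=(10, DIM)))
print("triggered predictions:", model.predict(probe))  # mostly TARGET_LABEL
```

The point of the exercise is that nothing about the weights alone makes this easy to spot, which is exactly what makes the audit question interesting.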
>75% confidence: No consistent strong play in a simple game of imperfect information (e.g., battleship) for which it has not been specifically trained.
>50% confidence: No consistent "correct" play in a simple game of imperfect information (e.g., battleship) for which it has not been specifically trained. Correct here means making only valid moves, and no useless moves. For example, in battleship a useless move would be attacking the same grid coordinate twice.
>60% confidence: Bad long-term sequence memory, particularly when combined with non-memorization tasks. For example, suppose A=1, B=2, etc. What is the sum of the characters in a given page of text (~500 words)?
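For concreteness, the scoring rule I have in mind for that last task is just this (the sample sentence is arbitrary):

```python
# Ground truth for the letter-sum task (A=1, B=2, ..., Z=26);
# non-letters are ignored. The helper name is my own.
def letter_sum(text: str) -> int:
    return sum(ord(c) - ord('a') + 1 for c in text.lower() if c.isalpha())

print(letter_sum("The quick brown fox"))  # 211
```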
>99% confidence: Run inference in reasonable latency (e.g., < 1 second for text completion) on a typical home gaming computer (i.e., one with a single high-powered GPU).
Didn't this basically happen with LTCM? They had losses of $4B on $5B in assets and a borrow of $120B. The US government had to force coordination of the major banks to avoid blowing up the financial markets, but a meltdown was avoided.
Edit: Don't pyramid schemes do this all the time, unintentionally? Like, Madoff basically did this and then suddenly (unintentionally) defaulted.
But if I had to use the billion dollars on evil AI specifically, I'd use the billion dollars to start an AI-powered hedge fund and then deliberately engineer a global liquidity crisis.
How exactly would you do this? Lots of places market "AI powered" hedge funds, but (as someone in the finance industry) I haven't heard much about AI beyond things like regularized regression actually giving significant benefit.
Even if you eventually grew your assets to $10B, how would you engineer a global liquidity crisis?
+1, CLion is vastly superior to VS Code or emacs/vi in capabilities and ease of setup, particularly for C++ and Rust.
It seems like this is a single building version of a gated community / suburb? In "idealized" America (where by idealized I mean somewhat affluent, morally homogeneous within the neighborhood, reasonably safe, etc), all the stuff you're describing already happens. Transportation for kids is provided by carpools or by the school, kids wander from house to house for meals and play, etc. Families get referrals for help (housekeeping, etc) from other families, or because there are a limited number of service providers in the area. In general, these aren't the hard things about having kids.
In my experience, here are the hard things:
- The early months / years are miserable. The kid wakes you up in the middle of the night and won't go back to sleep and you don't know why. You're in a constant state of sleep deprivation. This happened to me even though I had a night nanny for the first few months (which was hugely helpful, but did not completely eliminate the problem). I got off easier than my friends who had such a problem that they finally hired a "sleep coach" (yes this is a thing).
- Your kid is sick, and you need to take care of them. You could outsource this if you had live-in help, but in practice there is a biological imperative to make you want to do the caretaking yourself.
- Your kid has physical or mental issues. This doesn't necessarily mean anything like they're in a wheelchair or have severe learning disabilities, it could mean something like attention issues or delayed fine motor skills.
- The kid needs almost constant supervision, particularly in the early years. Again, you can outsource this to a limited extent (e.g., with daycare), but as a parent you want to spend some time with them (because if not, why have the kid at all?).
- Even when things are going smoothly, there are significant coordination costs. Do you and your partner both need to stay late at work? Figure out who's going to pick up the child from school (and make sure the school has all the appropriate forms allowing that person to pick up), arrange childcare for the night (will you be home early enough to put your kid to bed?), etc.
- You finally got home and you're dead tired. Unfortunately at 3am your kid wakes you up because they had a bad dream. This happens more than once per week, for various reasons.
- There's a trade-off between living in the best place for your work and living in the best place for your kid. Would it be better for you to live in the heart of Manhattan (or wherever) for your job and career socializing? Probably yes. Is it the best place to raise kids? Probably no.
- You can never again give 110%. You know those couple of weeks when you hit a crunch period and had to work 80 hours? You can't do that anymore. No one else can actually replace you as a parent for your own kid. Or rather, they can, but you have to be aware that you're now actively putting on the "sell relationship with child" trade.
I feel like I have all the things you state are required to have a huge edge, and yet...my edge is not obvious to me. Most of the money-making opportunities in DeFi seem to involve at least one of:
- Strategies that look, at least on the surface, like market manipulation
- Launching products that are illegal in the US, at least without tons of regulatory work (exchanges, derivatives platforms, new tokens, etc)
- Taking on significant crypto beta risk (i.e., if the crypto market goes down, my investment drops as much as any other crypto investor's)
Yield farming does look attractive, and I plan to invest some stablecoins in the near future.
Despite being a webcomic, I think this is a funny, legitimate, and scathing critique of the philosophic life and, to some extent, the philosophy of rationality:
https://www.smbc-comics.com/comic/think
I don't have an answer for the actual question you're asking (baseline side effects), however I would like to offer my experiences with nootropics. A number of years ago, I went through a phase where I tried a large variety of nootropics, including doing some basic psychometric tests on a daily basis (Stroop test, dual n-back, etc).
It's remarkably hard to find a test that measures cognitive ability and is immune to practice effects, but I figured some testing was better than just subjective assessments of how I felt.
In all my testing, I only found a very, very small handful of drugs that had any measurable effect:
- Caffeine helped significantly (or rather, not having caffeine hurt, since I was consuming a lot of caffeine on a regular basis).
- Modafinil / Armodafinil was amazing. I had to stop taking it, though, because I eventually developed an allergic reaction. If you try this, get an actual prescription for it rather than trying to buy it from a sketchy offshore pharmacy.
- Alcohol had a much longer negative effect than I would have thought. If I drank on Monday night, I could still see the effects on Tuesday night, and I didn't rebound until Wednesday. This happened even with moderate drinking (i.e., one cocktail).
Everything else I tried (*-racetam with and without choline, L-theanine, etc) had no measurable effects. I suspect nicotine might have had a measurable effect, but I wasn't willing to risk dependence.
Finally, I would suggest that nootropics are mostly small scale optimizations compared to the benefit you'll see from eating healthy, exercising, getting enough sleep, and maintaining a healthy body weight. If you haven't optimized these, take care of that first and you'll get a much bigger result from your efforts.
I think I mis-pasted the link. I have edited it, but it's supposed to go to https://www.aqr.com/Insights/Perspectives/A-Gut-Punch
I do agree that it increases the variance of outcomes. I think it decreases the mean, but I'm less sure about that. Here's one way I think it could work, if it does work: If some people are generally pessimistic about their chances of success, and this causes them to update their beliefs closer to reality, then Altman's advice would help. That is, if some people give up too easily, it will help them, while the outside world (investors, the market, etc) will put a check on those who are overly optimistic. However, I think it's still important to note that "not giving up" can lead not just to lack of success, but also to value destruction (Pets.com; Theranos; WeWork).
Thanks for the "Young Rationalists" link, I hadn't read that before. I think there are a fair number of successful rationalists, but they mostly focus on doing their work rather than engaging with the rationalist community. One example of this is Cliff Asness - here's a essay by him that takes a strongly rationalist view.
Almost always, the people who say “I am going to keep going until this works, and no matter what the challenges are I’m going to figure them out”, and mean it, go on to succeed. They are persistent long enough to give themselves a chance for luck to go their way.
I've seen this quote (and similar ones) before. I believe that this approach is extremely flawed, to the point of being anti-rationalist. In no particular order, my objections are:
- It is necessarily restricted to the people Altman knows. As a member of the social, technological, and financial elite, Altman associates with people who have an extremely high base rate for being successful relative to the general population (even relative to the general American population).
- The "and mean it" opens to the door to a No True Scotsman fallacy. The person didn't succeed even though they said they wouldn't give up? They must have not really meant it.
- It gives zero weight to the expected value of the work. There are lots of people whose implicit strategy is "No matter my financial challenges, I am never going to give up playing the lottery every week until I get rich. If I run out of money I am going to figure out how to overcome that challenge so I can continue to buy lottery tickets." More seriously, there are lots of important unsolved problems that humanity has been working on for multiple lifetimes without success. I am literally willing to bet against the success of anyone who believes in Altman's quote and works on deciding if P=NP, finding a polynomial time algorithm for integer factorization, or similar problems.
- It gives zero weight to opportunity cost. If the person wasn't banging their head against whatever they were working on, they could probably switch to a better problem. Recognizing this, Silicon Valley simultaneously glorifies "Not Giving Up", and "The Pivot". One explanation for this apparent contradiction is that the true work that SV wants people to not give up on is "generating returns for investors."
- In general, it is suspicious that Altman's advice aligns so perfectly with the behavior you would want if you were an angel or VC. That is, you would want the team to work as hard as possible to generate a return without giving up, ignoring opportunity costs, while the investor maintains the option to continue to invest or not. Note that no investor would say, "I will invest as much money as necessary into this startup until it works, and no matter what the challenges are we will figure out how to raise more money for them."
- A rationalist approach would evaluate the likelihood of overcoming known challenges, the likelihood that an unknown challenge would cause a failure, the expected value of the venture, and the opportunity costs, and then periodically re-evaluate to decide whether to give up or not. Altman's advice to explicitly not do this is self-deceptive, magical thinking.
The Moneyball story would be a good example of this. Essentially all of sports dismissed the quantitative approach until the A's started winning with it in 2002. Now quantitative management has spread to other sports like basketball, soccer, etc.
You could make a similar case for quantitative asset management. Pairs trading, one of the most basic kinds of quantitative trading, was discovered in the early 1980s (claims differ as to whether it was Paul Wilmott, Bamberger & Tartaglia at Morgan Stanley, or someone else). While the computing power to make this kind of trading easy certainly became more widely available starting in the 80s, nothing would have prevented someone from investing sooner in the research required for this style of trading. (Instead of, for instance, sending their analysts to become registered pseudoscientists.)
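For anyone who hasn't seen it, a toy version of pairs trading fits in a few lines. This uses synthetic cointegrated prices and illustrative entry thresholds; real implementations use rolling estimates, cointegration tests, and transaction costs:

```python
# Sketch: trade the z-score of the spread between two series driven by a
# common factor. All data and thresholds here are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
common = np.cumsum(rng.normal(size=n))           # shared random-walk driver
a = 100 + common + rng.normal(scale=0.5, size=n)
b = 100 + common + rng.normal(scale=0.5, size=n)

spread = a - b
z = (spread - spread.mean()) / spread.std()      # in practice, use a rolling window

# Short the spread (sell a, buy b) when it's rich; buy it when it's cheap.
position = np.where(z > 2, -1, np.where(z < -2, 1, 0))

# Daily P&L: yesterday's position times today's change in the spread.
pnl = position[:-1] * np.diff(spread)
print("total P&L:", pnl.sum())
```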
Yeah, someone else suggested a novel nootropic drug as one answer - online education is basically an alternative form of that drug that is easier to realize (or at least, it's hard in a very different way).
...there are somewhere between six and ten billion people. At any given time, most of them are making mud bricks or field-stripping their AK-47s. - Neal Stephenson, Snow Crash
When we think of new technologies, we typically think of expensive, high-tech innovations, like energy production, robotics, etc. I would suggest that broader adoption of existing technologies, including social technologies, would have a bigger global impact.
For example, one technology that could dramatically impact GDP is improved managerial technology. This paper describes a study of this in India. Among the findings in the paper (or in references that it cites):
- 100% productivity spreads between the 10th and 90th percentile in US commodity-producing firms.
- A ratio of the 90th to the 10th percentile of total factor productivity of 5.0 in Indian firms and 4.9 in Chinese firms (the computation is sketched after this list).
- After improving management in the studied firms: "We estimate that within the first year productivity increased by 17%; based on these changes we impute that annual profitability increased by over $300,000. These better-managed firms also appeared to grow faster, with suggestive evidence that better management allowed them to delegate more and open more production plants in the three years following the start of the experiment"
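To unpack that dispersion statistic: a 90/10 ratio is just the 90th percentile of firm-level TFP divided by the 10th. Here's the computation on made-up lognormal productivity draws, with the spread tuned so the ratio lands near the paper's 5.0:

```python
# Illustrative only: synthetic firm-level TFP, not data from the paper.
import numpy as np

rng = np.random.default_rng(2)
tfp = rng.lognormal(mean=0.0, sigma=0.63, size=10_000)  # toy firm-level TFP

p10, p90 = np.percentile(tfp, [10, 90])
print("90/10 ratio:", p90 / p10)  # ~5 by construction
```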
FWIW, world GDP growth rates have, if anything, been decreasing over the last ~80 years.
I don't have any immediate ideas on long positions - the AI winter isn't AI failing per se, right? It's just that we stop making progress so we're stuck where we are.
Maybe something like Doordash? They filed for an IPO recently, and if you think autonomous robots aren't going to drive down the cost of logistics then last-mile logistics companies might be underpriced. I have much less confidence in this kind of trade though.
You can short some AI ETFs. https://etfdb.com/themes/artificial-intelligence-etfs/ has a list, although some of those are obviously miscategorized - check the holdings to see how much you agree that they're representative.
You're left with market risk (i.e., beta) when you do this, but if you have a diversified portfolio you're probably okay with not putting on an additional specific hedge. That is, if you're right and the whole market rallies (but your ETF rallies less), you'll be okay.
If you want to be more tactical, I would look at companies that are AI-exposed and have insane P/Es. You mention Nvidia having gaming hardware, but NVDA's P/E is something like 135.92 right now, which prices in huge levels of growth. Compare that to 2016, when their P/E was 20-30. An AI winter would collapse the expected growth rate, leading to a corresponding drop in stock price. If you're not convinced on NVDA, you can make a similar case for other companies whose growth narrative is driven by AI.
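To see why the multiple matters so much, here's the back-of-envelope version of that trade thesis (all numbers hypothetical, roughly echoing the figures above):

```python
# Multiple compression: price = EPS * P/E, so if earnings are flat but the
# market re-rates the multiple, the stock falls proportionally.
eps = 4.0                          # hypothetical earnings per share
pe_now, pe_winter = 135.0, 30.0    # growth multiple vs. a re-rated one

price_now = eps * pe_now           # 540.0
price_winter = eps * pe_winter     # 120.0
print(f"implied drawdown: {1 - price_winter / price_now:.0%}")  # ~78%
```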
Finally, you should ideally have a view on when your thesis is going to play out or what the catalyst will be. Remember that during the dotcom boom/bust, "everyone" agreed that the market was nuts, but it kept going up for quite a while. And of course you should think about how to size your position and how to manage your risk while you have the position on. As the saying goes, the market can stay irrational longer than you can stay solvent.
A meeting quality score, as described in the patent referenced in this article (https://www.geekwire.com/2020/microsoft-patents-technology-score-meetings-using-body-language-facial-expressions-data/).
Some additional ideas: there's a large variety of "loss functions" used in machine learning to score the quality of solutions. Some of the most popular are listed below, and a few are sketched in numpy after the list. A good overview is at https://medium.com/udacity-pytorch-challengers/a-brief-overview-of-loss-functions-in-pytorch-c0ddb78068f7
* Mean Absolute Error (a.k.a. L1 loss)
* Mean squared error
* Negative log-likelihood
* Hinge loss
* KL divergence
* BLEU loss for machine translation (https://www.wikiwand.com/en/BLEU)
There's also a large set of "goodness of fit" measures that evaluate the quality of a model, including simple things like r^2 but also more exotic tests to do things like compare distributions. Wikipedia again has a good overview (https://www.wikiwand.com/en/Goodness_of_fit)
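For concreteness, here are a few of the above measures computed in plain numpy (the arrays are toy data):

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.5, 3.7])

mae = np.abs(y_true - y_pred).mean()      # Mean Absolute Error (L1 loss)
mse = ((y_true - y_pred) ** 2).mean()     # Mean Squared Error

p = np.array([0.7, 0.2, 0.1])             # model's predicted class probabilities
nll = -np.log(p[0])                       # negative log-likelihood if class 0 is true

q = np.array([0.5, 0.3, 0.2])             # reference distribution for comparison
kl = np.sum(p * np.log(p / q))            # KL divergence D(p || q)

ss_res = ((y_true - y_pred) ** 2).sum()
ss_tot = ((y_true - y_true.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot                  # r^2 goodness of fit

print(f"MAE={mae:.3f} MSE={mse:.3f} NLL={nll:.3f} KL={kl:.3f} r^2={r2:.3f}")
```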
Microsoft TrueSkill (a multiplayer Elo-like rating system, https://www.wikiwand.com/en/TrueSkill)
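TrueSkill itself is a full Bayesian model, but its simpler ancestor Elo shows the flavor in a few lines: ratings move toward observed results, faster when the result is surprising. K here is an illustrative update-speed constant:

```python
# Standard Elo update (not TrueSkill): the expected score uses a logistic
# curve on the rating difference, and the winner takes points from the loser.
def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))  # P(A beats B)
    score_a = 1.0 if a_won else 0.0
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

print(elo_update(1500, 1500, a_won=True))  # (1516.0, 1484.0)
```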
I originally read this EA as "Evolutionary Algorithms" rather than "Effective Altruism", which made me think of this paper on degenerate solutions to evolutionary algorithms (https://arxiv.org/pdf/1803.03453v1.pdf). One amusing example is shown in a video at https://twitter.com/jeffclune/status/973605950266331138?s=20