Posts
Comments
I'm planning to donate $1000[1] but not until next year (for tax purposes). If there were a way to pledge that, I would.
- ^
I'm committing to donating $1000 but it might end up being more when I actually think through all of the donations I plan to do next year.
I showed up and some other people were in the room :(
I'm finishing up packing but won't make it there until 2:15 or so.
Haha, well that dosage probably would cause weight loss.
All of the sources I can find give the density as exactly 4 oz = 1/2 cup, although maybe this is just an approximation that's infecting other data sources?
https://www.wolframalpha.com/input?i=density+of+butter+*+(1%2F2+cup)+in+ounces
But 1/2 cup of butter weighs 4 ounces according to every source I can find: https://www.wolframalpha.com/input?i=density+of+butter+*+(1%2F2+cup)+in+ounces
Which means a 4 ounce stick of butter is 1/2 cup by volume.
It sounds like 1/2 cup of butter (8 tbsp) weighs 4 oz, so shouldn't this actually work out so that each of those sections is 1 tbsp in volume, and it's just a coincidence (or not) that the density of butter is 1 oz per fl oz (i.e. 1 oz per 2 tbsp)?
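For what it's worth, the arithmetic works out; here's a quick sanity check using the exact US customary volume definitions (the only measured input is the "4 oz per 1/2 cup" figure from the sources above):

```python
# US customary volume definitions (exact): 1 cup = 16 tbsp = 8 fl oz.
TBSP_PER_CUP = 16
FL_OZ_PER_CUP = 8

stick_weight_oz = 4.0    # a standard stick of butter weighs 4 oz...
stick_volume_cups = 0.5  # ...and is 1/2 cup by volume

weight_per_tbsp = stick_weight_oz / (stick_volume_cups * TBSP_PER_CUP)
density = stick_weight_oz / (stick_volume_cups * FL_OZ_PER_CUP)

print(weight_per_tbsp)  # 0.5 oz per tbsp: each of the 8 sections is 1 tbsp
print(density)          # 1.0 oz per fl oz (equivalently, 1 oz per 2 tbsp)
```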
The problem is that lack of money isn't the reason there's not enough housing in places that people want to live. Zoning laws intentionally exclude poor people because rich people don't want to live near them. Allocating more money to the problem doesn't really help (see: the ridiculous amount of money California spends on affordable housing), and if you fixed the part where it's illegal, the government spending isn't necessary because real estate developers would build apartments without subsidies if they were allowed to.
Also, the most recent election shows that ordinary people really, really don't like inflation, so I don't think printing trillions of dollars for this purpose is actually more palatable.
You're right, I was taking the section saying "In this new system, the only incentive to do more and go further is to transcend the status quo in some way, and earn recognition for a unique contribution." too seriously. On a second re-read, it seems like your proposal is actually just to print money to give people food stamps and housing vouchers. I think the answer to why we don't do that is that we do that.
Food is essentially a solved problem in the United States, and the biggest problem with housing vouchers is that there physically isn't enough housing in some areas. Printing more money doesn't cause more housing to exist (it could change incentives, but incentives don't matter much when building housing for poor people is largely illegal).
I think you've re-invented Communism. The reason we don't implement it is that in practice it's much worse for everyone, including poor people.
I'll try to make it but I might be moving that day so I'm not sure :\
Finally, note to self, probably still don’t use SQLite if you have a good alternative? Twice is suspicious, although they did fix the bug same day and it wasn’t ever released.
But is this because SQLite is unusually buggy, or because its code is unusually open, short and readable and thus understandable by an AI? I would guess that MySQL (for example) has significantly worse vulnerabilities but they're harder to find.
I don't know anything about you in particular, but if you know alignment researchers who would recommend you, could you get them to refer you either internally or through their contacts?
This is actually why a short position (a complicated loan) would theoretically work. If we all die, then you, as someone else's counterparty, never need to pay your loan back.
(I think this is a bad idea, but not because of counterparty risk)
I think the idea is that short position pays off up-front, and then you don't need to worry about the loan if everyone's dead.
If by "paying off" you mean this bet actually working, though, I think you're right. It seems more likely that the stock market would go up in the short term, forcing you to cover at a higher price and lose a bunch of money. And if the market stays flat, you'll still lose money on interest payments unless doom is coming this year.
I'll be out of town (getting married on the 25th) but I'd be happy to do something the weekend after.
I don't think this is actually the rule by common practice (and not all bad things should be illegal). For example, if one of your friends/associates says something that you think is stupid, going around telling everyone they said something stupid would generally be seen as rude. It would also be seen as crazy to overhear someone saying something negative about their job and then go out of your way to tell their boss.
In both cases there would be exceptions, like if the person's boss is your friend, or safety reasons like you mentioned, but I think by default sharing negative information about people is seen as bad, even if it's sometimes considered low-levels of bad (like with gossip).
I also agree with this to some extent. Journalists should be most concerned about their readers, not their sources. They should care about accurately quoting their sources because misquoting does a disservice to their readers, and they should care about privacy most of the time because having access to sources is important to providing the service to their readers.
I guess this post is from the perspective of being a source, so "journalists are out to get you" is probably the right attitude to take, but it's good actually for journalists to prioritize their readers over sources.
The convenient thing about journalism is that the problems we're worried about here are public, so you don't need to trust the list creators as much as you would in other situations. This is why I suggest giving links to the articles, so anyone reading the list can verify for themselves that the article commits whichever sin it's accused of.
The trickier case would be protecting against the accusers lying (i.e. tell journalist A something bad and then claim that they made it up). If you have decent verification of accusers' identities you might still get a good enough signal to noise ratio, especially if you include positive 'reviews'.
I largely agree with this article but I feel like it won't really change anyone's behavior. Journalists act the way they do because that's what they're rewarded for. And if your heuristic is that all journalists are untrustworthy, it makes it hard for trustworthy journalists to get any benefit from that.
A more effective way to change behavior might be to make a public list of journalists who are or aren't trustworthy, with specific information about why ("In [insert URL here], Journalist A asked me for a quote and I said X, but they implied inaccurately that I believe Y" "In [insert URL here], Journalist B thought that I believe P but after I explained that I actually believe Q, they accurately reflected that in the article", or just boring ones like "I said X and they accurately quoted me as saying X", etc.).
It would be very surprising to me if such ambitious people wanted to leave right before they had a chance to make history though.
They can't do that since it would make it obvious to the target that they should counter-attack.
As an update: Too much psyllium makes me feel uncomfortably full, so I imagine that's part of the weight loss effect of 5 grams of it per meal. I did some experimentation but ended up sticking with 1 gram per meal or snack, in 500 mg capsules and taken with water.
I carry 8 of these pills (enough for 4 meals/snacks) in my pocket in small flat pill organizers.
It's still too early to assess the impact on cholesterol but this helps with my digestive issues, and it seems to help me not overeat delicious foods to the same extent (i.e. on a day where I previously would have eaten 4 slices of pizza for lunch, I find it easy to eat 2 slices + psyllium instead).
Biden and Harris have credibly committed to help Taiwan. Trump appears much more isolationist and less likely to intervene, which might make China more likely to invade.
I personally think it's good for us to protect friendly countries like this, but isn't China invading Taiwan good for AI risk, since destroying the main source of advanced chips would slow down timelines?
You also mention Trump's anti-democratic tendencies, which seem bad for standard reasons, but not really relevant to AI existential risk (except to the extent that he might stay in power and continue making bad decisions 4+ years out).
I think it's important that AIs will be created within an existing system of law and property rights. Unlike animals, they'll be able to communicate with us and make contracts. It therefore seems perfectly plausible for AIs to simply get rich within the system we have already established, and make productive compromises, rather than violently overthrowing the system itself.
I think you disagree with Eliezer on a different crux (whether the alignment problem is easy). If we could create AIs that follow the existing system of law and property rights (including the intent of the laws: not exploiting loopholes, not maliciously complying, not trying to get the law changed, etc.), that would be a solution to the alignment problem. The problem is that we don't know how to do that.
I think trying to be Superman is the problem, but I'm ok if that line of thinking doesn't work for you.
Do you mean in the sense that people who aren't Superman should stop beating themselves up about it (a real problem in EA), or that even if you are (financial) Superman, born in the red-white-and-blue light of a distant star, you shouldn't save people in other countries because that's bad somehow?
The argument using Bernard Arnault doesn't really work. He (probably) won't give you $77 because if he gave everyone $77, he'd spend a very large portion of his wealth. But we don't need an AI to give us billions of Earths. Just one would be sufficient. Bernard Arnault would probably be willing to spend $77 to prevent the extinction of a (non-threatening) alien species.
(This is not a general-purpose argument against worrying about AI or other similar arguments in the same vein, I just don't think this particular argument in the specific way it was written in this post works)
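To put rough numbers on the first point (the population and net-worth figures are round estimates of mine, not from the post):

```python
# Back-of-the-envelope: why "he won't give *everyone* $77" doesn't imply
# "he won't spend $77 once". All figures are rough, round assumptions.
population = 8_000_000_000            # approximate world population
gift_per_person = 77                  # dollars
net_worth = 200_000_000_000           # order-of-magnitude estimate for Arnault

total_cost = population * gift_per_person
print(total_cost)                     # $616B: roughly 3x his entire fortune
print(total_cost / net_worth)
```

So giving everyone $77 would cost a multiple of his whole fortune, while a single $77 expenditure (one Earth, in the analogy) is a rounding error.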
I'm only vaguely connected to EA in the sense of donating more-than-usual amounts of money in effective ways (❤️ GiveDirectly), but this feels like a strawman. I don't think the average EA would recommend charities that hurt other people as side effects, work actively-harmful jobs to make money[1], or generally Utilitarian-maxxing.
The EA trolley problem is that there are thousands (or millions) of trolleys of varying difficulty to stop, barreling toward varying groups of people. The problem isn't that stopping them hurts other people (it doesn't); it's that you can't stop them all. You don't need to be a utilitarian to think that if it's raining planes, Superman should start by catching the 747s.
- ^
For example, high-paying finance jobs are high-stress and many people don't like working them, but they're not actually bad for the world.
One listed idea was a website where you can buy reservations directly from the restaurant, with the price acting as a down payment. The example given was $1,000 for a table for two at Carbone, with others being somewhat less. As is pointed out, that fixes the incentives for booking, but once you show up you are now in all-you-can-eat mode at a place not designed for that.
I've been to several restaurants that do some form of this, from a small booking fee that gets refunded when you check in, to just paying entirely up-front (for restaurants with pre-set menus).
This is built into OpenTable so it's not even that hard. I'm really confused why more restaurants don't do this.
I'm not a video creator, but I wonder if this could be turned into a useful tool that takes the stills from a video and predicts which ones will get the highest engagement.
Also if anyone's interested in the other meetups I mentioned, there's:
- The Millenial Social Club meetup group plays board games every Friday in the food court in Lincoln Tower South in Bellevue. The group is always huge (30+ people). It looks like they started doing it on Sundays recently too. https://meetu.ps/e/Ns6hh/blHm6/i
- There's a Seattle Rationalists reading group that meets on Mondays in Seattle. https://meetu.ps/e/NrycV/blHm6/I
- Seattle Effective Altruists occasionally has social meetups in Redmond but I don't know when the next will be: https://meetu.ps/e/Ns3Gt/blHm6/I
If anyone finds any other social rationalist-adjacent meetups on the east side I'd love to know, since I'm not really into book clubs and getting into Seattle is too hard after work.
In case anyone's wondering, the lights I talked about were these:
I have 8 of the 4 ft 5000K version (they're cheaper in 4-packs). I have them plugged into a switched outlet and daisy-chained together, and they're attached at the top of the wall to make it look like light is coming down from all around. They're tedious to set up but worth it in my opinion.
I like the 5000K version but some people might prefer warmer light like 4000K (or 6500K if you really like blue). https://www.waveformlighting.com/home-residential/which-led-light-color-temperature-should-i-choose
There are probably cheaper, similarly good lights available, but Waveform's marketing materials worked on me: https://www.waveformlighting.com/high-cri-led
I have a severely ‘unbalanced’ portfolio of assets for this reason, and even getting rid of the step-up on death would not change that in many cases.
What would be the point of not realizing gains indefinitely if we got rid of the step-up on death?
I don't enjoy PT or exercise, but mostly because it's boring / feels like a waste of time. My peanut-butter trick is to do things that involve exercise but where the purpose isn't strictly exercise, or where I get some other benefit:
- Biking to work every day takes me about the same amount of time as driving and is more fun. Hills weren't fun so I got an e-bike and with sufficient assist they became fun again. As I get more in shape, I find myself turning the assist down because I don't really need it.
- Biking to restaurants and bars is also fun.
- I like going on walks with friends and talking, so why not do that while walking up a mountain?
- I joined a casual dodgeball league for fun and meeting people, and as a side effect do the cardio equivalent of two hours of jogging every Sunday.
- Indoor rock climbing feels a little bit like exercise, but it's also a group activity that involves a lot of downtime just talking.
(I've yet to find a good way to mix my shoulder PT into anything fun, so I just keep exercise bands at my desk at work)
It would be expensive, but it's not a hard constraint. OpenAI could almost certainly raise another $600M per year if they wanted to (they're allegedly already losing $5B per year now).
Also the post only suggests this pay structure for a subset of employees.
For companies that are doing well, money isn't a hard constraint. Founders would rather pay in equity because it's cheaper than cash[1], but they can sell additional equity and pay cash if they really want to.
- ^
Because they usually give their employees a bad deal.
A century ago, it was predicted that by now, people would be working under 20 hours a week.
And this prediction was basically correct, but missed the fact that it's more efficient to work 30-40 hours per week while working and then take weeks or decades off when not working.
The extra time has gone to more leisure, less child labor, more schooling, and earlier retirement (plus support for people who can't work at all).
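A rough illustration of what that averaging does (all figures here are my own round assumptions, not data from the thread):

```python
# Lifetime-average weekly hours for a hypothetical modern worker who works
# "full time" while working but starts later and retires earlier.
hours_per_week = 40
work_weeks_per_year = 48      # minus vacation and holidays
working_years = 43            # roughly ages 22-65
adult_years = 62              # roughly ages 18-80

lifetime_hours = hours_per_week * work_weeks_per_year * working_years
avg_weekly = lifetime_hours / (adult_years * 52)
print(round(avg_weekly, 1))   # ~25.6 hours/week averaged over adulthood
```

So even someone who works 40-hour weeks their whole career averages far fewer hours per week over their adult life, which is the sense in which the prediction was roughly on track.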
The Overpopulation FAQs are about overpopulation, not necessarily water scarcity. Water scarcity can contribute to overpopulation, but it is only one of multiple potential causes.
My point is that when LessWrongers see not enough water for a given population, we try to fix the water not the people.
I wrote that EA is mostly misguided because it makes faulty assumptions. And to the contrary, I did praise a few things about EA.
Yes, I read your argument that preventing people from dying of starvation and/or disease is bad:
In some ways, the justification for EA assumes a fallacy of composition since EA believes that people can and should help everyone. [...] To the contrary, I’d argue that a lot of charities that supposedly have the greatest amount of “good” for humanity would contribute to overpopulation, which would negate their benefits in the long run. For example, programs to prevent malaria, provide clean water, and feed starving families in Sub-Saharan Africa would hasten the Earth’s likelihood of becoming overpopulated and exacerbate dysgenics.
So yes, maybe this is my cult programming, but I would rather we do the hard work of supporting a higher population (solar panels, desalination, etc.) than let people starve to death.
I'm partially downvoting this for the standard reason that I want to read actual interesting posts and not posts about "Why doesn't LessWrong like my content? Aren't you a cult if you don't agree with me?".
But I'm also downvoting because I specifically think it's good that LessWrong doesn't have a bunch of posts about how we're going to run out of water(?!) if we don't forcibly sterilize people, or that EA is bad because altruism is bad. Sorry, I just can't escape my cult programming here. Helping people is Good Actually and I'd rather solve resource shortages by making more.
This is an interesting idea, but I found these images and descriptions confusing and not really helpful.
One other thing I didn't think to mention in the post above is that I used to think of fiber as one category, so if I was eating something "high fiber" like vegetables or oats, I wouldn't take psyllium since "I'm already getting fiber", and then I'd feel worse. Since reading this, I'm taking psyllium with my oats and it improved the experience a lot (since the psyllium helps counteract the irritating effects of the insoluble fiber in oats).
I've had the same experience a few times and can confirm that it's not great. At this point I drink a whole glass of water when I take it, and I usually take it with a meal (my theory is that this might mix it up more so even if there's not enough water, it won't be one solid clump).
I'm excited, I've never actually had an ACX meetup in the town I live in.
The food court in Lincoln Square South has been working surprisingly well for a boardgame meetup. I wonder if it would work for this too.
I think you might be living in a highly-motivated, smart, and conscientious tech worker bubble. A lot of people are hard to convince to even show up to work consistently, let alone do things no one is telling them to do. And even if they are self-motivated, you run into the problem of whether their ideas are good or not.
Individual companies can solve this by heavily filtering applicants (and paying enough to attract good ones), but you probably don't want to filter and pay your shelf-stockers like software engineers. Plus if you did it across all of society, you'd leave a lot of your workers permanently unemployed.
One important thing is that you don't have to pick one or the other. I plan to take psyllium for IBS plus eat oats (high in soluble non-fermenting fiber) for the microbiome benefits and improved cholesterol benefits. Both should help with weight loss (in similar ways) and cholesterol (oats will help more because the fiber they contain ferments into substances that also reduce cholesterol, but both will reduce cholesterol via the bile removal method).
Insoluble fiber doesn't help with any problems that I have, and exacerbates my IBS, so I plan to (weakly) avoid it. That said, I will continue eating foods high in insoluble fiber if they're good for me in other ways (oats) or tasty (pineapple), but I'll avoid concentrated forms (wheat bran) and foods high in it that I don't like anyway (whole wheat).
This linked article goes into some options for that: https://toughsf.blogspot.com/2020/07/tethers-all-way.html
- You can use the tether to catch payloads on the way down and boost the tether back up while also reducing the payload's need for heat shielding
- You can use more efficient engines with low thrust/weight ratios to reboost the tether
- There are some propellant-free options that use Earth's magnetic field to reboost the tether in exchange for energy (I'm unsure whether the energy requirements are practical)
If you had a way to catch them, I think you could just throw rocks down the gravity well and catch them for a boost too.
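The boost from catching things comes from plain momentum exchange. Here's a toy one-dimensional sketch; all masses and velocities are numbers I made up for illustration, and real tether dynamics (rotation, orbital mechanics, tension limits) are much messier:

```python
# Toy 1-D model of a tether "catch": treat the tether tip and payload as
# point masses undergoing an instantaneous inelastic collision.
def catch(m_tether, v_tether, m_payload, v_payload):
    """Shared velocity of tether + payload just after the catch."""
    return (m_tether * v_tether + m_payload * v_payload) / (m_tether + m_payload)

# A 100 t tether tip at 7.6 km/s catching a 10 t payload coming in "from
# above" at 8.5 km/s gets boosted; catching a slower suborbital payload
# would instead slow (deboost) the tether.
v_after = catch(100_000, 7600.0, 10_000, 8500.0)
print(v_after)  # ~7682 m/s: the faster incoming payload sped the tether up
```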
Doesn't that just make it even more confusing? I guess we also buy taxis for our groceries, but the overhead is much lower when you're buying hundreds of dollars worth of groceries instead of a $10 burrito. Plus, these prices all tracked each other from 2000-2010, but Instacart didn't even exist until 2012.
Ah, I misread the quote you included from Nathan Helm-Burger. That does make more sense.
This seems like a good idea in general, and would probably make one of the things Anthropic is trying to do (find the "being truthful" neuron) easier.
I suspect this labeling, and using the labels, is still harder than you think though, since individual tokens don't have truth values.
I looked through the links you posted and it seems like the push-back is mostly around things you didn't mention in this post (prompt engineering as an alignment strategy).
You could probably use an OTP app for this.
- Alice generates a random OTP secret and adds it to her OTP app as "Bob".
- Bob adds the same OTP secret in his app as "Alice"
To confirm the other's identity:
- Alice asks Bob for the code his app is showing under "Alice"
- Alice confirms that her phone is showing the same code under "Bob"
- If Bob wants proof of Alice's identity, he can ask her for the next code to show up
I think this works similarly to your written down sentences, but you'll never run out. It has the same problem in situations where people don't have their stuff though (although your family is probably more likely to have their phone than a random piece of paper).
One piece of complexity is that OTP depends on the time, so if you're sufficiently de-synchronized the numbers won't align perfectly (although if Bob keeps reading off numbers, eventually one of them should show up on Alice's app).
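For anyone curious, the algorithm behind those apps is small enough to sketch. This is a minimal, stdlib-only TOTP implementation per RFC 6238; it's a toy for understanding the protocol, not something to deploy, and the secret below is a made-up example:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, step=30, digits=6):
    """Standard TOTP (RFC 6238), the algorithm authenticator apps run."""
    pad = "=" * (-len(secret_b32) % 8)
    key = base64.b32decode(secret_b32.upper() + pad)
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Alice and Bob each add the same shared secret to their apps; either can
# then challenge the other for the current 6-digit code.
shared = "JBSWY3DPEHPK3PXP"  # example secret, not a real credential
print(totp(shared))
```

Note the `counter = time // step` line: that integer division is exactly why clock de-synchronization matters, and why verifiers typically also accept the codes for the adjacent time windows.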
I can't speak for anyone else, but the reason I'm not more interested in this idea is that I'm not convinced it could actually be done. Right now, big AI companies train on piles of garbage data since it's the only way they can get sufficient volume. The idea that we're going to produce a similar amount of perfectly labeled data doesn't seem plausible.
I don't want to be too negative, because maybe you have an answer to that, but maybe-working-in-theory is only the first step and if there's no visible path to actually doing what you propose, then people will naturally be less excited.