I don't think this is actually the rule by common practice (and not all bad things should be illegal). For example, if one of your friends/associates says something that you think is stupid, going around telling everyone that they said something stupid would generally be seen as rude. It would also be seen as crazy if you overheard someone saying something negative about their job and then went out of your way to tell their boss.
In both cases there would be exceptions, like if the person's boss is your friend, or safety reasons like you mentioned, but I think by default sharing negative information about people is seen as bad, even if it's sometimes considered only a low level of bad (like with gossip).
I also agree with this to some extent. Journalists should be most concerned about their readers, not their sources. They should care about accurately quoting their sources because misquoting does a disservice to their readers, and they should care about privacy most of the time because having access to sources is important to providing the service to their readers.
I guess this post is from the perspective of being a source, so "journalists are out to get you" is probably the right attitude to take, but it's good actually for journalists to prioritize their readers over sources.
The convenient thing about journalism is that the problems we're worried about here are public, so you don't need to trust the list creators as much as you would in other situations. This is why I suggest giving links to the articles, so anyone reading the list can verify for themselves that the article commits whichever sin it's accused of.
The trickier case would be protecting against the accusers lying (e.g. telling journalist A something bad and then claiming that they made it up). If you have decent verification of accusers' identities you might still get a good enough signal-to-noise ratio, especially if you include positive 'reviews'.
I largely agree with this article, but I feel like it won't really change anyone's behavior. Journalists act the way they do because that's what they're rewarded for. And if your heuristic is that all journalists are untrustworthy, it's hard for trustworthy journalists to get any benefit from being trustworthy.
A more effective way to change behavior might be to make a public list of journalists who are or aren't trustworthy, with specific information about why ("In [insert URL here], Journalist A asked me for a quote and I said X, but they inaccurately implied that I believe Y"; "In [insert URL here], Journalist B thought that I believe P, but after I explained that I actually believe Q, they accurately reflected that in the article"; or just boring ones like "I said X and they accurately quoted me as saying X", etc.).
It would be very surprising to me if such ambitious people wanted to leave right before they had a chance to make history though.
They can't do that since it would make it obvious to the target that they should counter-attack.
As an update: Too much psyllium makes me feel uncomfortably full, so I imagine that's part of the weight loss effect of 5 grams of it per meal. I did some experimentation but ended up sticking with 1 gram per meal or snack, in 500 mg capsules taken with water.
I carry 8 of these pills (enough for 4 meals/snacks) in my pocket in small flat pill organizers.
It's still too early to assess the impact on cholesterol but this helps with my digestive issues, and it seems to help me not overeat delicious foods to the same extent (i.e. on a day where I previously would have eaten 4 slices of pizza for lunch, I find it easy to eat 2 slices + psyllium instead).
Biden and Harris have credibly committed to help Taiwan. Trump appears much more isolationist and less likely to intervene, which might make China more likely to invade.
I personally think it's good for us to protect friendly countries like this, but isn't China invading Taiwan good for AI risk, since destroying the main source of advanced chips would slow down timelines?
You also mention Trump's anti-democratic tendencies, which seem bad for standard reasons, but not really relevant to AI existential risk (except to the extent that he might stay in power and continue making bad decisions 4+ years out).
> I think it's important that AIs will be created within an existing system of law and property rights. Unlike animals, they'll be able to communicate with us and make contracts. It therefore seems perfectly plausible for AIs to simply get rich within the system we have already established, and make productive compromises, rather than violently overthrowing the system itself.
I think you disagree with Eliezer on a different crux (whether the alignment problem is easy). If we could create AIs that follow the existing system of law and property rights (including following the intent of the laws, not exploiting loopholes, not maliciously complying with laws, not trying to get the laws changed, etc.), then that would be a solution to the alignment problem. But the problem is that we don't know how to do that.
I think trying to be Superman is the problem, but I'm ok if that line of thinking doesn't work for you.
Do you mean in the sense that people who aren't Superman should stop beating themselves up about it (a real problem in EA), or that even if you are (financial) Superman, born in the red-white-and-blue light of a distant star, you shouldn't save people in other countries because that's bad somehow?
The argument using Bernard Arnault doesn't really work. He (probably) won't give you $77 because if he gave everyone $77, he'd spend a very large portion of his wealth (roughly $600 billion across 8 billion people, several times his net worth). But we don't need an AI to give us billions of Earths. Just one would be sufficient. Bernard Arnault would probably be willing to spend $77 to prevent the extinction of a (non-threatening) alien species.
(This is not a general-purpose argument against worrying about AI or other similar arguments in the same vein; I just don't think this particular argument, as written in this post, works.)
I'm only vaguely connected to EA in the sense of donating more-than-usual amounts of money in effective ways (❤️ GiveDirectly), but this feels like a strawman. I don't think the average EA would recommend charities that hurt other people as side effects, work actively-harmful jobs to make money[1], or generally Utilitarian-maxxing.
The EA trolley problem is that there are thousands (or millions) of trolleys, each with varying difficulty of stopping, barreling toward varying groups of people. The problem isn't that stopping them hurts other people (it doesn't), it's just that you can't stop them all. You don't need to be a utilitarian to think that if it's raining planes, Superman should start by catching the 747s.
[1] For example, high-paying finance jobs are high-stress and many people don't like working them, but they're not actually bad for the world.
One listed idea was that you can buy reservations at one website directly from the restaurant, with the price counting as a down payment. The example given was $1,000 for a table for two at Carbone, with others being somewhat less. As is pointed out, that fixes the incentives for booking, but once you show up you are now in all-you-can-eat mode at a place not designed for that.
I've been to several restaurants that do some form of this, from a small booking fee that gets refunded when you check in, to just paying entirely up-front (for restaurants with pre-set menus).
This is built into OpenTable so it's not even that hard. I'm really confused why more restaurants don't do this.
I'm not a video creator, but I wonder if this could be turned into a useful tool that takes the stills from a video and predicts which ones will get the highest engagement.
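Here's a rough sketch of what such a tool could look like (frame extraction uses OpenCV; `engagement_score` and the `video.mp4` path are hypothetical placeholders, since a real version would need a model trained on thumbnail/engagement data):

```python
import cv2  # pip install opencv-python

def extract_stills(video_path: str, every_n_seconds: float = 2.0):
    """Yield (timestamp_seconds, frame) pairs sampled from the video."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(fps * every_n_seconds))
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            yield index / fps, frame
        index += 1
    cap.release()

def engagement_score(frame) -> float:
    """Placeholder: a real tool would score frames with a model trained
    on thumbnail -> engagement data. This just rewards high contrast."""
    return float(frame.std())

timestamp, best_frame = max(extract_stills("video.mp4"),
                            key=lambda pair: engagement_score(pair[1]))
print(f"Best thumbnail candidate is at {timestamp:.1f}s")
```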
Also if anyone's interested in the other meetups I mentioned, there's:
- The Millenial Social Club meetup group plays board games every Friday in the food court in Lincoln Tower South in Bellevue. The group is always huge (30+ people). It looks like they started doing it on Sundays recently too. https://meetu.ps/e/Ns6hh/blHm6/i
- There's a Seattle Rationalists reading group that meets on Mondays in Seattle. https://meetu.ps/e/NrycV/blHm6/I
- Seattle Effective Altruists occasionally has social meetups in Redmond but I don't know when the next will be: https://meetu.ps/e/Ns3Gt/blHm6/I
If anyone finds any other social rationalist-adjacent meetups on the east side I'd love to know, since I'm not really into book clubs and getting into Seattle is too hard after work.
In case anyone's wondering, the lights I talked about were these:
I have 8 of the 4 ft 5000K version (they're cheaper in 4-packs). I have them plugged into a switched outlet and daisy-chained together, and they're attached at the top of the wall to make it look like light is coming down from all around. They're tedious to set up but worth it in my opinion.
I like the 5000K version but some people might prefer warmer light like 4000K (or 6500K if you really like blue). https://www.waveformlighting.com/home-residential/which-led-light-color-temperature-should-i-choose
There are probably cheaper, similarly good lights available, but Waveform's marketing materials worked on me: https://www.waveformlighting.com/high-cri-led
> I have a severely ‘unbalanced’ portfolio of assets for this reason, and even getting rid of the step-up on death would not change that in many cases.
What would be the point of not realizing gains indefinitely if we got rid of the step-up on death?
I don't enjoy PT or exercise, but mostly because it's boring / feels like a waste of time. My peanut butter is to do things that involve exercise but where the purpose isn't strictly exercise, or where I get some other benefit:
- Biking to work every day takes me about the same amount of time as driving and is more fun. Hills weren't fun so I got an e-bike and with sufficient assist they became fun again. As I get more in shape, I find myself turning the assist down because I don't really need it.
- Biking to restaurants and bars is also fun.
- I like going on walks with friends and talking, so why not do that while walking up a mountain?
- I joined a casual dodgeball league for fun and meeting people, and as a side effect do the cardio equivalent of two hours of jogging every Sunday.
- Indoor rock climbing feels a little bit like exercise, but it's also a group activity that involves a lot of downtime just talking.
(I've yet to find a good way to mix my shoulder PT into anything fun, so I just keep exercise bands at my desk at work)
It would be expensive, but it's not a hard constraint. OpenAI could almost certainly raise another $600M per year if they wanted to (they're allegedly already losing $5B per year now).
Also the post only suggests this pay structure for a subset of employees.
For companies that are doing well, money isn't a hard constraint. Founders would rather pay in equity because it's cheaper than cash[1], but they can sell additional equity and pay cash if they really want to.
[1] Because they usually give their employees a bad deal.
A century ago, it was predicted that by now, people would be working under 20 hours a week.
And this prediction was basically correct, but missed the fact that it's more efficient to work 30-40 hours per week while working and then take weeks or decades off when not working.
The extra time has gone to more leisure, less child labor, more schooling, and earlier retirement (plus support for people who can't work at all).
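As a back-of-the-envelope check (illustrative numbers, not from the original prediction):

```python
hours_per_week = 38   # while actually employed
weeks_per_year = 47   # minus vacation and holidays
working_years = 42    # roughly ages 23 to 65
life_years = 80

lifetime_hours = hours_per_week * weeks_per_year * working_years
average = lifetime_hours / (life_years * 52)
print(f"{average:.1f} hours/week averaged over a lifetime")  # ~18 hours/week
```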
> The Overpopulation FAQs is about overpopulation, not necessarily water scarcity. Water scarcity can contribute to overpopulation, but it is only one of multiple potential causes.
My point is that when LessWrongers see not enough water for a given population, we try to fix the water, not the people.
> I wrote that EA is mostly misguided because it makes faulty assumptions. And to the contrary, I did praise a few things about EA.
Yes, I read your argument that preventing people from dying of starvation and/or disease is bad:
> In some ways, the justification for EA assumes a fallacy of composition since EA believes that people can and should help everyone. [...] To the contrary, I’d argue that a lot of charities that supposedly have the greatest amount of “good” for humanity would contribute to overpopulation, which would negate their benefits in the long run. For example, programs to prevent malaria, provide clean water, and feed starving families in Sub-Saharan Africa would hasten the Earth’s likelihood of becoming overpopulated and exacerbate dysgenics.
So yes, maybe this is my cult programming, but I would rather we do the hard work of supporting a higher population (solar panels, desalination, etc.) than let people starve to death.
I'm partially downvoting this for the standard reason that I want to read actual interesting posts and not posts about "Why doesn't LessWrong like my content? Aren't you a cult if you don't agree with me?".
But I'm also downvoting because I specifically think it's good that LessWrong doesn't have a bunch of posts about how we're going to run out of water(?!) if we don't forcibly sterilize people, or that EA is bad because altruism is bad. Sorry, I just can't escape my cult programming here. Helping people is Good Actually and I'd rather solve resource shortages by making more.
This is an interesting idea, but I found these images and descriptions confusing and not really helpful.
One other thing I didn't think to mention in the post above is that I used to think of fiber as one category, so if I was eating something "high fiber" like vegetables or oats, I wouldn't take psyllium since "I'm already getting fiber", and then I'd feel worse. Since reading this, I've been taking psyllium with my oats and it has improved the experience a lot (since the psyllium helps counteract the irritating effects of the insoluble fiber in oats).
I've had the same experience a few times and can confirm that it's not great. At this point I drink a whole glass of water when I take it, and I usually take it with a meal (my theory is that this might mix it up more so even if there's not enough water, it won't be one solid clump).
I'm excited; I've never actually had an ACX meetup in the town I live in.
The food court in Lincoln Square South has been working surprisingly well for a boardgame meetup. I wonder if it would work for this too.
I think you might be living in a bubble of highly-motivated, smart, and conscientious tech workers. A lot of people are hard to convince to even show up to work consistently, let alone do things no one is telling them to do. And then even if they are self-motivated, you run into problems with whether their ideas are good or not.
Individual companies can solve this by heavily filtering applicants (and paying enough to attract good ones), but you probably don't want to filter and pay your shelf-stockers like software engineers. Plus, if you did it across all of society, you'd leave a lot of your workers permanently unemployed.
One important thing is that you don't have to pick one or the other. I plan to take psyllium for IBS plus eat oats (high in soluble fermentable fiber) for the microbiome and cholesterol benefits. Both should help with weight loss (in similar ways) and cholesterol (oats will help more because the fiber they contain ferments into substances that also reduce cholesterol, but both will reduce cholesterol via the bile-removal mechanism).
Insoluble fiber doesn't help with any problems that I have, and it exacerbates my IBS, so I plan to (weakly) avoid it: I'll continue eating foods high in insoluble fiber if they're good for me in other ways (oats) or tasty (pineapple), but I'll avoid concentrated forms (wheat bran) and foods high in it that I don't like anyway (whole wheat).
This linked article goes into some options for that: https://toughsf.blogspot.com/2020/07/tethers-all-way.html
- You can use the tether to catch payloads on the way down and boost the tether back up while also reducing the payload's need for heat shielding
- You can use more efficient engines with low thrust/weight ratios to reboost the tether
- There are some propellant-free options that use the Earth's magnetic field to reboost the tether in exchange for electrical energy (I'm unsure whether the energy requirements are practical or not)
If you had a way to catch them, I think you could just throw rocks down the gravity well and catch them for a boost too.
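As a sanity check on that last point, here's a toy momentum-exchange calculation (all numbers invented): catching anything that arrives faster than the tether, whether a rock thrown down the gravity well or a payload descending from a higher orbit, speeds the tether up.

```python
m_tether = 100_000.0   # kg: tether plus counterweight
m_payload = 1_000.0    # kg: incoming payload (or rock)
v_tether = 7_700.0     # m/s: tether's orbital speed
v_payload = 8_000.0    # m/s: payload arriving from higher up

# An inelastic "catch" conserves momentum, so the combined system
# ends up slightly faster -- the catch itself reboosts the tether.
v_after = (m_tether * v_tether + m_payload * v_payload) / (m_tether + m_payload)
print(f"Tether speed change: {v_after - v_tether:+.2f} m/s")  # about +2.97 m/s
```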
Doesn't that just make it even more confusing? I guess we also buy taxi rides for our groceries, but the overhead is much lower when you're buying hundreds of dollars worth of groceries instead of a $10 burrito. Plus, these prices all tracked each other from 2000-2010, but Instacart didn't even exist until 2012.
Ah, I misread the quote you included from Nathan Helm-Burger. That does make more sense.
This seems like a good idea in general, and would probably make one of the things Anthropic is trying to do (find the "being truthful" neuron) easier.
I suspect this labeling, and using the labels, is still harder than you think though, since individual tokens don't have truth values.
I looked through the links you posted and it seems like the push-back is mostly around things you didn't mention in this post (prompt engineering as an alignment strategy).
You could probably use an OTP app for this.
- Alice generates a random OTP secret and adds it to her OTP app as "Bob".
- Bob adds the same OTP secret to his app as "Alice".
To confirm the other's identity:
- Alice asks Bob for the code his app is showing under "Alice"
- Alice confirms that her phone is showing the same code under "Bob"
- If Bob wants proof of Alice's identity, he can ask her for the next code to show up
I think this works similarly to your written down sentences, but you'll never run out. It has the same problem in situations where people don't have their stuff though (although your family is probably more likely to have their phone than a random piece of paper).
One piece of complexity is that OTP depends on the time, so if you're sufficiently de-synchronized the numbers won't align perfectly (although if Bob keeps reading off numbers, eventually one of them should show up on Alice's app).
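Here's a minimal sketch of the scheme using the pyotp library (standard TOTP, so it's compatible with ordinary authenticator apps); the `valid_window` parameter handles the desynchronization issue:

```python
import pyotp  # pip install pyotp

# One-time setup, done in person: Alice generates a shared secret and
# both of them add it to their OTP apps (as "Bob" and "Alice" respectively).
secret = pyotp.random_base32()

alice = pyotp.TOTP(secret)  # Alice's copy of the secret
bob = pyotp.TOTP(secret)    # Bob's copy of the same secret

# Later, to verify identity: Bob reads his current code aloud...
code_from_bob = bob.now()

# ...and Alice checks it against her copy. valid_window=1 also accepts
# the adjacent 30-second windows, covering small clock drift between phones.
assert alice.verify(code_from_bob, valid_window=1)
print("Code matches; Bob knows the shared secret.")
```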
I can't speak for anyone else, but the reason I'm not more interested in this idea is that I'm not convinced it could actually be done. Right now, big AI companies train on piles of garbage data since it's the only way they can get sufficient volume. The idea that we're going to produce a similar amount of perfectly labeled data doesn't seem plausible.
I don't want to be too negative, because maybe you have an answer to that, but maybe-working-in-theory is only the first step and if there's no visible path to actually doing what you propose, then people will naturally be less excited.
You're right, and I hadn't thought of that. I think you'd still get the overall effect of a real transfer from richer to poorer people, but the way the tax falls on specific people would be different based on how much money they save and whether they save it in the form of dollars, plus whether they get paid in dollars.
An important piece of this is that shifting the relative distribution of money also shifts the distribution of real resources. So absent legal restrictions, if more people have money they want to spend on housing, you should expect more housing to be built, not just for the existing supply to get more expensive (and in exchange, you should expect less of whatever the people paying for the UBI want produced; regardless of whether they pay via taxes or inflation).
A UBI in the US might cause what you're suggesting, since there tend to be more restrictions on needs than on wants. E.g. no one will stop you from building a superyacht if you want, but there are a lot of artificial barriers to building cheap apartments. So if you shift demand from things rich people want to things poor people want, you might get a lot of the money transferred to the owners of the last few cheap apartments that were allowed to be built.
This seems like more of an argument against that kind of law that outlaws anything rich people don't want, though, not an argument against UBI.
You could also take this further and finance a large UBI by printing money, and this would cause (more) inflation, but if you model it out it ends up doing the same sort of transfer from richer people to poorer people as progressive tax financing (people with more money are "taxed" more by inflation).
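Here's a toy model of that (invented numbers, and it unrealistically assumes everyone's savings are held entirely in dollars, which is the caveat from the earlier comment): printing the UBI acts like a tax proportional to dollar holdings.

```python
holdings = {"poor": 1_000, "middle": 20_000, "rich": 1_000_000}  # dollars held
ubi = 10_000  # paid to each person in newly printed dollars

total_before = sum(holdings.values())
total_after = total_before + ubi * len(holdings)
deflator = total_before / total_after  # every dollar is diluted equally

for person, dollars in holdings.items():
    real_after = (dollars + ubi) * deflator  # in pre-UBI purchasing power
    print(f"{person:>6}: {dollars:>9,} -> {real_after:>9,.0f} ({real_after - dollars:+,.0f})")

# poor and middle come out ahead in real terms; rich pays, in proportion
# to the dollars they hold -- a progressive "inflation tax".
```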
> What should happen if the CNE declares Maduro the winner, but Venezuela's National Assembly refuses to acknowledge Maduro's win and appoints someone else to the presidency? [...] Do we need to wait on resolving the market until we see whether that happens again?
But the market isn't "who will eventually become president", it's "who will win the election (according to official sources)". Like how "who will win the US election (according to AP, Fox, and NBC)" and "who will be president on inauguration day" are different questions.
The standard of "what if the result changes" would make almost any market impossible to resolve. Like what if AP/Fox/NBC call the election for Harris, but then Trump does a coup and threatens them until they announce that actually he won? What if Trump wins but the person who actually gets sworn in is an actor who looks like Trump? Do we need to wait to see if that happens before we resolve the question?
Making most questions not resolve at all is worse than weird edge cases where they resolve in ways people don't like, so I think in the absence of clear rules that the question won't resolve until some standard is met, resolving as soon as possible seems like the best default.
> The primary resolution source for this market will be official information from Venezuela
I'm confused about how this is ambiguous? It's sort of awkward that "official information from Venezuela" and "a consensus of credible reporting" give different answers, but it's clear that the official info is primary.
This is a real effect, and this article gives an example with URLs: https://towardsdatascience.com/the-art-of-prompt-design-prompt-boundaries-and-token-healing-3b2448b0be38
":" and "://" are different tokens in this LLM, so prompting with a URL starting with "http:" gives bad results because it can't use the "://" token.
Although this can be improved with a technique called "token healing" that essentially steps backwards in the prompt and then allows any next token that starts with the same characters in the prompt (i.e. in the "http:" example, it steps backwards to "http" and allows any continuation that starts with ":" in its first token).
Note that this only applies at the level of tokens, so in your example it's true that the next token can't be ": T", but with standard tokenizers, you'll also get a token for every substring of your longer tokens, so it could be just "T". Whether this makes things better or worse depends on which usage was more common/better in the training data.
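For anyone curious, here's a rough sketch of the stepping-backwards part using tiktoken's GPT-2 encoding (the constrained-decoding step against an actual model is omitted):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("gpt2")

prompt = "Go to http:"
tokens = enc.encode(prompt)

# Step backwards: drop the final token (":") and remember its text.
*kept, last = tokens
last_bytes = enc.decode_single_token_bytes(last)

# The model would then be constrained (e.g. via a logit mask) to tokens
# whose text starts with the removed text; "://" is one of them.
allowed = [
    t for t in range(enc.n_vocab)
    if enc.decode_single_token_bytes(t).startswith(last_bytes)
]
print(enc.decode(kept))                    # "Go to http"
print(len(allowed), "allowed continuations")
```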
> At a recent Senate hearing, Vance accused Big Tech companies of overstating risks from AI in order to justify regulations to stifle competition.
Do any Big Tech companies want more AI regulations?
Yes, but testing that people have memorized the appropriate number of LeetCode questions is much easier than testing that they can write something big without going mad :(
I think the Trojan Horse situation is going to be your biggest blocker, regardless of whether it's a real problem or not. At least in the US, anti-immigration talking points tend to focus on the ~~working age~~ military age men immigrating from a friendly country in order to get jobs. I can't imagine how strong the blowback would be if they were literally Russian soldiers.
There's also a repeated-game concern where once you do this, the incentive is for every poor country to invade its neighbors in the hopes of getting its soldiers a cushy retirement and the ability to send remittances.
One practical concern from the other side is that if soldiers start defecting, the Russian government can hold their families hostage. This is likely already sort-of the case but could be done in a more heavy-handed way if necessary.
That said, I think something like this is probably a good idea if you could somehow get past the impossible political situation. US residency alone is worth so much that you might not have to pay soldiers at all (and ~~military age~~ working age immigrants tend to be a net benefit in terms of taxes anyway).
Regarding, "Land Values ARE Property Values", I initially thought LVT advocates were doing a motte-and-bailey (LVT is better because you tax land not improvements; oh actually all improvements are Land), but I think it's more that "land" is defined as everyone else's improvements. So, if I build a casino in the desert, my Land is just land but everyone else's Land around the casino includes proximity to my casino.
I think this causes different problems though, including the one you mention with garbage dumps, especially once you take merging and splitting properties into account.
If Alice and Bob build casinos next to each other in the desert, Alice's casino benefits from being on Land near Bob's casino and Bob's casino benefits from being near Alice's casino, but if Alice buys Bob's casino, she transmutes Bob's casino into an untaxed improvement.
If Bob owns a factory with a toxic waste pool, he can transmute the toxic waste pool into (negative-value) Land by giving it to someone else. Or he can save money on taxes by intentionally creating a toxic waste pool and giving it to someone else.
(And I don't think it's possible to prevent these things without turning your LVT back into a normal property tax)
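A toy model (all numbers invented) of the Alice/Bob merge loophole, where a parcel's "land" value is defined as the spillover from other owners' improvements:

```python
SPILLOVER = 0.10  # fraction of a neighbor's improvement value that
                  # counts as your parcel's "land" value
LVT_RATE = 0.05   # land value tax rate

improvements = {"alice_casino": 1_000_000, "bob_casino": 1_000_000}

def land_value(parcel: str, owner_of: dict) -> float:
    """Land value = spillover from improvements owned by *other* people."""
    me = owner_of[parcel]
    return sum(SPILLOVER * value for p, value in improvements.items()
               if owner_of[p] != me)

separate = {"alice_casino": "alice", "bob_casino": "bob"}
merged = {"alice_casino": "alice", "bob_casino": "alice"}  # Alice buys Bob out

for label, owners in [("separate", separate), ("merged", merged)]:
    tax = sum(LVT_RATE * land_value(p, owners) for p in improvements)
    print(f"{label}: total LVT = {tax:,.0f}")
# separate: each parcel's land value is 100,000 -> total LVT 10,000
# merged:   all spillover is now Alice's own improvement -> total LVT 0
```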
Yes, but my point is just that you can bring more people in proximity to NYC without more land, either by making transportation faster/better (make the desirable part of NYC bigger) or by building in 3D.
But my argument is about literal land and floor space for living. Literal land is fixed but it doesn't matter because floor space for living isn't (or is fixed at a level way higher than we're likely to reach any time soon).
You might find this blog series interesting since it covers similar things and finds similar results from a different direction: https://breakingthemarket.com/the-most-misunderstood-force-in-the-universe/
I find it suspicious that they randomly stopped posting about their portfolio returns at some point though.
Land is only a raw input though. What matters to people is something like usable floor space, which isn't fixed. One way to increase usable floor space is to build further from population centers (in which case you're using more land, but the main cost is building infrastructure and reduced productivity), but another way is to build on top of or below existing buildings, which doesn't require additional land at all (although it has other costs).
That's not really true either. The average house size in the US is getting larger while the average number of people living in each one is getting smaller:
https://www.thezebra.com/resources/home/median-home-size-in-us/