Posts

Benefits of Psyllium Dietary Fiber in Particular 2024-08-28T18:13:23.891Z
Increase the tax value of donations with high-variance investments? 2024-03-03T01:39:45.473Z
Clip keys together with tiny carabiners 2024-01-31T04:26:57.388Z
Techniques to fix incorrect memorization? 2023-12-30T21:32:46.922Z
The case for aftermarket blind spot mirrors 2023-10-09T19:30:22.843Z
If you're not a morning person, consider quitting allergy pills 2023-05-24T20:11:07.131Z
Additional space complexity isn't always a useful metric 2023-01-04T21:53:05.049Z
Is asymptomatic transmission less common after vaccination? 2022-02-02T20:53:42.188Z
Are there good classes (or just articles) on blog writing? 2021-04-19T01:10:21.368Z
Have We Been Interpreting Quantum Mechanics Wrong This Whole Time? 2017-05-23T16:38:35.338Z
Building Safe A.I. - A Tutorial for Encrypted Deep Learning 2017-03-21T15:17:54.971Z
Headlines, meet sparklines: news in context 2017-02-18T16:00:46.212Z

Comments

Comment by Brendan Long (korin43) on The Humanitarian Economy · 2024-11-13T23:44:55.424Z · LW · GW

The problem is that lack of money isn't the reason there's not enough housing in places that people want to live. Zoning laws intentionally exclude poor people because rich people don't want to live near them. Allocating more money to the problem doesn't really help (see: the ridiculous amount of money California spends on affordable housing), and if you fixed the part where it's illegal, the government spending isn't necessary because real estate developers would build apartments without subsidies if they were allowed to.

Also, the most recent election shows that ordinary people really, really don't like inflation, so I don't think printing trillions of dollars for this purpose is actually more palatable.

Comment by Brendan Long (korin43) on The Humanitarian Economy · 2024-11-13T19:32:04.139Z · LW · GW

You're right, I was taking the section saying "In this new system, the only incentive to do more and go further is to transcend the status quo in some way, and earn recognition for a unique contribution." too seriously. On a second re-read, it seems like your proposal is actually just to print money to give people food stamps and housing vouchers. I think the answer to why we don't do that is that we do that.

Food is essentially a solved problem in the United States, and the biggest problem with housing vouchers is that there physically isn't enough housing in some areas. Printing more money doesn't cause more housing to exist (it could change incentives, but incentives don't matter much when building housing for poor people is largely illegal).

Comment by Brendan Long (korin43) on The Humanitarian Economy · 2024-11-13T06:39:59.952Z · LW · GW

I think you've re-invented Communism. The reason we don't implement it is that in practice it's much worse for everyone, including poor people.

Comment by Brendan Long (korin43) on Bellevue Library Meetup - Nov 23 · 2024-11-10T05:28:48.474Z · LW · GW

I'll try to make it but I might be moving that day so I'm not sure :\

Comment by Brendan Long (korin43) on AI #89: Trump Card · 2024-11-07T23:21:13.245Z · LW · GW

Finally, note to self, probably still don’t use SQLite if you have a good alternative? Twice is suspicious, although they did fix the bug same day and it wasn’t ever released.

But is this because SQLite is unusually buggy, or because its code is unusually open, short and readable and thus understandable by an AI? I would guess that MySQL (for example) has significantly worse vulnerabilities but they're harder to find.

Comment by Brendan Long (korin43) on Feedback request: what am I missing? · 2024-11-02T17:58:21.378Z · LW · GW

I don't know anything about you in particular, but if you know alignment researchers who would recommend you, could you get them to refer you either internally or through their contacts?

Comment by Brendan Long (korin43) on What is the alpha in one bit of evidence? · 2024-10-23T15:12:21.521Z · LW · GW

This is actually why a short position (a complicated loan) would theoretically work. If we all die, then you, as someone else's counterparty, never need to pay your loan back.

(I think this is a bad idea, but not because of counterparty risk)

Comment by Brendan Long (korin43) on What is the alpha in one bit of evidence? · 2024-10-23T15:09:56.023Z · LW · GW

I think the idea is that short position pays off up-front, and then you don't need to worry about the loan if everyone's dead.

If by paying off you mean this bet actually working, I think you're right though. It seems more likely that the stock market would go up in the short term, forcing you to cover at a higher price and losing a bunch of money. And if the market stays flat, you'll still lose money on interest payments unless doom is coming this year.

Comment by Brendan Long (korin43) on Bellevue-Redmond USA - ACX Meetups Everywhere Fall 2024 · 2024-10-16T19:09:01.446Z · LW · GW

I'll be out of town (getting married on the 25th) but I'd be happy to do something the weekend after.

Comment by Brendan Long (korin43) on Advice for journalists · 2024-10-08T22:04:48.695Z · LW · GW

I don't think this is actually the rule by common practice (and not all bad things should be illegal). For example, if one of your friends/associates says something that you think is stupid, going around telling everyone that they said something stupid would generally be seen as rude. It would also be seen as crazy if you overheard someone saying something negative about their job and then went out of your way to tell their boss.

In both cases there would be exceptions, like if the person's boss is your friend, or safety reasons like you mentioned, but I think by default sharing negative information about people is seen as bad, even if it's sometimes considered a low level of bad (like with gossip).

Comment by Brendan Long (korin43) on Advice for journalists · 2024-10-08T20:25:12.741Z · LW · GW

I also agree with this to some extent. Journalists should be most concerned about their readers, not their sources. They should care about accurately quoting their sources because misquoting does a disservice to their readers, and they should care about privacy most of the time because having access to sources is important to providing the service to their readers.

I guess this post is from the perspective of being a source, so "journalists are out to get you" is probably the right attitude to take, but it's good actually for journalists to prioritize their readers over sources.

Comment by Brendan Long (korin43) on Advice for journalists · 2024-10-08T20:18:14.461Z · LW · GW

The convenient thing about journalism is that the problems we're worried about here are public, so you don't need to trust the list creators as much as you would in other situations. This is why I suggest giving links to the articles, so anyone reading the list can verify for themselves that the article commits whichever sin it's accused of.

The trickier case would be protecting against the accusers lying (i.e. tell journalist A something bad and then claim that they made it up). If you have decent verification of accusers' identities you might still get a good enough signal-to-noise ratio, especially if you include positive 'reviews'.

Comment by Brendan Long (korin43) on Advice for journalists · 2024-10-07T19:21:06.634Z · LW · GW

I largely agree with this article but I feel like it won't really change anyone's behavior. Journalists act the way they do because that's what they're rewarded for. And if your heuristic is that all journalists are untrustworthy, it makes it hard for trustworthy journalists to get any benefit from that.

A more effective way to change behavior might be to make a public list of journalists who are or aren't trustworthy, with specific information about why ("In [insert URL here], Journalist A asked me for a quote and I said X, but they implied inaccurately that I believe Y" "In [insert URL here], Journalist B thought that I believe P but after I explained that I actually believe Q, they accurately reflected that in the article", or just boring ones like "I said X and they accurately quoted me as saying X", etc.).

Comment by Brendan Long (korin43) on AI #83: The Mask Comes Off · 2024-09-26T18:24:04.607Z · LW · GW

It would be very surprising to me if such ambitious people wanted to leave right before they had a chance to make history though.

Comment by Brendan Long (korin43) on [Completed] The 2024 Petrov Day Scenario · 2024-09-26T17:26:11.779Z · LW · GW

They can't do that since it would make it obvious to the target that they should counter-attack.

Comment by Brendan Long (korin43) on Benefits of Psyllium Dietary Fiber in Particular · 2024-09-24T21:29:24.205Z · LW · GW

As an update: Too much psyllium makes me feel uncomfortably full, so I imagine that's part of the weight loss effect of 5 grams of it per meal. I did some experimentation but ended up sticking with 1 gram per meal or snack, in 500 mg capsules and taken with water.

I carry 8 of these pills (enough for 4 meals/snacks) in my pocket in small flat pill organizers.

It's still too early to assess the impact on cholesterol but this helps with my digestive issues, and it seems to help me not overeat delicious foods to the same extent (i.e. on a day where I previously would have eaten 4 slices of pizza for lunch, I find it easy to eat 2 slices + psyllium instead).

Comment by Brendan Long (korin43) on Why the 2024 election matters, the AI risk case for Harris, & what you can do to help · 2024-09-24T20:53:19.524Z · LW · GW

Biden and Harris have credibly committed to help Taiwan. Trump appears much more isolationist and less likely to intervene, which might make China more likely to invade.

I personally think it's good for us to protect friendly countries like this, but isn't China invading Taiwan good for AI risk, since destroying the main source of advanced chips would slow down timelines?

You also mention Trump's anti-democratic tendencies, which seem bad for standard reasons, but not really relevant to AI existential risk (except to the extent that he might stay in power and continue making bad decisions 4+ years out).

Comment by Brendan Long (korin43) on The Sun is big, but superintelligences will not spare Earth a little sunlight · 2024-09-23T22:56:06.988Z · LW · GW

I think it's important that AIs will be created within an existing system of law and property rights. Unlike animals, they'll be able to communicate with us and make contracts. It therefore seems perfectly plausible for AIs to simply get rich within the system we have already established, and make productive compromises, rather than violently overthrowing the system itself.

I think you disagree with Eliezer on a different crux (whether the alignment problem is easy). If we could create AIs that follow the existing system of law and property rights (including the intent of the laws, and don't exploit loopholes, and don't maliciously comply with laws, and don't try to get the law changed, etc.) then that would be a solution to the alignment problem, but the problem is that we don't know how to do that.

Comment by Brendan Long (korin43) on My Critique of Effective Altruism · 2024-09-23T21:17:09.225Z · LW · GW

I think trying to be Superman is the problem, but I'm ok if that line of thinking doesn't work for you.

Do you mean in the sense that people who aren't Superman should stop beating themselves up about it (a real problem in EA), or that even if you are (financial) Superman, born in the red-white-and-blue light of a distant star, you shouldn't save people in other countries because that's bad somehow?

Comment by Brendan Long (korin43) on The Sun is big, but superintelligences will not spare Earth a little sunlight · 2024-09-23T05:23:27.079Z · LW · GW

The argument using Bernard Arnault doesn't really work. He (probably) won't give you $77 because if he gave everyone $77, he'd spend a very large portion of his wealth. But we don't need an AI to give us billions of Earths. Just one would be sufficient. Bernard Arnault would probably be willing to spend $77 to prevent the extinction of a (non-threatening) alien species.

(This is not a general-purpose argument against worrying about AI or other similar arguments in the same vein, I just don't think this particular argument in the specific way it was written in this post works)

Comment by Brendan Long (korin43) on My Critique of Effective Altruism · 2024-09-20T19:57:20.217Z · LW · GW

I'm only vaguely connected to EA in the sense of donating more-than-usual amounts of money in effective ways (❤️ GiveDirectly), but this feels like a strawman. I don't think the average EA would recommend charities that hurt other people as side effects, work actively harmful jobs to make money[1], or generally utilitarian-maxx.

The EA trolley problem is that there are thousands (or millions) of trolleys of varying difficulty to stop, barreling toward varying groups of people. The problem isn't that stopping them hurts other people (it doesn't), it's just that you can't stop them all. You don't need to be a utilitarian to think that if it's raining planes, Superman should start by catching the 747s.

  1. ^

    For example, high-paying finance jobs are high-stress and many people don't like working them, but they're not actually bad for the world.

Comment by Brendan Long (korin43) on Monthly Roundup #22: September 2024 · 2024-09-17T20:22:21.690Z · LW · GW

One listed idea was that you can buy reservations at one website directly from the restaurant, with the price going as a downpayment. The example given was $1,000 for a table for two at Carbone, with others being somewhat less. As is pointed out, that fixes the incentives for booking, but once you show up you are now in all-you-can-eat mode at a place not designed for that.

I've been to several restaurants that do some form of this, from a small booking fee that gets refunded when you check in, to just paying entirely up-front (for restaurants with pre-set menus).

This is built into OpenTable so it's not even that hard. I'm really confused why more restaurants don't do this.

Comment by Brendan Long (korin43) on Food, Prison & Exotic Animals: Sparse Autoencoders Detect 6.5x Performing Youtube Thumbnails · 2024-09-17T18:25:18.719Z · LW · GW

I'm not a video creator, but I wonder if this could be turned into a useful tool that takes the stills from a video and predicts which ones will get the highest engagement.

Comment by Brendan Long (korin43) on Bellevue-Redmond USA - ACX Meetups Everywhere Fall 2024 · 2024-09-15T03:01:33.561Z · LW · GW

Also if anyone's interested in the other meetups I mentioned, there's:

  • The Millenial Social Club meetup group plays board games every Friday in the food court in Lincoln Tower South in Bellevue. The group is always huge (30+ people). It looks like they started doing it on Sundays recently too. https://meetu.ps/e/Ns6hh/blHm6/i
  • There's a Seattle Rationalists reading group that meets on Mondays in Seattle. https://meetu.ps/e/NrycV/blHm6/I
  • Seattle Effective Altruists occasionally has social meetups in Redmond but I don't know when the next will be: https://meetu.ps/e/Ns3Gt/blHm6/I

If anyone finds any other social rationalist-adjacent meetups on the east side I'd love to know, since I'm not really into book clubs and getting into Seattle is too hard after work.

Comment by Brendan Long (korin43) on Bellevue-Redmond USA - ACX Meetups Everywhere Fall 2024 · 2024-09-15T02:52:26.605Z · LW · GW

In case anyone's wondering, the lights I talked about were these:

https://store.waveformlighting.com/products/centric-daylight-95-cri-t5-led-linear-light-fixture?variant=39433427845222

I have 8 of the 4 ft 5000K version (they're cheaper in 4-packs). I have them plugged into a switched outlet and daisy-chained together, and they're attached at the top of the wall to make it look like light is coming down from all around. They're tedious to set up but worth it in my opinion.

I like the 5000K version but some people might like warmer light like 4000K (or 6500K if you really like blue). https://www.waveformlighting.com/home-residential/which-led-light-color-temperature-should-i-choose

There are probably cheaper, similarly good lights available, but Waveform's marketing materials worked on me: https://www.waveformlighting.com/high-cri-led

Comment by Brendan Long (korin43) on Economics Roundup #3 · 2024-09-10T20:28:15.638Z · LW · GW

I have a severely ‘unbalanced’ portfolio of assets for this reason, and even getting rid of the step-up on death would not change that in many cases.

What would be the point of not realizing gains indefinitely if we got rid of the step-up on death?

Comment by Brendan Long (korin43) on Physical Therapy Sucks (but have you tried hiding it in some peanut butter?) · 2024-09-10T17:57:25.015Z · LW · GW

I don't enjoy PT or exercise, but mostly because it's boring / feels like a waste of time. My peanut butter is to do things that involve exercise but where the purpose isn't strictly exercise, or where I get some other benefit:

  • Biking to work every day takes me about the same amount of time as driving and is more fun. Hills weren't fun so I got an e-bike and with sufficient assist they became fun again. As I get more in shape, I find myself turning the assist down because I don't really need it.
  • Biking to restaurants and bars is also fun.
  • I like going on walks with friends and talking, so why not do that while walking up a mountain?
  • I joined a casual dodgeball league for fun and meeting people, and as a side effect do the cardio equivalent of two hours of jogging every Sunday.
  • Indoor rock climbing feels a little bit like exercise, but it's also a group activity that involves a lot of downtime just talking.

(I've yet to find a good way to mix my shoulder PT into anything fun, so I just keep exercise bands at my desk at work)

Comment by Brendan Long (korin43) on Pay Risk Evaluators in Cash, Not Equity · 2024-09-07T20:54:09.543Z · LW · GW

It would be expensive, but it's not a hard constraint. OpenAI could almost certainly raise another $600M per year if they wanted to (they're allegedly already losing $5B per year now).

Also the post only suggests this pay structure for a subset of employees.

Comment by Brendan Long (korin43) on Pay Risk Evaluators in Cash, Not Equity · 2024-09-07T17:31:20.983Z · LW · GW

For companies that are doing well, money isn't a hard constraint. Founders would rather pay in equity because it's cheaper than cash[1], but they can sell additional equity and pay cash if they really want to.

  1. ^

    Because they usually give their employees a bad deal.

Comment by Brendan Long (korin43) on on Science Beakers and DDT · 2024-09-05T20:27:23.947Z · LW · GW

A century ago, it was predicted that by now, people would be working under 20 hours a week.

And this prediction was basically correct, but missed the fact that it's more efficient to work 30-40 hours per week while working and then take weeks or decades off when not working.

The extra time has gone to more leisure, less child labor, more schooling, and earlier retirement (plus support for people who can't work at all).

Comment by Brendan Long (korin43) on A Comparison Between The Pragmatosphere And Less Wrong · 2024-09-04T19:55:18.862Z · LW · GW

The Overpopulation FAQs is about overpopulation, not necessarily water scarcity. Water scarcity can contribute to overpopulation, but it is only one of multiple potential causes.

My point is that when LessWrongers see that there isn't enough water for a given population, we try to fix the water, not the people.

I wrote that EA is mostly misguided because it makes faulty assumptions. And to the contrary, I did praise a few things about EA.

Yes, I read your argument that preventing people from dying of starvation and/or disease is bad:

In some ways, the justification for EA assumes a fallacy of composition since EA believes that people can and should help everyone. [...] To the contrary, I’d argue that a lot of charities that supposedly have the greatest amount of “good” for humanity would contribute to overpopulation, which would negate their benefits in the long run. For example, programs to prevent malaria, provide clean water, and feed starving families in Sub-Saharan Africa would hasten the Earth’s likelihood of becoming overpopulated and exacerbate dysgenics.

So yes, maybe this is my cult programming, but I would rather we do the hard work of supporting a higher population (solar panels, desalination, etc.) than let people starve to death.

Comment by Brendan Long (korin43) on A Comparison Between The Pragmatosphere And Less Wrong · 2024-09-04T18:07:23.345Z · LW · GW

I'm partially downvoting this for the standard reason that I want to read actual interesting posts and not posts about "Why doesn't LessWrong like my content? Aren't you a cult if you don't agree with me?".

But I'm also downvoting because I specifically think it's good that LessWrong doesn't have a bunch of posts about how we're going to run out of water(?!) if we don't forcibly sterilize people, or that EA is bad because altruism is bad. Sorry, I just can't escape my cult programming here. Helping people is Good Actually and I'd rather solve resource shortages by making more.

Comment by korin43 on [deleted post] 2024-09-01T18:45:01.258Z

This is an interesting idea, but I found these images and descriptions confusing and not really helpful.

Comment by Brendan Long (korin43) on Benefits of Psyllium Dietary Fiber in Particular · 2024-08-30T18:18:00.284Z · LW · GW

One other thing I didn't think to mention in the post above is that I used to think of fiber as one category, so if I was eating something "high fiber" like vegetables or oats, I wouldn't take psyllium since "I'm already getting fiber", and then I'd feel worse. Since reading this, I'm taking psyllium with my oats and it improved the experience a lot (since the psyllium helps counteract the irritating effects of the insoluble fiber in oats).

Comment by Brendan Long (korin43) on Benefits of Psyllium Dietary Fiber in Particular · 2024-08-29T22:22:58.445Z · LW · GW

I've had the same experience a few times and can confirm that it's not great. At this point I drink a whole glass of water when I take it, and I usually take it with a meal (my theory is that this might mix it up more so even if there's not enough water, it won't be one solid clump).

Comment by Brendan Long (korin43) on Bellevue-Redmond USA - ACX Meetups Everywhere Fall 2024 · 2024-08-29T19:40:31.768Z · LW · GW

I'm excited, I've never actually had an ACX meetup in the town I live in.

The food court in Lincoln Square South has been working surprisingly well for a boardgame meetup. I wonder if it would work for this too.

Comment by Brendan Long (korin43) on Why Large Bureaucratic Organizations? · 2024-08-28T21:06:52.735Z · LW · GW

I think you might be living in a highly-motivated, smart, and conscientious tech worker bubble. A lot of people are hard to convince to even show up to work consistently, let alone do things no one is telling them to do. And then even if they are self-motivated, you run into problems with whether their ideas are good or not.

Individual companies can solve this by heavily filtering applicants (and paying enough to attract good ones), but you probably don't want to filter and pay your shelf-stockers like software engineers. Plus if you did it across all of society, you'd leave a lot of your workers permanently unemployed.

Comment by Brendan Long (korin43) on Benefits of Psyllium Dietary Fiber in Particular · 2024-08-28T18:29:22.874Z · LW · GW

One important thing is that you don't have to pick one or the other. I plan to take psyllium for IBS plus eat oats (high in soluble, fermenting fiber) for the microbiome benefits and improved cholesterol. Both should help with weight loss (in similar ways) and cholesterol (oats will help more because the fiber they contain ferments into substances that also reduce cholesterol, but both will reduce cholesterol via the bile removal method).

Insoluble fiber doesn't help with any problems that I have, and exacerbates my IBS, so I plan to (weakly) avoid it: I'll continue eating foods high in insoluble fiber if they're good for me in other ways (oats) or tasty (pineapple), but I'll avoid concentrated forms (wheat bran) and foods high in them that I don't like anyway (whole wheat).

Comment by Brendan Long (korin43) on The economics of space tethers · 2024-08-22T17:56:11.487Z · LW · GW

This linked article goes into some options for that: https://toughsf.blogspot.com/2020/07/tethers-all-way.html

  • You can use the tether to catch payloads on the way down and boost the tether back up while also reducing the payload's need for heat shielding
  • You can use more efficient engines with low thrust/weight ratios to reboost the tether
  • There are some propellant-free options that use Earth's magnetic field to reboost the tether in exchange for energy (I'm unsure if the energy needs are practical or not)

If you had a way to catch them, I think you could just throw rocks down the gravity well and catch them for a boost too.

Comment by Brendan Long (korin43) on Investigating the Chart of the Century: Why is food so expensive? · 2024-08-16T22:06:44.047Z · LW · GW

Doesn't that just make it even more confusing? I guess we also buy taxis for our groceries, but the overhead is much lower when you're buying hundreds of dollars worth of groceries instead of a $10 burrito. Plus, these prices all tracked each other from 2000-2010, but Instacart didn't even exist until 2012.

Comment by Brendan Long (korin43) on Does VETLM solve AI superalignment? · 2024-08-09T22:58:37.484Z · LW · GW

Ah, I misread the quote you included from Nathan Helm-Burger. That does make more sense.

This seems like a good idea in general, and would probably make one of the things Anthropic is trying to do (find the "being truthful" neuron) easier.

I suspect this labeling and using the labels is still harder than you think though, since individual tokens don't have truth values.

I looked through the links you posted and it seems like the push-back is mostly around things you didn't mention in this post (prompt engineering as an alignment strategy).

Comment by Brendan Long (korin43) on Practical advice for secure virtual communication post easy AI voice-cloning? · 2024-08-09T22:41:27.091Z · LW · GW

You could probably use an OTP app for this.

  1. Alice generates a random OTP secret and adds it to her OTP app as "Bob".
  2. Bob adds the same OTP secret in his app as "Alice"

To confirm the other's identity:

  1. Alice asks Bob for the code his app is showing under "Alice"
  2. Alice confirms that her phone is showing the same code under "Bob"
  3. If Bob wants proof of Alice's identity, he can ask her for the next code to show up

I think this works similarly to your written down sentences, but you'll never run out. It has the same problem in situations where people don't have their stuff though (although your family is probably more likely to have their phone than a random piece of paper).

One piece of complexity is that OTP depends on the time, so if you're sufficiently de-synchronized the numbers won't align perfectly (although if Bob keeps reading off numbers, eventually one of them should show up on Alice's app).
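For concreteness, here's a minimal sketch of the TOTP scheme (RFC 6238) that standard authenticator apps implement, assuming six-digit codes on a 30-second step. The function names and the "Alice"/"Bob" setup are just illustration, not any particular app's API:

```python
import base64, hashlib, hmac, secrets, struct, time

def generate_shared_secret():
    """Step 1: Alice generates a random secret and shares it with Bob out-of-band."""
    return base64.b32encode(secrets.token_bytes(20)).decode()

def totp_code(secret_b32, timestamp=None, step=30, digits=6):
    """Standard TOTP: HMAC-SHA1 over the 30-second time counter, truncated to 6 digits."""
    counter = int((timestamp if timestamp is not None else time.time()) // step)
    key = base64.b32decode(secret_b32)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def codes_match(secret_b32, spoken_code, window=1):
    """Accept codes from the current step or +/- `window` steps to tolerate clock drift."""
    now = time.time()
    return any(
        hmac.compare_digest(totp_code(secret_b32, now + offset * 30), spoken_code)
        for offset in range(-window, window + 1)
    )

# Setup (done once, in person or over a trusted channel):
shared = generate_shared_secret()   # Alice saves this as "Bob", Bob saves it as "Alice"

# Later, over a suspicious phone call:
code_bob_reads_aloud = totp_code(shared)          # the code Bob's app is showing
print(codes_match(shared, code_bob_reads_aloud))  # Alice checks it against her copy -> True
```

The `codes_match` window is the same trick real apps use for the clock-drift issue mentioned above: accept the neighboring 30-second codes rather than requiring an exact match.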

Comment by Brendan Long (korin43) on Does VETLM solve AI superalignment? · 2024-08-09T18:02:37.174Z · LW · GW

I can't speak for anyone else, but the reason I'm not more interested in this idea is that I'm not convinced it could actually be done. Right now, big AI companies train on piles of garbage data since it's the only way they can get sufficient volume. The idea that we're going to produce a similar amount of perfectly labeled data doesn't seem plausible.

I don't want to be too negative, because maybe you have an answer to that, but maybe-working-in-theory is only the first step and if there's no visible path to actually doing what you propose, then people will naturally be less excited.

Comment by Brendan Long (korin43) on Can UBI overcome inflation and rent seeking? · 2024-08-01T22:15:05.911Z · LW · GW

You're right, and I hadn't thought of that. I think you'd still get the overall effect of a real transfer from richer to poorer people, but the way the tax falls on specific people would be different based on how much money they save and whether they save it in the form of dollars, plus whether they get paid in dollars.

Comment by Brendan Long (korin43) on Can UBI overcome inflation and rent seeking? · 2024-08-01T06:08:42.613Z · LW · GW

An important piece of this is that shifting the relative distribution of money also shifts the distribution of real resources. So absent legal restrictions, if more people have money they want to spend on housing, you should expect more housing to be built, not just for the existing supply to get more expensive (and in exchange, you should expect less of whatever the people paying for the UBI want produced; regardless of whether they pay via taxes or inflation).

Comment by Brendan Long (korin43) on Can UBI overcome inflation and rent seeking? · 2024-08-01T02:12:40.970Z · LW · GW

A UBI in the US might cause what you're suggesting, since there tend to be more restrictions on needs vs wants. i.e. no one will stop you from building a superyacht if you want, but there's a lot of artificial barriers to building cheap apartments. So if you shift demand from things rich people want to things poor people want, you might get a lot of the money transferred to the owners of the last few cheap apartments that were allowed to be built.

This seems like more of an argument against that kind of law that outlaws anything that rich people don't want though, not an argument against UBI.

Comment by Brendan Long (korin43) on Can UBI overcome inflation and rent seeking? · 2024-08-01T02:05:08.754Z · LW · GW

You could also take this further and finance a large UBI by printing money, and this would cause (more) inflation, but if you model it out it ends up doing the same sort of transfer from richer people to poorer people as progressive tax financing (people with more money are "taxed" more by inflation).
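A toy model of that claim, with made-up numbers just to show the direction of the transfer: the government prints new money for a flat UBI, prices rise in proportion to the larger money supply, and the loss of real purchasing power falls on people in proportion to the dollars they hold.

```python
# Assumed (illustrative) dollar holdings; not real data.
holdings = {"poor": 1_000, "middle": 10_000, "rich": 1_000_000}
ubi_per_person = 5_000
printed = ubi_per_person * len(holdings)

old_supply = sum(holdings.values())
new_supply = old_supply + printed
price_level = new_supply / old_supply   # everything now costs this much more

for person, cash in holdings.items():
    real_before = cash
    real_after = (cash + ubi_per_person) / price_level
    print(f"{person:>6}: change in real purchasing power = {real_after - real_before:+.0f}")

# The rich holder loses the most real value (their cash is diluted the most),
# so the scheme works like a wealth-proportional "tax" funding the flat transfer.
```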

Comment by Brendan Long (korin43) on Ambiguity in Prediction Market Resolution is Still Harmful · 2024-08-01T01:56:11.080Z · LW · GW

What should happen if the CNE declares Maduro the winner, but Venezuela's National Assembly refuses to acknowledge Maduro's win and appoints someone else to the presidency? [...] Do we need to wait on resolving the market until we see whether that happens again?

But the market isn't "who will eventually become president", it's "who will win the election (according to official sources)". Like how "who will win the US election (according to AP, Fox and NBC)" and "who will be president on inauguration day" are different questions.

The standard of "what if the result changes" would make almost any market impossible to resolve. Like what if AP/Fox/NBC call the election for Harris, but then Trump does a coup and threatens them until they announce that actually he won? What if Trump wins but the person who actually gets sworn in is an actor who looks like Trump? Do we need to wait to see if that happens before we resolve the question?

Making most questions not resolve at all is worse than weird edge cases where they resolve in ways people don't like, so I think in the absence of clear rules that the question won't resolve until some standard is met, resolving as soon as possible seems like the best default.

Comment by Brendan Long (korin43) on Ambiguity in Prediction Market Resolution is Still Harmful · 2024-07-31T20:57:46.147Z · LW · GW

The primary resolution source for this market will be official information from Venezuela

I'm confused about how this is ambiguous? It's sort of awkward that "official information from Venezuela" and "a consensus of credible reporting" give different answers, but it's clear that the official info is primary.

Comment by Brendan Long (korin43) on How tokenization influences prompting? · 2024-07-29T19:52:34.209Z · LW · GW

This is a real effect, and this article gives an example with URLs: https://towardsdatascience.com/the-art-of-prompt-design-prompt-boundaries-and-token-healing-3b2448b0be38

":" and "://" are different tokens in this LLM, so prompting with a URL starting with "http:" gives bad results because it can't use the "://" token.

Although this can be improved with a technique called "token healing" that essentially steps backwards in the prompt and then allows any next token that starts with the same characters in the prompt (i.e. in the "http:" example, it steps backwards to "http" and allows any continuation that starts with ":" in its first token).
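For anyone who wants to poke at this, here's a small sketch using the tiktoken library with the GPT-2 vocabulary; the exact splits depend on the tokenizer, so the printed pieces are illustrative rather than guaranteed:

```python
import tiktoken

enc = tiktoken.get_encoding("gpt2")

for prompt in ["The link is http://", "The link is http:"]:
    tokens = enc.encode(prompt)
    pieces = [enc.decode([t]) for t in tokens]  # show how the prompt was split
    print(f"{prompt!r} -> {pieces}")

# If "://" is a single token in the vocabulary, the second prompt ends with a
# bare ":" token, so the model can't continue with its usual "://" token and
# has to complete from an unusual boundary. Token healing would back up to the
# "http" token and constrain the next token to ones whose text starts with ":".
```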

Note that this only applies at the level of tokens, so in your example it's true that the next token can't be ": T", but with standard tokenizers, you'll also get a token for every substring of your longer tokens, so it could be just "T". Whether this makes things better or worse depends on which usage was more common/better in the training data.