Comments

Comment by brambleboy on Is Success the Enemy of Freedom? (Full) · 2024-11-14T17:13:21.549Z · LW · GW

It's been about 4 years. How do you feel about this now?

Comment by brambleboy on Are we dropping the ball on Recommendation AIs? · 2024-11-11T21:03:25.219Z · LW · GW

Bluesky has custom feeds that can bring in posts from all platforms that use the AT Protocol, but Bluesky is the only such platform right now. Most feeds I've found so far are simple keyword searches, which work nicely for having communities around certain topics, but I hope to see more sophisticated ones pop up.

Comment by brambleboy on Matt Goldenberg's Short Form Feed · 2024-11-05T07:40:08.790Z · LW · GW

While the broader message might be good, the study the video is about didn't replicate.

Comment by brambleboy on Isaac King's Shortform · 2024-10-15T05:43:59.714Z · LW · GW

While most people have super flimsy defenses of meat-eating, that doesn't mean everyone does. Some people simply think it's quite unlikely that non-human animals are sentient (besides primates, maybe). For example, IIRC Eliezer Yudkowsky and Rob Bensinger's guess is that consciousness is highly contingent on factors such as general intelligence and sociality, or something like that.

I think the "5% chance is still too much" argument is convincing, but it raises similar questions, such as "Are you really so confident that fetuses aren't sentient? How could you be so sure?"

Comment by brambleboy on (Maybe) A Bag of Heuristics is All There Is & A Bag of Heuristics is All You Need · 2024-10-12T18:34:36.986Z · LW · GW

I agree that origami AIs would still be intelligent if they implemented the same computations. I was trying to point at LLMs potentially being 'sphexish': having behaviors made of baked-in if-then patterns linked together that superficially resemble ones designed on the fly for a purpose. I think this is related to what the "heuristic hypothesis" is getting at.

Comment by brambleboy on (Maybe) A Bag of Heuristics is All There Is & A Bag of Heuristics is All You Need · 2024-10-11T06:13:48.171Z · LW · GW

The paper "Auto-Regressive Next-Token Predictors are Universal Learners" made me a little more skeptical of attributing general reasoning ability to LLMs. The authors show that even linear predictive models, basically just linear regression, can technically perform any algorithm when used autoregressively, as with chain-of-thought. The results aren't that mind-blowing, but they made me wonder whether performing certain algorithms correctly with a scratchpad is as much evidence of intelligence as I thought.
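
To make the point concrete, here's a deliberately trivial sketch (my own toy, not the paper's construction): the per-step "model" is a single linear map over one-hot tokens, yet unrolling it autoregressively carries out a multi-step procedure. The token names and the counting task are made up purely for illustration.

```python
import numpy as np

# Toy illustration: the per-step predictor is purely linear (a permutation
# matrix acting on one-hot token vectors), but feeding its outputs back in
# autoregressively executes a multi-step procedure -- here, counting mod 4.

tokens = ["0", "1", "2", "3"]
V = len(tokens)

# Linear next-token predictor: next one-hot = W @ current one-hot.
W = np.roll(np.eye(V), shift=1, axis=0)  # maps token i to token (i + 1) mod V

x = np.eye(V)[0]          # start from token "0"
trace = []
for _ in range(10):       # autoregressive unrolling: outputs become inputs
    trace.append(tokens[int(np.argmax(x))])
    x = W @ x

print(" ".join(trace))    # 0 1 2 3 0 1 2 3 0 1 -- the work lives in the unrolling
```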

Comment by brambleboy on If AI is in a bubble and the bubble bursts, what would you do? · 2024-10-11T05:03:17.521Z · LW · GW

Even if you know a certain market is a bubble, it's not exactly trivial to exploit if you don't know when it's going to burst, which prices will be affected, and to what degree. "The market can remain irrational longer than you can remain solvent" and all that.

Personally, while I think investment will decrease and companies will die off, I doubt there's a true AI bubble: there are so many articles about it being in a bubble that a pop couldn't possibly be a big surprise to the markets, and therefore the hypothetical pop is already priced out of existence. I think it's possible that some traders are waiting to pull the trigger on selling their shares once the market starts trending downwards, which would cause an abrupt drop and extra panic selling... but then it would correct itself pretty quickly if the prices weren't actually inflated before the dip. (I'm not a financial expert, so don't take this too seriously.)

Comment by brambleboy on What are some beautiful, rationalist artworks? · 2024-10-03T19:05:07.000Z · LW · GW

The fourth image is of the "Z machine", or the Z Pulsed Power Facility, which creates massive electromagnetic pulses for experiments. It's awesome.

Comment by brambleboy on What are your greatest one-shot life improvements? · 2024-10-01T21:06:29.113Z · LW · GW

I can second this. I recommend the Chrome extension Unhook, which lets you disable individual parts of YouTube, and Youtube-shorts block, which makes YouTube Shorts play like normal videos.

Comment by brambleboy on Proveably Safe Self Driving Cars [Modulo Assumptions] · 2024-09-23T22:15:26.208Z · LW · GW

(Disclaimer: I'm not very knowledgeable about safety engineering or formal proofs)

I notice that whenever someone brings up "What if this unexpected thing happens?", you emphasize that it's about not causing accidents. I'm worried that it's hard to define exactly who caused an accident, for the same reason that deciding who's liable in the legal system is hard.

It seems quite easy to say that the person who sabotaged the stop sign was at fault for the accident. What if the saboteur poured oil on the road instead? Is it their fault if the car crashes from sliding on the oil? Okay, they're being malicious, so they're at fault. But what if the oil spill was an accident from a truck tipping over? Is it the truck driver's fault? What if the road was slippery because of ice? Nobody causes the weather, right? On the contrary: the city could've cleared and salted the roads earlier, but they didn't. In the counterfactual world where they did it earlier, the accident wouldn't have happened.

Okay, how about instead of backward chaining forever, we just check whether the system could have avoided the accident in the counterfactual where it took different actions. The problem is: even in the case where an adversarial stop sign leads to the car crashing, the system potentially could've avoided it. Stop signs are placed by humans somewhat arbitrarily using heuristics to determine if an intersection is risky. Shouldn't the system be able to tell that an intersection is risky, even when there truly isn't a stop sign there?

The paper tackles the problem by formalizing which behaviors and assumptions regarding the movement of cars and pedestrians are "reasonable" or "unreasonable", then proving within the toy model that only unreasonable behavior leads to crashes. Makes sense, but in the real world people don't just follow paths; they do all kinds of things that influence the world. Wouldn't the legal system be simple if we could just use equations like these to determine liability? I'm just not sure we should expect to eventually cover the long tail of potential situations well enough to make "provably safe" meaningful.

Also, I'm concerned that they don't seem to describe it as a toy model despite the extremely simplified set of things they're considering, and that there might be questionable incentives at play to make the approach seem more sound than it is. From another document on their website:

We believe the industry can come together to create a collaborative platform which provides a “safety seal” that first and foremost will create a safer product, but at the same time will protect OEMs from unreasonable and unwarranted liability assignment by regulators and society.

So they want the cars to be safe, but they also want to avoid liability by proving the accident was someone else's fault.

Comment by brambleboy on Pronouns are Annoying · 2024-09-19T15:19:29.915Z · LW · GW

If random strangers start calling you "she", that implies you look feminine enough to be mistaken for a woman. I think most men would prefer to look masculine for many reasons: not being mistaken for a woman, being conventionally attractive, being assumed to have a 'manly' rather than 'effeminate' personality, looking your age, etc.

If you look obviously masculine, then being misgendered constantly would just be bewildering; surely something must be signaling that you use feminine pronouns.

If it's just people online misgendering you based on your writing, then that's less weird. But I think it still would bother some people for some of the reasons above.

Comment by brambleboy on AI forecasting bots incoming · 2024-09-11T04:26:03.102Z · LW · GW

I predict that integrating bots like these into social media platforms (in their current state) would be poorly received by the public. I think many people's reaction to Grok's probability estimate would be "Why should I believe this? How could Grok, or anyone, know that?" If it were a prediction market, the answer would be "because <economic and empirical explanation as to why you can trust the markets>". There's no equivalent answer for a new bot, besides "because our tests say it works" (making the full analysis visible might help). From these comments, it seems like it's not hard to elicit bad forecasts. Many members of the public would be learning about this kind of forecasting for the first time, and if the estimates aren't super impressive, it'll leave a bad taste in their mouths. Meanwhile the media will likely deride it as "Big Tech wants you to trust their fallible chatbots as fortune-tellers now".

Comment by brambleboy on I'm Sorry Fluttershy · 2024-08-22T22:46:55.433Z · LW · GW

The images on this post appear to be broken.

Comment by brambleboy on You should go to ML conferences · 2024-07-28T15:54:02.692Z · LW · GW

If you go on Twitter/X and find the right people, you can get most of the benefits you list here. There are tastemakers that share and discuss intriguing papers, and researchers who post their own papers with explanation threads which are often more useful than the papers themselves. The researchers are usually available to answer questions about their work, and you can read the answers they've given already. You're also ahead of the game because preprints can appear way before conferences.

Comment by brambleboy on When is a mind me? · 2024-07-12T00:14:21.835Z · LW · GW

It may be through extrapolating too much from your (first-person, subjective) experiences with objects that seemingly possess intrinsic, observer-independent properties, like the classical objects of everyday life.

Are you trying to say that quantum physics provides evidence that physical reality is subjective, with conscious observers having a fundamental role? Rob implicitly assumes the position advocated by The Quantum Physics Sequence, which argues that reality exists independently of observers and that quantum stuff doesn't suggest otherwise. It's just one of the many presuppositions he makes that's commonly shared on here. If that's your main objection, you should make that clear.

Comment by brambleboy on Stephen Fowler's Shortform · 2024-06-30T17:39:24.200Z · LW · GW

Another example in ML of a "non-conservative" optimization process: a common failure mode of GANs is mode collapse, wherein the generator and discriminator get stuck in a loop. The generator produces just one output that fools the discriminator, the discriminator memorizes it, the generator switches to another, until eventually they get back to the same output again.

In the rolling ball analogy, we could say that the ball rolls down into a divot, but the landscape flexes against the ball to raise it up again, and then the ball rolls into another divot, and so on.
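
A toy numerical sketch of that loop (made-up modes and scores, not a real GAN): the generator always best-responds with the single mode the discriminator currently suspects least, and the discriminator then learns to reject exactly that mode, so the pair cycles instead of converging.

```python
# Toy mode-collapse cycle: the "generator" emits whichever mode the
# "discriminator" currently finds most convincing, and the discriminator
# responds by memorizing that mode as fake, so they chase each other forever.

modes = ["A", "B", "C"]                # the outputs the generator could produce
fake_score = {m: 0.0 for m in modes}   # discriminator's suspicion of each mode

history = []
for step in range(9):
    # Generator best-responds: pick the mode the discriminator suspects least.
    g_out = min(modes, key=lambda m: fake_score[m])
    history.append(g_out)
    # Discriminator best-responds: mark this mode as fake.
    fake_score[g_out] += 1.0

print("".join(history))  # ABCABCABC -- the ball keeps rolling divot to divot
```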

Comment by brambleboy on Monthly Roundup #19: June 2024 · 2024-06-25T18:04:49.861Z · LW · GW

So of course Robin Hanson offered polls on these so-called taboo topics. The ‘controversial’ positions got overwhelming support. The tenth question, whether demographic diversity (race, gender) in the workplace often leads to worse performance got affirmed 54%-17%, and the rest were a lot less close than that. Three were roughly 90%-1%. I realize Hanson has unusual followers, but the ‘taboo questions’ academics want to discuss? People largely agree on the answers, and the academics have decided saying that answer out loud is not permitted.

I understand criticizing the censorship of controversial research, but to suggest that these questions aren't actually controversial or taboo outside of academia is absurd to me. People largely agree that “Genetic differences explain non-trivial (10% or more) variance in race differences in intelligence test scores"? Even a politician in a deeply conservative district wouldn't dare say that and risk scaring off his constituents.

Comment by brambleboy on Andrew Burns's Shortform · 2024-06-16T03:01:26.140Z · LW · GW

For those curious about the performance: eyeballing the technical report, it performs at roughly the level of Llama-3 70B. It seems to have an inferior parameters-to-performance ratio because it was only trained on 9 trillion tokens, while the Llama-3 models were trained on 15 trillion tokens. It's also trained with a 4k context length, as opposed to Llama-3's 8k. Its primary purpose seems to be the synthetic data pipeline thing.

Comment by brambleboy on Inadequacy and Modesty · 2024-04-15T03:43:15.435Z · LW · GW

I encountered this while reading about an obscure estradiol ester, estradiol undecylate, used for hormone replacement therapy and treating prostate cancer. It's very useful because it has a super long half-life, but it was discontinued. I had to reread the article to be sure I understood: the standard dose, chosen arbitrarily in the first trials, was hundreds of times larger than necessary, leading to massive estrogen overdoses and severe side effects that killed many people through cardiovascular complications. And yet these insane doses were typical for decades and might've caused its discontinuation.

Comment by brambleboy on Chaotic Inversion · 2024-03-12T06:43:00.190Z · LW · GW

Although it's been over a decade since this was posted, decent waterproof phone mounts now exist, too.

Comment by brambleboy on My cover story in Jacobin on AI capitalism and the x-risk debates · 2024-02-13T18:43:40.906Z · LW · GW

Thank you for writing this; it's by far the strongest argument I've seen for taking this problem seriously that's tailored to leftists, and I'll be sharing it. Hopefully the frequent (probably unavoidable) references to EA don't turn them off too much.

Comment by brambleboy on How to deal with the sense of demotivation that comes from thinking about determinism? · 2024-02-07T19:01:53.529Z · LW · GW

Here's why determinism doesn't bother me. I hope I get it across.

Deterministic systems still have to be simulated to find out what happens. Take cellular automata, such as Conway's Game of Life or Wolfram's Rule 110. The result of all future steps is determined by the initial state, but we can't practically "skip ahead" because of what Wolfram calls 'computational irreducibility': despite the simplicity of the underlying program, there's no way to reduce the output to a calculation that's much cheaper than just simulating the whole thing. Same with a mathematical structure like the Mandelbrot set: its appearance is completely determined by the function, and yet we couldn't predict what we'd see until we computed it. In fact, all math is like this.
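
A quick Rule 110 sketch makes the "no skipping ahead" point concrete: the update rule is a one-liner, but the only known way to learn the state at step 30 is to compute steps 1 through 29 first. (The grid size and starting pattern below are arbitrary.)

```python
# Rule 110: a cell's next state depends on (left neighbor, itself, right
# neighbor). The rule number 110 = 0b01101110 encodes the output bit for
# each of the 8 possible neighborhoods.

RULE = 110

def step(cells):
    n = len(cells)
    nxt = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right
        nxt.append((RULE >> neighborhood) & 1)
    return nxt

# Start from a single live cell and just run it: despite the simplicity of
# the rule, there's no cheaper formula for row 30 than computing rows 1-29.
cells = [0] * 40
cells[-2] = 1
for t in range(30):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```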

What I'm getting at is that all mathematical truths are predetermined, and yet I doubt this gives you a sense that being a mathematician is pointless, because obviously these truths have to be discovered. As with the universe: the future is determined, and yet we, or even a hypothetical outsider with a massive computer, have to discover it.

Our position is better than that, though: we're not just looking at the structure of the universe from the outside, we're within it. We're part of what determines the future: it's impossible to calculate everything that happens in the future without calculating everything we humans do. The universe is determined by the process, and the process is us. Hence, our choices determine the future.

Comment by brambleboy on Response to Quintin Pope's Evolution Provides No Evidence For the Sharp Left Turn · 2023-10-06T01:48:29.808Z · LW · GW

I disagree that the Reversal Curse demonstrates a fundamental lack of sophistication of knowledge on the model's part. As Neel Nanda explained, it's not surprising that current LLMs store A -> B but not B -> A, since they're basically lookup tables, and this is definitely an important limitation. However, I think this is mainly due to a lack of computational depth. LLMs can perform that kind of deduction when the information is external: if you prompt one with who Tom Cruise's mom is, it can then answer who Mary Lee Pfeiffer's son is. If the LLM knew the first part already, you could just prompt it to answer the first question before prompting it with the second.

I suspect that a recurrent model like the Universal Transformer would be able to perform the A -> B to B -> A deduction internally, but for now LLMs must do multi-step computations like that externally with a chain-of-thought. In other words, they can deduce new things, just not in a single forward pass or during backpropagation. If that doesn't count, then all other demonstrations of multi-step reasoning in LLMs don't count either. This deduced knowledge is usually discarded, but we can make it permanent with retrieval or fine-tuning. So, I think it's wrong to say that this entails a fundamental barrier to wielding new knowledge.
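
Here's a hypothetical sketch of what that externalized two-step deduction looks like; `ask` stands in for any LLM call and is stubbed with a canned answer just so the snippet is self-contained.

```python
# Externalizing the A -> B / B -> A deduction with a scratchpad.
# `ask` is a stand-in for a real LLM query; here it's a lookup table so the
# example runs on its own.

def ask(prompt: str) -> str:
    canned = {
        "Who is Tom Cruise's mother?": "Mary Lee Pfeiffer",
    }
    return canned.get(prompt, "I don't know.")

# Step 1: elicit the A -> B fact the model does store in its weights.
mother = ask("Who is Tom Cruise's mother?")

# Step 2: feed that intermediate result back into the context, so the
# reversed B -> A question becomes answerable from the prompt rather than
# from the weights alone.
followup = f"{mother} is Tom Cruise's mother. Who is {mother}'s son?"
print(followup)
```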