Pacing is a common stimming behavior. Stimming is associated with autism / sensory processing disorder, but neurotypical people do it too.
This seems too strict to me, because it says that humans aren't generally intelligent, and that a system isn't AGI if it's not a world-class underwater basket weaver. I'd call that weak ASI.
Fatebook has worked nicely for me so far, and I think it'd be cool to use it more throughout the day. Some features I'd like to see:
- Currently tags seem to only be useful for filtering your track record. I'd like to be able to filter the forecast list by tag.
- Allow clicking and dragging the bar to modify probabilities.
- An option to input probabilities in formats besides percentages, such as odds ratios or bits (the conversions I have in mind are sketched just after this list).
- An option to resolve by a specific time, not just a date, plus an option for push notification reminders instead of emails. This would open the door to super short-term forecasts like "Will I solve this problem in the next hour?". I've made a substitute for this feature by making reminders in Google Keep with a link to the prediction.
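For concreteness, here's a minimal sketch of the conversions mentioned above. These are just the standard probability/odds/log-odds definitions, nothing Fatebook-specific, and the function names are mine:

```python
import math

# Standard probability <-> odds <-> bits (log-odds) conversions;
# nothing Fatebook-specific, just the formats I mean.

def prob_to_odds(p: float) -> float:
    return p / (1 - p)

def odds_to_prob(odds: float) -> float:
    return odds / (1 + odds)

def prob_to_bits(p: float) -> float:
    return math.log2(p / (1 - p))

def bits_to_prob(bits: float) -> float:
    return 1 / (1 + 2 ** -bits)

print(prob_to_odds(0.75))  # 3.0, i.e. 3:1 odds
print(prob_to_bits(0.75))  # ~1.58 bits
print(bits_to_prob(0.0))   # 0.5
```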
When I see an event with the stated purpose of opposing highly politically polarized things such as cancel culture and safe spaces, I imagine a bunch of people with shared politics repeating their beliefs to each other and snickering, and any beliefs that are actually highly controversial within that group are met with "No no, that's what they want you to think, you missed the point!" It seems possible to avoid that failure mode with a genuine truth-seeking culture, so I hope you succeeded.
It's been about 4 years. How do you feel about this now?
Bluesky has custom feeds that can bring in posts from all platforms that use the AT Protocol, but Bluesky is the only such platform right now. Most feeds I've found so far are simple keyword searches, which work nicely for having communities around certain topics, but I hope to see more sophisticated ones pop up.
While the broader message might be good, the study the video is about didn't replicate.
While most people have super flimsy defenses of meat-eating, that doesn't mean everyone does. Some people simply think it's quite unlikely that non-human animals are sentient (besides primates, maybe). For example, IIRC Eliezer Yudkowsky and Rob Bensinger's guess is that consciousness is highly contingent on factors such as general intelligence and sociality, or something like that.
I think the "5% chance is still too much" argument is convincing, but it begs similar questions such as "Are you really so confident that fetuses aren't sentient? How could you be so sure?"
I agree that origami AIs would still be intelligent if they implemented the same computations. I was trying to point at LLMs potentially being 'sphexish': having behaviors built from baked-in if-then patterns chained together that superficially resemble behavior designed on the fly for a purpose. I think this is related to what the "heuristic hypothesis" is getting at.
The paper "Auto-Regressive Next-Token Predictors are Universal Learners" made me a little more skeptical of attributing general reasoning ability to LLMs. They show that even linear predictive models, basically just linear regression, can technically perform any algorithm when used autoregressively like with chain-of-thought. The results aren't that mind-blowing but it made me wonder whether performing certain algorithms correctly with a scratchpad is as much evidence of intelligence as I thought.
Even if you know a certain market is a bubble, it's not exactly trivial to exploit if you don't know when it's going to burst, which prices will be affected, and to what degree. "The market can remain irrational longer than you can remain solvent" and all that.
Personally, while I think investment will decrease and some companies will die off, I doubt there's a true AI bubble: there are so many articles calling it a bubble that a pop couldn't possibly be a big surprise to the markets, so the hypothetical pop should already be priced out of existence. I think it's possible that some traders are waiting to pull the trigger on selling their shares once the market starts trending downwards, which would cause an abrupt drop and extra panic selling... but then it would correct itself pretty quickly if prices weren't actually inflated before the dip. (I'm not a financial expert, so don't take this too seriously.)
The fourth image is of the "Z machine", or the Z Pulsed Power Facility, which creates massive electromagnetic pulses for experiments. It's awesome.
I can second this. I recommend the Chrome extension Unhook, which lets you disable individual parts of YouTube, and Youtube-shorts block, which makes YouTube Shorts play like normal videos.
(Disclaimer: I'm not very knowledgeable about safety engineering or formal proofs)
I notice that whenever someone brings up "What if this unexpected thing happens?", you emphasize that it's about not causing accidents. I'm worried that it's hard to define exactly who caused an accident, for the same reason that deciding who's liable in the legal system is hard.
It seems quite easy to say that the person who sabotaged the stop sign was at fault for the accident. What if the saboteur poured oil on the road instead? Is it their fault if the car crashes from sliding on the oil? Okay, they're being malicious, so they're at fault. But what if the oil spill was an accident from a truck tipping over? Is it the truck driver's fault? What if the road was slippery because of ice? Nobody causes the weather, right? On the contrary: the city could've cleared and salted the roads earlier, but they didn't. In the counterfactual world where they did it earlier, the accident wouldn't have happened.
Okay, how about instead of backward chaining forever, we just check whether the system could have avoided the accident in the counterfactual where it took different actions. The problem is: even in the case where an adversarial stop sign leads to the car crashing, the system potentially could've avoided it. Stop signs are placed by humans somewhat arbitrarily using heuristics to determine if an intersection is risky. Shouldn't the system be able to tell that an intersection is risky, even when there truly isn't a stop sign there?
The paper tackles the problem by formalizing which behaviors and assumptions about the movement of cars and pedestrians are "reasonable" or "unreasonable", then proving within the toy model that only unreasonable behavior leads to crashes. Makes sense, but in the real world people don't just follow paths; they do all kinds of things that influence the world. Wouldn't the legal system be simple if we could just use equations like these to determine liability? I'm just not sure we should expect to eventually cover the long tail of potential situations well enough to make "provably safe" meaningful.
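To illustrate the flavor of the formalization (assuming the paper in question is Mobileye's RSS model, or something close to it): one of the core "reasonable behavior" rules is a minimum safe following distance under worst-case assumptions about braking. The parameter values below are made-up illustrative numbers, not the paper's.

```python
# Sketch of an RSS-style minimum safe longitudinal distance: the rear car
# behaves "reasonably" if it keeps at least this much distance to the car
# in front, under worst-case braking assumptions. Default values are my
# own illustrative guesses.

def rss_min_safe_distance(
    v_rear: float,              # rear car speed (m/s)
    v_front: float,             # front car speed (m/s)
    rho: float = 0.5,           # rear car's response time (s)
    a_accel_max: float = 3.0,   # max acceleration during the response time (m/s^2)
    b_rear_min: float = 4.0,    # rear car's guaranteed braking deceleration (m/s^2)
    b_front_max: float = 8.0,   # front car's worst-case braking deceleration (m/s^2)
) -> float:
    v_after_response = v_rear + rho * a_accel_max
    d = (
        v_rear * rho
        + 0.5 * a_accel_max * rho**2
        + v_after_response**2 / (2 * b_rear_min)
        - v_front**2 / (2 * b_front_max)
    )
    return max(0.0, d)

print(rss_min_safe_distance(v_rear=30.0, v_front=30.0))  # roughly highway speeds
```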
Also, I'm concerned because they don't seem to describe it as a toy model despite the extremely simplified set of things they're considering, and there might be questionable incentives at play to make it seem more sound than it is. From another document on their website:
> We believe the industry can come together to create a collaborative platform which provides a “safety seal” that first and foremost will create a safer product, but at the same time will protect OEMs from unreasonable and unwarranted liability assignment by regulators and society.
So they want the cars to be safe, but they also want to avoid liability by proving the accident was someone else's fault.
If random strangers start calling you "she", that implies you look feminine enough to be mistaken for a woman. I think most men would prefer to look masculine for many reasons: not being mistaken for a woman, being conventionally attractive, being assumed to have a 'manly' rather than 'effeminate' personality, looking your age, etc.
If you look obviously masculine, then being misgendered constantly would just be bewildering; surely something must be signaling that you use feminine pronouns.
If it's just people online misgendering you based on your writing, then that's less weird. But I think it still would bother some people for some of the reasons above.
I predict that implementing bots like these into social media platforms (in their current state) would be poorly received by the public. I think many people's reaction to Grok's probability estimate would be "Why should I believe this? How could Grok, or anyone, know that?" If it were a prediction market, the answer would be "because <economic and empirical explanation as to why you can trust the markets>". There's no equivalent answer for a new bot, besides "because our tests say it works" (making the full analysis visible might help). From these comments, it seems like it's not hard to elicit bad forecasts. Many people in the public would learn about this kind of forecasting for the first time from this, and if the estimates aren't super impressive, it'll leave a bad taste in their mouths. Meanwhile the media will likely deride it as "Big Tech wants you to trust their fallible chatbots as fortune-tellers now".
The images on this post appear to be broken.
If you go on Twitter/X and find the right people, you can get most of the benefits you list here. There are tastemakers that share and discuss intriguing papers, and researchers who post their own papers with explanation threads which are often more useful than the papers themselves. The researchers are usually available to answer questions about their work, and you can read the answers they've given already. You're also ahead of the game because preprints can appear way before conferences.
> It may be through extrapolating too much from your (first-person, subjective) experiences with objects that seemingly possess intrinsic, observer-independent properties, like the classical objects of everyday life.
Are you trying to say that quantum physics provides evidence that physical reality is subjective, with conscious observers having a fundamental role? Rob implicitly assumes the position advocated by The Quantum Physics Sequence, which argues that reality exists independently of observers and that quantum stuff doesn't suggest otherwise. It's just one of the many presuppositions he makes that's commonly shared on here. If that's your main objection, you should make that clear.
Another example in ML of a "non-conservative" optimization process: a common failure mode of GANs is mode collapse, wherein the generator and discriminator get stuck in a loop. The generator produces just one output that fools the discriminator, the discriminator memorizes it, the generator switches to another, until eventually they get back to the same output again.
In the rolling ball analogy, we could say that the ball rolls down into a divot, but the landscape flexes against the ball to raise it up again, and then the ball rolls into another divot, and so on.
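A minimal sketch of that rotational, non-conservative dynamic (my own toy, not from the post): simultaneous gradient descent/ascent on the two-player game f(x, y) = x·y spirals away from the equilibrium instead of settling into it.

```python
import numpy as np

# Toy two-player game: min_x max_y f(x, y) = x * y.
# The associated vector field (-y, x) is purely rotational (non-conservative),
# so simultaneous gradient descent/ascent orbits the equilibrium (0, 0) and,
# with a finite step size, spirals outward instead of converging.
x, y = 1.0, 1.0
lr = 0.1
for step in range(201):
    grad_x = y                                  # d f / d x
    grad_y = x                                  # d f / d y
    x, y = x - lr * grad_x, y + lr * grad_y     # simultaneous updates
    if step % 50 == 0:
        print(step, round(x, 3), round(y, 3), round(np.hypot(x, y), 3))
# The distance from the origin grows every step: the "ball" keeps circling
# the divot instead of coming to rest, like the generator and discriminator
# chasing each other.
```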
> So of course Robin Hanson offered polls on these so-called taboo topics. The ‘controversial’ positions got overwhelming support. The tenth question, whether demographic diversity (race, gender) in the workplace often leads to worse performance got affirmed 54%-17%, and the rest were a lot less close than that. Three were roughly 90%-1%. I realize Hanson has unusual followers, but the ‘taboo questions’ academics want to discuss? People largely agree on the answers, and the academics have decided saying that answer out loud is not permitted.
I understand criticizing the censorship of controversial research, but to suggest that these questions aren't actually controversial or taboo outside of academia is absurd to me. People largely agree that "Genetic differences explain non-trivial (10% or more) variance in race differences in intelligence test scores"? Even a politician in a deeply conservative district wouldn't dare say that and risk scaring off his constituents.
For those curious about the performance: eyeballing the technical report, it roughly performs at the level of Llama-3 70B. It seems to have an inferior parameters-to-performance ratio because it was only trained on 9 trillion tokens, while the Llama-3 models were trained on 15 trillion tokens. It's also trained with a 4k context length as opposed to Llama-3's 8k. Its primary purpose seems to be the synthetic data pipeline thing.
I encountered this while reading about an obscure estradiol ester, estradiol undecylate, used for hormone replacement therapy and treating prostate cancer. It's very useful because it has a super long half-life, but it was discontinued. I had to reread the article to be sure I understood: the standard dose, chosen arbitrarily in the first trials, was hundreds of times larger than necessary, leading to massive estrogen overdoses and severe side effects that killed many people through cardiovascular complications. And yet these insane doses were typical for decades and might've caused its discontinuation.
Although it has been over a decade, decent waterproof phone mounts now exist, too.
Thank you for writing this; it's by far the strongest argument for taking this problem seriously tailored to leftists that I've seen, and I'll be sharing it. Hopefully the frequent (probably unavoidable) references to EA don't turn them off too much.
Here's why determinism doesn't bother me. I hope I get it across.
Deterministic systems still have to be simulated to find out what happens. Take cellular automata, such as Conway's Game of Life or Wolfram's Rule 110. The result of all future steps is determined by the initial state, but we can't practically "skip ahead" because of what Wolfram calls 'computational irreducibility': despite the simplicity of the underlying program, there's no way to reduce the output to a calculation that's much cheaper than just simulating the whole thing. Same with a mathematical structure like the Mandelbrot set: its appearance is completely determined by the function, and yet we couldn't predict what we'd see until we computed it. In fact, all math is like this.
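To make that concrete, here's a minimal Rule 110 simulation (my own toy illustration): the only general way to find out what the pattern looks like after N steps is to run all N steps.

```python
# Minimal Rule 110 simulation with periodic boundaries: each cell's next
# state is looked up from the rule number using its (left, center, right)
# neighborhood as a 3-bit index.
RULE = 110
WIDTH, STEPS = 64, 32
cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # single live cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = [
        (RULE >> (cells[(i - 1) % WIDTH] * 4 + cells[i] * 2 + cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```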
What I'm getting at is that all mathematical truths are predetermined, and yet I doubt this gives you a sense that being a mathematician is pointless, because obviously these truths have to be discovered. As with the universe: the future is determined, and yet we, or even a hypothetical outsider with a massive computer, have to discover it.
Our position is better than that, though: we're not just looking at the structure of the universe from the outside, we're within it. We're part of what determines the future: it's impossible to calculate everything that happens in the future without calculating everything we humans do. The universe is determined by the process, and the process is us. Hence, our choices determine the future.
I disagree that the Reversal Curse demonstrates a fundamental lack of sophistication of knowledge on the model's part. As Neel Nanda explained, it's not surprising that current LLMs will store A -> B but not B -> A, as they're basically lookup tables, and this is definitely an important limitation. However, I think this is mainly due to a lack of computational depth. LLMs can perform that kind of deduction when the information is external: if you prompt one with who Tom Cruise's mom is, it can then answer who Mary Lee Pfeiffer's son is. If the LLM knew the first part already, you could just prompt it to answer the first question before prompting it with the second.

I suspect that a recurrent model like the Universal Transformer would be able to perform the A -> B to B -> A deduction internally, but for now LLMs must do multi-step computations like that externally with a chain-of-thought. In other words, they can deduce new things, just not in a single forward pass or during backpropagation. If that doesn't count, then all other demonstrations of multi-step reasoning in LLMs don't count either. This deduced knowledge is usually discarded, but we can make it permanent with retrieval or fine-tuning. So I think it's wrong to say that this entails a fundamental barrier to wielding new knowledge.
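Here's a sketch of the "external deduction" I mean; `ask` is a hypothetical stand-in for whatever chat-model call you like, not a real API.

```python
# Hypothetical sketch: the reversal happens across two forward passes,
# via the context, rather than inside the weights.

def ask(prompt: str) -> str:
    ...  # stand-in for any LLM/chat API call

# Step 1: elicit A -> B so it becomes external (part of the prompt).
step1 = ask("Who is Tom Cruise's mother?")

# Step 2: with A -> B now in the context, the B -> A question is easy.
step2 = ask(
    f"Given that Tom Cruise's mother is {step1}, "
    "who is Mary Lee Pfeiffer's son?"
)
# The deduced fact is normally discarded after the conversation, but
# retrieval or fine-tuning on step1's output could make it permanent.
```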