Comments

Comment by sludgepuddle on Critical review of Christiano's disagreements with Yudkowsky · 2023-12-27T23:29:03.981Z · LW · GW

I don't know whether augmentation is the right step after backing off or not, but I do know that the simpler "back off" is a much better message to send to humanity than that. More digestible, more likely to be heard, more likely to be understood, doesn't cause people to peg you as a rational tech bro, doesn't at all sound like the beginning of a sci-fi apocalypse plot line. I could go on.

Comment by sludgepuddle on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-14T04:50:39.888Z · LW · GW

I have to confess that I did some skimming, and going by ctrl-f it looks like I actually read right up to the first half of that paragraph before I got lazy. FWIW that was due to mental and time constraints, and nothing to do with the quality of the writing.

Comment by sludgepuddle on Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible · 2023-12-13T19:05:44.529Z · LW · GW

I'm not sure if this objection has been pointed out, or even whether it's valid, but I think the argument from approximate linearity is probably wrong, even if we're talking about editing embryos rather than adults. In machine learning we make the learning rate small enough that the map of the error over the parameter space appears linear. This means scaling the gradients way down, but my intuition is that it's minimizing the Euclidean distance covered by each step that's "doing the work" of making everything appear flat. If that's correct, then flipping 20,000 genes is a massive step through gene space compared to flipping just a few, and linearity would likely break down. I would expect you can beat sexual selection with methods like you describe, since we can use population studies to get a nice accurate estimate of that "gradient", but getting to IQ 900 or whatever seems a stretch, to put it mildly.
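To illustrate the intuition with a toy sketch (mine, not the post's; the network, data, and step sizes are all arbitrary): a first-order, i.e. linear, prediction of how the loss changes tracks reality well for small parameter steps and falls apart for large ones, even though the gradient itself is exact in both cases.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer network with squared-error loss on random data.
X = rng.normal(size=(64, 8))
y = rng.normal(size=(64, 1))
W1 = 0.5 * rng.normal(size=(8, 16))
W2 = 0.5 * rng.normal(size=(16, 1))

def loss_and_grads(W1, W2):
    h = np.tanh(X @ W1)
    err = h @ W2 - y
    L = float(np.mean(err ** 2))
    dpred = 2.0 * err / err.size          # dL/d(prediction)
    dW2 = h.T @ dpred
    dW1 = X.T @ ((dpred @ W2.T) * (1.0 - h ** 2))
    return L, dW1, dW2

L0, dW1, dW2 = loss_and_grads(W1, W2)
for step in (1e-3, 1e-1, 1e1):
    # First-order (linear) prediction of the loss change for a gradient
    # step of this size, versus what actually happens. For small steps
    # the two agree almost perfectly; for large steps they diverge.
    predicted = -step * (np.sum(dW1 ** 2) + np.sum(dW2 ** 2))
    actual = loss_and_grads(W1 - step * dW1, W2 - step * dW2)[0] - L0
    print(f"step={step:g}: linear prediction {predicted:+.5f}, actual {actual:+.5f}")
```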

Comment by sludgepuddle on Google Gemini Announced · 2023-12-06T17:11:02.513Z · LW · GW

A sequence of still frames is a video. If the model was trained on ordered sequences of still frames crammed into the context window, as the technical report claims, then it understands video natively. And it would be surprising if it didn't also have some capability for generating video. I'm not sure why audio/video generation isn't mentioned; perhaps the performance in these areas is not competitive with other models.

Comment by sludgepuddle on Conditional on living in a AI safety/alignment by default universe, what are the implications of this assumption being true? · 2023-07-17T22:26:09.872Z · LW · GW

We still have a hard problem, since misuse of AI, for example using it to secure permanent control over the world, would be extremely tempting. Under this assumption, outcomes where everyone doesn't die but which are as bad or worse become much more likely than they would be under its negation. I think the answer to avoiding non-awful futures looks similar: we agree globally to slow down before the tech could plausibly pose a big risk, which probably means right around yesterday. Except instead of just using the extra time to do scientific research, we also make the appropriate changes to our societies/governments.

Comment by sludgepuddle on Douglas Hofstadter changes his mind on Deep Learning & AI risk (June 2023)? · 2023-07-04T06:45:43.382Z · LW · GW

This seems to me the opposite of a low-bandwidth recursion. Having access to the entire context window of the previous iteration minus the first token, most of the relevant information encoded by the values of the nodes in that iteration could in principle be reconstructed, barring the unlikely event that the first token turns out to be extremely important. And it would be pretty weird if much of that information wasn't actually reconstructed, in some sense, in the current iteration. An inefficient way to get information from one iteration to the next, if that's your only goal, but plausibly very high bandwidth.

Comment by sludgepuddle on Self-Blinded Caffeine RCT · 2023-06-28T05:19:35.639Z · LW · GW

I'm surprised you were only able to predict whether you'd taken caffeine 80% of the time. 200 mg is not a heroic dose, but on little to no tolerance it should be quite noticeable.

Comment by sludgepuddle on Sama Says the Age of Giant AI Models is Already Over · 2023-04-18T02:41:44.521Z · LW · GW

And of course, if we believe efficiency is the way to go for the next few years, that should scare the shit out of us: it means that even putting all GPU manufacturers out of commission might not be enough to save us should it become obvious that a slowdown is needed.

Comment by sludgepuddle on Sama Says the Age of Giant AI Models is Already Over · 2023-04-18T02:38:21.229Z · LW · GW

Maybe efficiency improvements will rule temporarily, but surely once the low- and medium-hanging fruit is exhausted, parameter count will once again be ramped up. I would bet just about anything on that.

Comment by sludgepuddle on Scott Aaronson is joining OpenAI to work on AI safety · 2022-06-19T05:31:23.298Z · LW · GW

However good an idea it is, it's not as good an idea as Aaronson just taking a year off and doing it on his own time, collaborating and sharing whatever he deems appropriate with the greater community. Might be financially inconvenient but is definitely something he could swing.

Comment by sludgepuddle on Quick Thoughts on A.I. Governance · 2022-05-01T04:28:32.030Z · LW · GW

How do we deal with institutions that don't want to be governed, say, idk, the Chevron Corporation, North Korea, or the US military?

Comment by sludgepuddle on Convince me that humanity *isn’t* doomed by AGI · 2022-04-16T05:45:18.272Z · LW · GW

Well, I don't think it should be possible to convince a reasonable person at this point in time. But maybe some evidence that we might not be doomed: Yudkowsky's and others' ideas rest on some fairly plausible but complex assumptions. You'll notice in the recent debate threads where Eliezer is arguing for the inevitability of AI destroying us, he will often resort to something like, "well, that just doesn't fit with what I know about intelligences". At a certain point in these types of discussions you have to do some hand-waving. Even if it's really good hand-waving, if there's enough of it, there's a chance at least one piece is wrong enough to corrupt your conclusions. On the other hand, as he points out, we're not even really trying, and it's hard to see us doing so in time. So the hope that's left is mostly that the problem just won't be an issue, or won't be that hard, for some unknown reason. I actually think this is sort of likely: given how difficult the problem is to analyze, it's hard to have full trust in any conclusion.

Comment by sludgepuddle on Whole Brain Emulation: No Progress on C. elegans After 10 Years · 2021-10-02T06:03:59.234Z · LW · GW

While we're sitting around waiting for revolutionary imaging technology or whatever, why not try to make progress on the question of how much, and what type of, information we can obscure about a neural network and still approximately infer meaningful details of that network from behavior. For practice, start with ANNs and keep it simple. Take a smallish network which does something useful, record the outputs as it's doing its thing, then add just enough random noise to the parameters that the output deviates noticeably from the original. Now train the perturbed version to match the recorded data. What do we get here: did we recover the weights and biases almost exactly? Assuming yes, how far can this go before we might as well have trained the thing from scratch? Assuming success, does it work equally on different types and sizes of networks, and if not, what kind of scaling laws does this process obey? Assuming some level of success, move on to a harder problem: a sparse network, where this time we throw away everything but connectivity information and try to repeat the above. How about something biologically realistic, where we try to simulate the spiking neurons with groups of standard artificial ones... you get the drift. A minimal sketch of the first experiment follows below.
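Here's that sketch (my own, with arbitrary choices of architecture, noise scale, and optimizer; nothing here comes from the post): train a tiny network, record its outputs, perturb a copy's weights, retrain the copy on the recorded behavior alone, then measure how close it lands to the original parameters.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in for a "smallish network which does something useful": fit sin(x).
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
xs = torch.linspace(-3, 3, 256).unsqueeze(1)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    F.mse_loss(net(xs), torch.sin(xs)).backward()
    opt.step()

# Record the network's behavior, then perturb a copy's parameters with
# just enough noise that its outputs deviate noticeably.
with torch.no_grad():
    recorded = net(xs).clone()
original = [p.detach().clone() for p in net.parameters()]
noisy = copy.deepcopy(net)
with torch.no_grad():
    for p in noisy.parameters():
        p.add_(0.1 * torch.randn_like(p))

# Train the perturbed copy on the recorded behavior alone.
opt2 = torch.optim.Adam(noisy.parameters(), lr=1e-3)
for _ in range(5000):
    opt2.zero_grad()
    F.mse_loss(noisy(xs), recorded).backward()
    opt2.step()

# Did we land back near the original parameters, or just on a behavioral twin?
dist = sum((p - q).norm().item() for p, q in zip(noisy.parameters(), original))
print(f"total parameter distance from original: {dist:.4f}")
```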

Comment by sludgepuddle on Scott Alexander 2021 Predictions: Buy/Sell/Hold · 2021-05-10T04:14:55.117Z · LW · GW

> This is outright saying ETH is likely to outperform BTC, so this is Scott's biggest f*** you to the efficient market hypothesis yet. I'm going to say he's wrong and sell to 55%, since it's currently 0.046, and if it was real I'd consider hedging with ETH.

I'm curious what's behind this. Is Zvi some sort of Bitcoin maximalist? I tend to think that Bitcoin having a high value is hard to explain: it made sense when it was the only secure cryptocurrency out there, but now it's to a large degree a consequence of social forces rather than economic ones. Ether I can see value in, since it does a bunch of things, and there's at least an argument that it's best in class for all of those.

Comment by sludgepuddle on Mistake Versus Conflict Theory of Against Billionaire Philanthropy · 2019-08-01T18:03:00.707Z · LW · GW

So many times I've been reading your blog and I'm thinking to myself, "finally something I can post to leftist spaces to get them to trust Scott more", and then I run into one or two sentences that nix that idea. It seems to me like you've mostly given up on reaching the conflict theory left, for reasons that are obvious. I really wish you would keep trying though, they (we?) aren't as awful and dogmatic as they appear to be on the internet, nor is their philosophy as incompatible. For me, it's less a matter of actually adopting the conflict perspective, and more just taking it more seriously and making fun of it less.

Comment by sludgepuddle on The reward engineering problem · 2019-01-17T21:16:01.391Z · LW · GW

What about some form of indirect supervision, where we aim to find transcripts in which H has a decision of a particular hardness? A would ideally be trained starting with things that are very, very easy for H, with the hardness ramped up until A maxes out its abilities. Rather than imitating H, we use a generative technique to create fake transcripts, imitating both H and its environment. We can incorporate into our loss function the amount of time H spends on a particular decision, the reliability of that decision, and maybe some kind of complexity measure on the transcript, to find easier/harder situations which are of genuine importance to H.
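Roughly what I have in mind, as a toy sketch (every name, number, and function here is a hypothetical stand-in of my own, not anything from the post): generate transcripts, score their hardness from how long H deliberates and how reliable H's decision is, and ramp a hardness threshold as training proceeds.

```python
import random
from dataclasses import dataclass

# Hypothetical stand-in: a transcript carrying the two signals mentioned
# above (how long H deliberated, and how reliable H's decision was).
@dataclass
class Transcript:
    h_decision_seconds: float
    h_reliability: float  # in [0, 1]

def hardness(t: Transcript) -> float:
    # Longer deliberation and lower reliability => harder decision for H.
    return t.h_decision_seconds * (1.0 - t.h_reliability)

def sample_transcripts(n: int) -> list[Transcript]:
    # Stand-in for the generative model imitating H and H's environment.
    return [Transcript(random.uniform(0, 60), random.random()) for _ in range(n)]

def train_with_curriculum(rounds: int = 20) -> None:
    threshold = 0.5  # start with decisions that are very, very easy for H
    for r in range(rounds):
        batch = [t for t in sample_transcripts(256) if hardness(t) <= threshold]
        # ... A.update(batch) would go here; we only report the curriculum ...
        print(f"round {r}: threshold {threshold:.1f}, {len(batch)} transcripts")
        threshold *= 1.5  # ramp hardness until A maxes out its abilities

train_with_curriculum()
```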

Comment by sludgepuddle on The Problem With Trolley Problems · 2010-10-23T20:31:42.979Z · LW · GW

Isn't The Least Convenient Possible World directly relevant here? I'm surprised it hasn't been mentioned yet.

Comment by sludgepuddle on Slava! · 2010-10-05T02:52:43.245Z · LW · GW

Perhaps I'm just being dense, but I don't really get what Carl Sagan's look has to do with praise, or why you should find it disgusting.

Comment by sludgepuddle on Are mass hallucinations a real thing? · 2010-10-03T20:39:26.041Z · LW · GW

One thing I've personally witnessed is people claiming to have had the exact same vivid dream the night before. I'm talking stuff like playing Scrabble with Brad Pitt and former President Carter on the summit of Mount McKinley, so it seems unlikely that they were both prompted by the same recent event. Assuming those people weren't primed until after the fact, I would expect even stronger effects to be possible for those who have been.

Comment by sludgepuddle on Permission for mind uploading via online files · 2010-10-02T07:37:28.283Z · LW · GW

If you believe in Tegmark's multiverse, what's the point of uploading at all? You already inhabit an infinity of universes, all perfectly optimized for your happiness.

Personally I'm very inclined toward Tegmark's position and I have no idea how to answer the above question.

Comment by sludgepuddle on Can you enter the Matrix? The deliberate simulation of sensory input. · 2010-10-02T07:23:24.117Z · LW · GW

I am extremely poor at visualization; I can't even picture a line or a circle (I just tried), and I don't remember images from my dreams. Strangely, when I was a child, I was sometimes able to visualize, but only with extreme effort. More recently, I have experienced what I would call "brain movies", involuntary realistic visualizations, under the influence of opiates.

It seems I am fundamentally capable of visual thinking, but my brain is just not in the habit, though I wouldn't mind being able to summon the ability. It sounds kinda cool.

Comment by sludgepuddle on Automated theorem proving · 2010-10-02T05:45:29.972Z · LW · GW

There are definitely cases where there is little hope of proving "100% intended performance". For example, RSA only works as intended if factoring is hard. Most computer scientists strongly believe this is true, but this is not likely to be proven any time soon.
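As a concrete toy (my own illustration, not from the post; real moduli are ~2048 bits precisely so the factoring step below is infeasible): factoring the modulus immediately breaks RSA, because it lets you recompute the private exponent.

```python
from math import isqrt

n, e = 3233, 17              # toy public key (n = 61 * 53)
ciphertext = pow(42, e, n)   # encrypt the message 42

# The "attack": factor n by trial division. Hopeless for a 2048-bit n,
# and RSA working as intended rests on exactly that hopelessness.
p = next(f for f in range(2, isqrt(n) + 1) if n % f == 0)
q = n // p
d = pow(e, -1, (p - 1) * (q - 1))  # recover the private exponent

assert pow(ciphertext, d, n) == 42  # decrypted without ever holding the key
```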

Comment by sludgepuddle on Intelligence Amplification Open Thread · 2010-09-15T22:16:36.982Z · LW · GW

Low-dose ketamine has been shown to promote synaptogenesis in the prefrontal cortex (in rats). Link to abstract

It is currently being investigated as a potential antidepressant in humans, but based on anecdotal evidence, it seems likely that it's also a nootropic.

Comment by sludgepuddle on Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality · 2010-09-14T23:14:46.214Z · LW · GW

Alexander Grothendieck used the analogy of opening a nut to illuminate two different styles of doing mathematics. One way is to strike the nut repeatedly with a hammer and chisel.

I can illustrate the second approach with the same image of a nut to be opened. The first analogy that came to my mind is of immersing the nut in some softening liquid, and why not simply water? From time to time you rub so the liquid penetrates better, and otherwise you let time pass. The shell becomes more flexible through weeks and months—when the time is ripe, hand pressure is enough, the shell opens like a perfectly ripened avocado!