Comments

Comment by hazel (sharps030) on If you weren't such an idiot... · 2024-03-03T11:38:23.288Z · LW · GW

I've been well served by Bitwarden: https://bitwarden.com/

It has a dark theme, apps for everything (including a Linux command-line client), the Firefox extension autofills with a keyboard shortcut, and I don't recall it having any large data breaches.

Comment by hazel (sharps030) on Killing Socrates · 2023-04-13T10:13:16.690Z · LW · GW

Part of the value of reddit-style votes as a community moderation feature is that using them is easy. Beware Trivial Inconveniences and all that. I think that having to explain every downvote would lead to me contributing to community moderation efforts less, would lead to dogpiling on people who already have far more refutation than they deserve, would lead to zero-effort 'just so I can downvote this' drive-by comments, and generally would make it far easier for absolute nonsense to go unchallenged. 

If I came across obvious bot-spam in the middle of the comments, neither downvoted nor deleted and I couldn't downvote without writing a comment... I expect that 80% of the time I'd just close the tab (and that remaining 20% is only because I have a social media addiction problem). 

Comment by hazel (sharps030) on GPTs are Predictors, not Imitators · 2023-04-09T07:14:59.731Z · LW · GW

To solve this problem you would need a very large dataset of mistakes made by LLMs, and their true continuations. [...] This dataset is unlikely to ever exist, given that its size would need to be many times bigger than the entire internet. 

I had assumed that creating that dataset was a major reason for doing a public release of ChatGPT. "Was this a good response?" [thumb-up] / [thumb-down] -> dataset -> more RLHF. Right?
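
A minimal sketch of the loop I mean (mine, not anything OpenAI has published; the record format and names are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class FeedbackRecord:
    prompt: str
    response: str
    thumbs_up: bool  # the [thumb-up] / [thumb-down] click

def to_reward_model_dataset(records):
    # Keep rated (prompt, response) pairs with a 1.0 / 0.0 label
    # that a reward model could be trained on for further RLHF.
    return [(r.prompt, r.response, 1.0 if r.thumbs_up else 0.0) for r in records]

logs = [FeedbackRecord("2+2?", "4", True), FeedbackRecord("2+2?", "5", False)]
print(to_reward_model_dataset(logs))
```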

Comment by hazel (sharps030) on GPT-4 · 2023-03-15T11:01:07.030Z · LW · GW

Meaning it literally showed zero difference in half the tests? Does that make sense?

Comment by hazel (sharps030) on GPT-4 · 2023-03-15T10:38:19.173Z · LW · GW

Codeforces is not marked as having a GPT-4 measurement on this chart. Yes, it's a somewhat confusing chart.

Comment by hazel (sharps030) on GPT-4 · 2023-03-15T10:37:16.644Z · LW · GW

Green bars are GPT-4. Blue bars are not. I suspect they just didn't retest everything.

Comment by hazel (sharps030) on ARC tests to see if GPT-4 can escape human control; GPT-4 failed to do so · 2023-03-15T02:26:42.749Z · LW · GW

So... they held the door open to see if it'd escape or not? I predict this testing method may go poorly with more capable models, to put it lightly.

And then OpenAI deployed a more capable version than was tested!

They also did not have access to the final version of the model that we deployed. The final version has capability improvements relevant to some of the factors that limited the earlier models power-seeking abilities, such as longer context length, and improved problem-solving abilities as in some cases we've observed.

This defeats the entire point of testing. 

I am slightly worried that posts like veedrac's Optimality is the Tiger may have given them ideas. "Hey, if you run it in this specific way, an LLM might become an agent! If it gives you code for recursively calling itself, don't run it"... so they write that code themselves and run it.

I really don't know how to feel about this. On one hand, this is taking ideas around alignment seriously and testing for them, right? On the other hand, I wonder what the testers would have done if the answer was "yep, it's dangerously spreading and increasing its capabilities oh wait no nevermind it's stopped that and looks fine now".

Comment by hazel (sharps030) on Scott Aaronson on "Reform AI Alignment" · 2022-11-21T23:19:35.910Z · LW · GW

At the time, I took AlphaGo as a sign that Eliezer was more correct than Hanson w/r/t the whole AI-go-FOOM debate. I realize that's an old example which predates the AI successes of the last four years, but I updated pretty heavily on it at the time.

Comment by hazel (sharps030) on Consume fiction wisely · 2022-10-26T11:40:12.320Z · LW · GW

I'm going to suggest reading widely as another solution. I think it's dangerous to focus too much on one specific subgenre, or certain authors, or books from only one source (your library and Amazon do, in fact, filter your content for you, if not very tightly).

Comment by hazel (sharps030) on Consume fiction wisely · 2022-10-26T11:36:12.178Z · LW · GW

For me, the benefit of studying tropes is that it makes it easy to talk about the ways in which stories are story-like. In fact, to discuss what stories are like, this post used several links to tropes (specifically ones known to be wrong/misleading/inapplicable to reality). 

I think a few deep binges on TV Tropes for media I liked really helped me get a lot better at media analysis very, very quickly. (Along with a certain anime analysis blog that mixed obvious and insightful cinematography commentary on framing, color, and lighting with more abstract analysis of mood, theme, character, and purpose -- both illustrated with links to screenshots, using media that was familiar and interesting to me.)

And putting word-handles on common story features makes it easy to spot them turning up in places they shouldn't -- like in your thinking about real-life situations.

Comment by hazel (sharps030) on Why I think strong general AI is coming soon · 2022-10-03T02:55:32.074Z · LW · GW

However, you decided to define "intelligence" as "stuff like complex problem solving that's useful for achieving goals" which means that intentionality, consciousness, etc. is unconnected to it 

This is the relevant definition for AI notkilleveryoneism. 

Comment by hazel (sharps030) on AGI Safety FAQ / all-dumb-questions-allowed thread · 2022-06-09T04:12:53.258Z · LW · GW

There has to be some limits

Those limits don't have to be nearby, or look 'reasonable', or be inside what you can imagine. 

Part of the implicit background for the general AI safety argument is a sense for how minds could be, and that the space of possible minds is large and unaccountably alien. Eliezer spent some time trying to communicate this in the sequences: https://www.lesswrong.com/posts/tnWRXkcDi5Tw9rzXw/the-design-space-of-minds-in-general, https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message. 

Comment by hazel (sharps030) on The Rationalists of the 1950s (and before) also called themselves “Rationalists” · 2022-03-16T03:55:21.455Z · LW · GW

Early LessWrong was atheist, but everything on the internet around the time LW was taking off had a position in that debate. "...the defining feature of this period wasn’t just that there were a lot of atheism-focused things. It was how the religious-vs-atheist conflict subtly bled into everything." Or less subtly, in this case. 

I see it just as a product of the times. I certainly found the anti-theist content in Rationality: A to Z to be slightly jarring on a re-read -- on other topics, Eliezer is careful not to bring in the political issues of the day that could emotionally overshadow the more subtle points he's making about thought in general -- but he'll drop in extremely anti-religion jabs despite that. To me, that's just part of reading history.

Comment by hazel (sharps030) on Strong Evidence is Common · 2021-03-20T13:48:19.925Z · LW · GW

Tying back to an example in the post: if we're using ASCII encoding, then the string "Mark Xu" takes up 49 bits (7 characters at 7 bits each). It's quite compressible, but that still leaves more than enough room for 24 bits of evidence to be completely reasonable.
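
The arithmetic, as a quick sketch (just restating the numbers above in Python):

```python
name = "Mark Xu"
ascii_bits = 7 * len(name)   # 7 characters * 7-bit ASCII = 49 bits
print(ascii_bits)            # 49

# 24 bits of evidence ~ singling out one possibility in 2**24
print(f"{2 ** 24:,}")        # 16,777,216
```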

This paper suggests that spoken language consistently carries ~39 bits/second.

https://advances.sciencemag.org/content/5/9/eaaw2594

Comment by hazel (sharps030) on The LessWrong 2018 Book is Available for Pre-order · 2020-12-03T02:22:42.510Z · LW · GW

Where does the money go? Is it being sold at cost, or is there surplus?

If money is being made, will it support: 1. The authors? 2. LW hosting costs? 3. LW-adjacent charities like MIRI? 4. The editors/compilers/LW moderators?

EDIT: Was answered over on /r/slatestarcodex. tl;dr: one print run has been paid for at a loss; any (unlikely) profits go to supporting the LessWrong nonprofit organization.

Comment by hazel (sharps030) on The central limit theorem in terms of convolutions · 2020-11-23T09:19:05.014Z · LW · GW

If F(f) and F(g) are the Fourier transforms of f and g, then F(f∗g) = F(f)·F(g). This is yet another case where you don't actually have to compute the convolution to get the thing. I don't actually use Fourier transforms or have any intuition about them, but for those who do, maybe this is useful?

It's amazingly useful in signal processing, where you often care about the frequency domain because it's perceptually significant (e.g. perceived pitch & timbre of a sound = fundamental frequency of the air vibrations & the other frequencies present. Sounds too fizzy or harsh? Lowpass filter it. Too dull or muffled? Boost the higher frequencies, etc.). Although it's used the other way around -- by doing the convolution, you don't have to compute the Fourier transforms.

If you have a signal and want to change its frequency distribution, what you do is construct a 'short' (finite-support) function -- the convolution kernel -- whose frequency-domain transform would multiply to give the kind of frequency response you're after. Then you can convolve them in the time domain, and you don't need to compute the Fourier/inverse Fourier transforms at all.
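
A crude sketch of that equivalence in Python/numpy (my example, assuming an 8-tap moving average as the 'short' lowpass kernel):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)   # some signal
h = np.ones(8) / 8              # short kernel: moving average = crude lowpass

y_time = np.convolve(x, h)      # filtering done entirely in the time domain

# Equivalent route through the frequency domain: multiply the transforms.
n = len(x) + len(h) - 1
y_freq = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

print(np.allclose(y_time, y_freq))  # True
```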

For example, in audio processing: many systems (IIRC, linear time-invariant ones) can be 'sampled' by taking an impulse response -- the output of the system when the input is an impulse (like the Dirac delta function, which is ∞ at 0 but 0 elsewhere -- or as close as you can physically construct). This impulse response can then impart the 'character' of the system via convolution -- this is how convolution reverbs add, as an audio effect, the sound of specific real-life resonant spaces to whatever audio signal you feed them ("This is your voice in the Notre Dame cathedral" style). There are also guitar amp/cab sims that work this way. This works because the Dirac delta is the identity under (continuous) convolution (and because these real physical things, like sounds interacting with spaces and speakers, are linear and time-invariant).
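
The discrete analogue of that identity, as a tiny sketch (mine): a unit impulse leaves any signal unchanged under convolution.

```python
import numpy as np

x = np.array([0.3, -1.2, 0.7, 2.0])
impulse = np.array([1.0])                        # discrete unit impulse
print(np.allclose(np.convolve(x, impulse), x))   # True: identity under convolution
```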

It also comes up in image processing. You can do a lot of basic image processing with a 2D discrete convolution kernel: blurs/anti-aliasing/lowpass, sharpening/highpass, and edge 'detection' can all be implemented this way.
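
A minimal numpy/scipy sketch of such kernels (my example, using a random array as a stand-in grayscale image):

```python
import numpy as np
from scipy.signal import convolve2d

img = np.random.rand(64, 64)        # stand-in grayscale image

box_blur = np.ones((3, 3)) / 9      # lowpass / blur / anti-aliasing
sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]])  # highpass-ish sharpening
laplace = np.array([[ 0,  1,  0],
                    [ 1, -4,  1],
                    [ 0,  1,  0]])  # edge 'detection'

blurred = convolve2d(img, box_blur, mode="same", boundary="symm")
sharpened = convolve2d(img, sharpen, mode="same", boundary="symm")
edges = convolve2d(img, laplace, mode="same", boundary="symm")
```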

Comment by hazel (sharps030) on Moral uncertainty: What kind of 'should' is involved? · 2020-01-14T12:00:01.357Z · LW · GW

In my experience, stating things outright and giving examples helps with communication. You might not need a definition, but the relevant question is whether it would improve the text for other readers.

Comment by hazel (sharps030) on Blackmail · 2019-02-20T12:58:55.330Z · LW · GW

"It's obviously bad. Think about it and you'll notice that. I could write a YA dystopian novel about how the consequences are bad." <-- isn't an argument, at all. It assumes bad consequences rather than demonstrating or explaining how the consequences would be bad. That section is there for other reasons, partially (I think?) to explain Zvi's emotional state and why he wrote the article, and why it has a certain tone.

Comment by hazel (sharps030) on Blackmail · 2019-02-20T12:57:45.764Z · LW · GW
I am not sure why you pick on blackmail specifically

This is in response to other writers, esp. Robin Hanson. That's why.

Comment by hazel (sharps030) on Blackmail · 2019-02-20T12:52:41.240Z · LW · GW
This only looks at the effects on Alice and on Bob, as a simplification. But with blackmail "carrying out the threat" means telling other people information about Bob, and that is often useful for those other people.

When the public interest motivates the release of private info, it's called 'whistleblowing' and is* legally protected and considered far more moral than blackmail. I think that contrast is helpful to understanding why that's not enough to make blackmail moral.

*in some jurisdictions, restrictions may apply, see your local legal code for a full list of terms & conditions.

I think you're right that it's not trivially negative-sum, because it can have positive outcomes for third parties. I still expect a world of legal blackmail to be worse.

Comment by hazel (sharps030) on Open Thread August 2018 · 2018-08-05T03:50:30.249Z · LW · GW

If you're throwing your AI into a perfect inescapable hole to die and never again interacting with it, then what exact code you're running will never matter. If you observe it though, then it can affect you. That's an output.

What are you planning to do with the filtered-in 'friendly' AIs? Run them in a different context? Trust them with access to resources? Then an unfriendly AI can propose you as a plausible hypothesis, predict your actions, and fake being friendly. It's just got to consider that escape might be reachable, or that there might be things it doesn't know, or that sleeping for a few centuries and seeing if anything happens is an option-maximizing alternative to halting, etc. I don't know what you're selecting for -- suicidality, willingness to give up, halting within n operations -- but it's not friendliness.

Comment by hazel (sharps030) on 5 general voting pathologies: lesser names of Moloch · 2018-04-14T04:59:53.142Z · LW · GW

https://www.lesserwrong.com/posts/D6trAzh6DApKPhbv4/a-voting-theory-primer-for-rationalists

The first link in this post should go ^ here to your voting theory primer. Instead, for me, it links here:

https://www.lesserwrong.com/posts/JewWDfLoxgFtJhNct/utility-versus-reward-function-partial-equivalence