Comments

Comment by ZankerH on Templarrr's Shortform · 2024-04-15T11:53:04.636Z · LW · GW

Modern datacenter GPUs are basically the optimal compromise between this and still retaining enough general capacity to work with different architectures, training procedures, etc. The benefits of locking in a specific model at the hardware level would be extremely marginal compared to the downsides.

Comment by ZankerH on D&D.Sci(-fi): Colonizing the SuperHyperSphere · 2024-01-13T21:35:21.358Z · LW · GW

My inferences, in descending order of confidence:

(source: it was revealed to me by a neural net)

84559, 79685, 87081, 99819, 37309, 44746, 88815, 58152, 55500, 50377, 69067, 53130.

Comment by ZankerH on Improving the safety of AI evals · 2023-06-05T11:14:00.723Z · LW · GW

>of course you have to define what deception means in its programming.

That's categorically impossible with the class of models that are currently being worked on, as they have no inherent representation of "X is true". Therefore, they never engage in deliberate deception.

Comment by ZankerH on Some thought experiments on digital consciousness · 2023-04-03T21:16:03.460Z · LW · GW

>in order to mistreat 2, 3, or 4, you would have to first mistreat 1

What about deleting all evidence of 1 ever having happened, after it was recorded? 1 hasn't been mistreated, but depending on your assumptions re: consciousness, 2, 3, and 4 may have been.

Comment by ZankerH on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2023-04-02T07:18:06.237Z · LW · GW

That’s Security Through Obscurity. Also, even if we decided we’re suddenly ok with that, it obviously doesn’t scale well to superhuman agents.

Comment by ZankerH on The Prospect of an AI Winter · 2023-03-28T08:43:16.682Z · LW · GW

>Some day soon "self-driving" will refer to "driving by yourself", as opposed to "autonomous driving".

Interestingly enough, that's what it was used to mean the first time the term appeared in popular culture, in the film Demolition Man (1993).

Comment by ZankerH on Hello, Elua. · 2023-02-23T20:50:04.702Z · LW · GW

Any insufficiently human-supremacist AI is an S-risk for humanity. Non-human entities are only valued inasmuch as individual humans value them concretely. No abstract preferences over them should be permitted.

Comment by ZankerH on All AGI Safety questions welcome (especially basic ones) [~monthly thread] · 2023-01-26T22:35:08.401Z · LW · GW

We have no idea how to make a useful, agent-like general AI that wouldn't want to disable its off switch or otherwise prevent people from using it.

Comment by ZankerH on Has anyone increased their AGI timelines? · 2022-11-07T12:26:05.823Z · LW · GW

Global crackdown on the tech industry?

Comment by ZankerH on What if we solve AI Safety but no one cares · 2022-08-22T12:49:28.319Z · LW · GW


Comment by ZankerH on Alien Message Contest: Solution · 2022-07-13T09:04:10.419Z · LW · GW

>The aliens sent their message using a continuous transmission channel, like the frequency shift of a pulsar relative to its average or something like that. NASA measured this continuous value and stored the result as floating point data.

 

Then it makes no sense for them to publish it in binary without mentioning the encoding, or making it part of the puzzle to begin with.

Comment by ZankerH on D𝜋's Spiking Network · 2022-01-05T12:52:42.948Z · LW · GW

Your result is virtually identical to the first-ranking unambiguously permutation-invariant method (MLP 256-128-100). HOG+SVM does even better, but it's unclear to me whether that meets your criteria.

Could you be more precise about what kinds of algorithms you consider it fair to compare against, and why?
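"Permutation-invariant" here means the method's accuracy cannot depend on pixel ordering. For an MLP this is literal: permute the input features and the columns of the first-layer weights identically, and every output is unchanged. A minimal pure-Python sketch with hypothetical toy weights:

```python
import random

def mlp_layer(weights, x):
    """One dense layer (no bias, identity activation) as nested lists."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

random.seed(0)
n_in, n_out = 8, 4
x = [random.random() for _ in range(n_in)]
W = [[random.random() for _ in range(n_in)] for _ in range(n_out)]

# Apply one fixed pixel permutation to the input vector...
perm = list(range(n_in))
random.shuffle(perm)
x_perm = [x[p] for p in perm]
# ...and the same permutation to the columns of the first-layer weights.
W_perm = [[row[p] for p in perm] for row in W]

# The outputs match: nothing in an MLP can depend on pixel ordering.
out_a = mlp_layer(W, x)
out_b = mlp_layer(W_perm, x_perm)
print(all(abs(a - b) < 1e-12 for a, b in zip(out_a, out_b)))  # True
```

Convolutional methods fail this test: shuffling pixels destroys the local structure their filters rely on, which is what makes the comparison class ambiguous.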

Comment by ZankerH on D𝜋's Spiking Network · 2022-01-04T09:57:51.270Z · LW · GW

The issue with MNIST is that everything works on MNIST, even algorithms that utterly fail on a marginally more complicated task. It's a solved problem, and the fact that this algorithm solves it tells you nothing about it.

If the code is too rigid or too poorly performing to be tested on larger or different tasks, I suggest F-MNIST (Fashion-MNIST), which uses the exact same data format and has the same number of categories and data points, but is known to be far more indicative of the true performance of modern machine-learning approaches.

 

https://github.com/zalandoresearch/fashion-mnist
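"Exact same data format" means F-MNIST reuses MNIST's IDX file layout (big-endian magic number, dimension sizes, then raw pixel bytes), so any MNIST loader works unchanged. A minimal stdlib parser for an uncompressed IDX3 image buffer, exercised here on a synthetic buffer rather than the real (gzipped) files:

```python
import struct

def parse_idx_images(buf):
    """Parse an IDX3 (images) byte buffer: magic 0x00000803, big-endian.

    Returns a list of images, each a flat list of n_rows * n_cols pixel bytes.
    """
    magic, n_images, n_rows, n_cols = struct.unpack(">IIII", buf[:16])
    assert magic == 0x00000803, "not an IDX3 image file"
    size = n_rows * n_cols
    pixels = buf[16:16 + n_images * size]
    return [list(pixels[i * size:(i + 1) * size]) for i in range(n_images)]

# Synthetic buffer: two 2x2 "images" with pixel values 0..7.
buf = struct.pack(">IIII", 0x00000803, 2, 2, 2) + bytes(range(8))
images = parse_idx_images(buf)
print(images)  # [[0, 1, 2, 3], [4, 5, 6, 7]]
```

For the real files, wrap the read in `gzip.open` and feed the decompressed bytes to the same function.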

Comment by ZankerH on A Simple Introduction to Neural Networks · 2020-02-11T13:22:13.174Z · LW · GW

Square error has been used instead of absolute error in many diverse optimization problems in part because its derivative is proportional to the magnitude of the error, whereas the derivative of the absolute error is constant. When you're trying to solve a smooth optimization problem with gradient methods, you generally benefit from loss functions with a smooth gradient that tends towards zero along with the error.
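Concretely: for error e = prediction − target, the squared-error gradient is 2e and shrinks with the error, while the absolute-error gradient is ±1 no matter how close you are, so fixed-step gradient descent never settles. A quick numerical illustration:

```python
# Gradient of each loss with respect to the error e = prediction - target.
def grad_squared(e):
    return 2.0 * e                    # proportional to the error

def grad_absolute(e):
    return 1.0 if e > 0 else -1.0     # constant magnitude everywhere

def descend(grad, e0, lr=0.3, steps=40):
    """Plain gradient descent on the error itself, fixed step size."""
    e = e0
    for _ in range(steps):
        e -= lr * grad(e)
    return e

e_sq = descend(grad_squared, 4.0)     # shrinking steps: converges to ~0
e_abs = descend(grad_absolute, 4.0)   # fixed-size steps: oscillates near 0
print(abs(e_sq) < 1e-6, abs(e_abs) > 0.05)  # True True
```

With the squared loss the update is a contraction (e ← 0.4e here), while the absolute loss bounces between two values on either side of zero, which is exactly the non-smooth-gradient problem described above.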

Comment by ZankerH on Becoming stronger together · 2017-07-13T09:48:00.566Z · LW · GW

Sounds like you need to work on that time preference. Have you considered setting up an accountability system or self-blackmailing to make sure you're not having too much fun?

Comment by ZankerH on [deleted post] 2017-07-05T19:19:53.632Z

This is why anti-semitism exists.

Comment by ZankerH on Open thread, June 26 - July 2, 2017 · 2017-06-29T09:27:52.889Z · LW · GW

Yes, with the possible exception of moral patients with a reasonable likelihood of becoming moral agents in the future.

Comment by ZankerH on Open thread, June 26 - July 2, 2017 · 2017-06-27T12:17:54.876Z · LW · GW

Meat tastes nice, and I don't view animals as moral agents.

Comment by ZankerH on Open thread, June 5 - June 11, 2017 · 2017-06-05T17:46:32.654Z · LW · GW

Define "optimal". Optimizing for the utility function of min(my effort), I could misuse more company resources to run random search on.

Comment by ZankerH on Open thread, June 5 - June 11, 2017 · 2017-06-05T14:21:54.189Z · LW · GW

In which case, the best I can do is 10 lines:

MakeIntVar A
Inc A
Inc A
A=A+A
A=A*A
Inc A
A=A+A
A=A*A
Inc A
A=A+A
Comment by ZankerH on Open thread, June 5 - June 11, 2017 · 2017-06-05T13:20:58.677Z · LW · GW

Well, that does complicate things quite a bit. I threw those lines out of my algorithm generator and the frequency of valid programs generated dropped by ~4 orders of magnitude.

Comment by ZankerH on Open thread, June 5 - June 11, 2017 · 2017-06-05T13:04:37.035Z · LW · GW

Preliminary solution, based on random search:

MakeIntVar A
Inc A
Shl A, 5
Inc A
Inc A
A=A*A
Inc A
Shl A, 1

I've hit on a bunch of similar solutions, but 2 * (1 + 34^2) seems to be the common thread.
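Tracing the listing above (assuming A is initialised to 0): increment to 1, shift left by 5 to get 32, two increments to 34, square to 1156, increment to 1157, shift left by 1 to double it. A direct simulation of the same steps:

```python
A = 0
A += 1       # Inc A     -> 1
A <<= 5      # Shl A, 5  -> 32
A += 1       # Inc A     -> 33
A += 1       # Inc A     -> 34
A = A * A    # A=A*A     -> 1156
A += 1       # Inc A     -> 1157
A <<= 1      # Shl A, 1  -> 2314
print(A, A == 2 * (1 + 34 ** 2))  # 2314 True
```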

Comment by ZankerH on Open thread, June 5 - June 11, 2017 · 2017-06-05T11:44:27.268Z · LW · GW

Define "shortest". Least lines? Smallest file size? Least (characters * nats/char)?

Comment by ZankerH on Stupid Questions May 2017 · 2017-04-27T07:48:12.725Z · LW · GW

My mental model of what could possibly drive someone to EA is too poor to answer this with any degree of accuracy. Speaking for myself, I see no reason why such information should have any influence on future human actions.

Comment by ZankerH on The Ancient God Who Rules High School · 2017-04-06T08:53:29.459Z · LW · GW

I'd argue that this is not the case, since the vast majority of people who don't expect to be "clerks" still end up in similar positions.

Comment by ZankerH on Metrics to evaluate a Presidency · 2017-01-24T13:06:51.571Z · LW · GW

>Is there any reason to think that % in prison "should" be more equal?

Since we're talking about optimizing for "equality" between two fundamentally unequal things, why not?

Are you saying having the same amount of men and women in prison would be detrimental to the enforcement of gender equality? How does that follow?

Comment by ZankerH on Open thread, Jan. 16 - Jan. 22, 2016 · 2017-01-16T17:11:26.333Z · LW · GW

Having actually lived under a regime that purported to "change human behaviour to be more in line with reality", my prior for such an attempt being made in good faith to begin with is accordingly low.

Attempts to change society invariably result in selection pressures for effectiveness outmatching those for honesty and benevolence. In a couple of generations, the only people left in charge are the kind of people you definitely wouldn't want in charge, unless you're the kind of person nobody wants in charge in the first place.

>I'm thinking about locating specific centers of our brains and reducing certain activities which undoubtedly make us less aligned with reality and increase the activations of others.

This is the kind of thinking that, given a few years of unchecked power and primate group competition, leads to mass programs of rearranging people's brain centres with 15th century technology.

Why don't you spend some time instead thinking about how your forced-rationality programme is going to avoid the pitfalls all previous ones fell into: megalomania and genocide? And why are you so sure your beliefs are the final, correct ones to force on everyone through brain manipulation? If we had had the technology to enforce beliefs a few centuries ago, would you consider it a moral good to have frozen the progress of human thought at that point? Because that is essentially what you're proposing, from the point of view of every potential future in which you fail.

Comment by ZankerH on Open thread, Oct. 10 - Oct. 16, 2016 · 2016-10-10T15:56:07.333Z · LW · GW

Despair and dedicate your remaining lifespan to maximal hedonism.

Comment by ZankerH on Open Thread, Aug. 22 - 28, 2016 · 2016-08-26T22:26:13.369Z · LW · GW

>NRx is systematized hatred.

Am NRx, this assertion is false.

Comment by ZankerH on Superintelligence via whole brain emulation · 2016-08-17T13:29:39.425Z · LW · GW

>Even if it kill all humans, it will be one human which will survive.

Unless it self-modifies to the point where you're stretching any meaningful definition of "human".

>Even if his values will evolve it will be natural evolution of human values.

Again, for sufficiently broad definitions of "natural evolution".

>As most human beings don't like to be alone, he would create new friends that is human simulations. So even worst cases are not as bad as paper clip maximiser.

If we're to believe Hanson, the first (and possibly only) wave of human em templates will be the most introverted workaholics we can find.

Comment by ZankerH on Open Thread April 11 - April 17, 2016 · 2016-04-11T23:25:08.903Z · LW · GW

Two things:

  • all other points have a negative x coordinate, and the x range passed to the tessellation algorithm is [-124, -71]. You probably forgot the minus sign for that point's x coordinate.

  • as mentioned above, the algorithm fails to converge because the weights are poorly scaled. For a sensible graphical representation you will want to scale each weight into the range between one half of and the full nearest-point distance, but just to make it run, increase the division constant.
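The suggested rescaling can be sketched in pure Python: compute each point's nearest-neighbour distance d and map the raw weights linearly into [d/2, d], so every radius is large enough to influence the tessellation but small enough not to break it (function name and sample points here are illustrative, not from the original script):

```python
import math

def rescale_weights(points, weights):
    """Map raw weights into [d/2, d], where d is each point's
    nearest-neighbour distance, preserving their relative order."""
    w_min, w_max = min(weights), max(weights)
    span = (w_max - w_min) or 1.0
    out = []
    for (px, py), w in zip(points, weights):
        d = min(math.hypot(px - qx, py - qy)
                for qx, qy in points if (qx, qy) != (px, py))
        frac = (w - w_min) / span             # 0..1 across the weight range
        out.append(d / 2.0 + frac * d / 2.0)  # d/2 .. d
    return out

points = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
radii = rescale_weights(points, [10.0, 500.0, 90.0])
print(radii)  # each radius now lies within [d/2, d] for its point
```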

Comment by ZankerH on Open Thread April 11 - April 17, 2016 · 2016-04-11T22:02:38.994Z · LW · GW

The range is specified by the box argument to the compute_2d_voronoi function, in the form [[min_x, max_x], [min_y, max_y]]. Points and weights can be specified as 2d and 1d arrays, e.g. as np.array([[x1, y1], [x2, y2], [x3, y3], ..., [xn, yn]]) and np.array([w1, w2, w3, ..., wn]). Here's an example that takes specified points and also lets you plot point radii for debugging purposes: http://pastebin.com/h2fDLXRD

Comment by ZankerH on Open Thread April 11 - April 17, 2016 · 2016-04-11T12:59:23.880Z · LW · GW

You can use the pyvoro library to compute weighted 2d Voronoi diagrams, and the matplotlib library to display them. Here's a minimal working example with randomly generated data:

http://pastebin.com/wNaYAPvN

edit: It seems this library uses the radical Voronoi tessellation algorithm, where "weights" represent point radii. This means if you specify a point radius greater than the distance between it and the closest point, the tessellation will not function correctly, and as a corollary, if a point's radius is smaller than half of the minimal distance between it and a neighbour, the specified weight will not affect the tessellation process. Therefore, you need a secondary algorithm that takes the point weights and mutual distances into account to produce the desired result here.
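The "radical" tessellation is the power (Laguerre) diagram: each location belongs to the site minimising the power distance |x − p|² − r². That makes the failure mode above easy to see numerically: once one site's radius is much larger than the spacing to a neighbour, the neighbour can lose even its own location. A toy check with hypothetical sites (not the pastebin code):

```python
def power_cell_owner(x, y, sites, radii):
    """Index of the site minimising the power distance |x - p|^2 - r^2."""
    return min(range(len(sites)),
               key=lambda i: (x - sites[i][0]) ** 2
                           + (y - sites[i][1]) ** 2
                           - radii[i] ** 2)

sites = [(0.0, 0.0), (2.0, 0.0)]   # two sites, 2.0 apart

# Modest radii: each site still owns its own location.
assert power_cell_owner(0.0, 0.0, sites, [0.5, 0.5]) == 0
assert power_cell_owner(2.0, 0.0, sites, [0.5, 0.5]) == 1

# Radius larger than the site spacing: site 0 captures site 1's own
# location, and the tessellation degenerates as described above.
print(power_cell_owner(2.0, 0.0, sites, [2.5, 0.5]))  # 0
```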

Comment by ZankerH on Rationality Quotes April 2016 · 2016-04-07T12:10:28.047Z · LW · GW

A perfect example of a fully general counter-argument!

Comment by ZankerH on Black box knowledge · 2016-03-05T00:02:25.227Z · LW · GW

humanity not extinct or suffering -> FAI black box -> humanity still not extinct or suffering

Comment by ZankerH on Open thread, Nov. 16 - Nov. 22, 2015 · 2015-11-17T09:05:44.523Z · LW · GW

Donate most of your disposable income to MIRI.

Comment by ZankerH on Open thread, Nov. 16 - Nov. 22, 2015 · 2015-11-17T08:57:55.664Z · LW · GW

>In some sense it is voodoo (not very interpretable)

There is research in that direction, particularly in the field of convolutional networks for visual object recognition. It is possible to interpret what a neural net is looking for.

http://yosinski.com/deepvis

Comment by ZankerH on Open thread, Nov. 09 - Nov. 15, 2015 · 2015-11-10T23:19:15.589Z · LW · GW

*linear algebra computational graph engine with automatic gradient calculation

I really wonder how this will fit into the established deep learning software ecosystem - it has clear advantages over any single one of the large players (Theano, Torch, Caffe), but lacks the established community of any of them. As a researcher in the field, it's really frustrating that there is no standardisation and you essentially have to know a ton of software frameworks to effectively keep up with research, and I highly doubt Google entering the fray will change this.

https://xkcd.com/927/

Comment by ZankerH on Open thread, Oct. 26 - Nov. 01, 2015 · 2015-10-28T17:32:57.743Z · LW · GW

I need some calibration here. Is this satire?

Comment by ZankerH on [Link]: KIC 8462852, aka WTF star, "the most mysterious star in our galaxy", ETI candidate, etc. · 2015-10-20T20:05:19.334Z · LW · GW

Two things come to mind: providing energy, or highly directional interstellar communication.

Comment by ZankerH on [Link]: KIC 8462852, aka WTF star, "the most mysterious star in our galaxy", ETI candidate, etc. · 2015-10-20T09:34:40.648Z · LW · GW

Frankly, both of those suggestions sound about equally ridiculous to me. But then again, it may just be scope insensitivity because of how minute both likelihoods are to begin with.

Comment by ZankerH on Clothing is Hard (A Brief Adventure into my Inefficient Brain) · 2015-10-12T12:20:51.665Z · LW · GW

Imagining the orientations as a series of rotations along individual, orthonormal basis axes, you may run into the problem of gimbal lock. Try visualising the desired final result as an orientation represented by a quaternion.
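Gimbal lock is easy to demonstrate numerically: with Z-Y-X Euler angles at a pitch of 90°, the yaw and roll axes align, and only their difference survives in the final rotation matrix. A stdlib-only sketch (the R = Rz·Ry·Rx convention is an assumption; other conventions lock at different angles):

```python
import math

def rx(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def ry(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rz(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_zyx(yaw, pitch, roll):
    """Orientation from Z-Y-X Euler angles: R = Rz(yaw) Ry(pitch) Rx(roll)."""
    return matmul(rz(yaw), matmul(ry(pitch), rx(roll)))

def close(A, B, eps=1e-9):
    return all(abs(A[i][j] - B[i][j]) < eps
               for i in range(3) for j in range(3))

# At pitch = 90 degrees, only (roll - yaw) matters: two different
# (yaw, roll) pairs with the same difference give the same orientation,
# so one rotational degree of freedom has been lost.
locked = close(euler_zyx(0.3, math.pi / 2, 0.5),
               euler_zyx(0.1, math.pi / 2, 0.3))
print(locked)  # True
```

Quaternions represent the orientation as a single 4-component rotation rather than a chain of axis rotations, so no combination of values collapses two degrees of freedom this way.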

Comment by ZankerH on Simulations Map: what is the most probable type of the simulation in which we live? · 2015-10-11T12:30:47.371Z · LW · GW

How do you know it isn't? Everything off the Earth could be a very simple simulation just designed to emit the right kind of EM radiation to look as if it's there. Likewise, large chunks of dead matter could easily be optimized away until a human interacts with them in sufficient detail. Other than your observation about classical physics, all your points are observations "from the inside" that could be optimized around without degrading our perception of the universe.

Comment by ZankerH on Open thread, Oct. 5 - Oct. 11, 2015 · 2015-10-06T18:05:49.572Z · LW · GW

I definitely value it higher than the momentary high of getting to impose your values on others, which seems to be the opposite of the current US foreign policy.

Comment by ZankerH on Open thread, Oct. 5 - Oct. 11, 2015 · 2015-10-06T18:04:39.120Z · LW · GW

I disapprove.

Comment by ZankerH on Open thread, Oct. 5 - Oct. 11, 2015 · 2015-10-06T09:13:16.984Z · LW · GW

Speaking for myself, I find most of his contributions relevant and interesting.

Comment by ZankerH on October 2015 Media Thread · 2015-10-04T08:04:32.354Z · LW · GW

How severe would you rate the horror aspect? This seems interesting, but I absolutely couldn't handle Amnesia.

Comment by ZankerH on Stupid Questions September 2015 · 2015-09-05T13:00:50.810Z · LW · GW

There's usually an informal standard that's large enough to represent a significant boost to a police officer's income, but small enough that it's worth it for most people to pay rather than risk more fines or worse. There's not much negotiation involved.

Comment by ZankerH on Open Thread August 31 - September 6 · 2015-09-02T22:33:37.630Z · LW · GW

By giving me a persuasive reason to care about the subjective utility of people I can't ethnically identify with.

Comment by ZankerH on Open Thread - Aug 24 - Aug 30 · 2015-08-28T22:12:40.253Z · LW · GW

If you're rational and you're in South Africa, why are you still in South Africa? How much do you value your life over the trivial inconvenience of moving?