Posts

What will quantum computers be used for? 2020-01-01T19:33:16.838Z · score: 11 (4 votes)
Anti-counterfeiting Ink - an alternative way of combating oil theft? 2019-10-19T23:04:59.069Z · score: -6 (5 votes)
If you had to pick one thing you've read that changed the course of your life, what would it be? 2019-09-14T17:50:45.292Z · score: 12 (8 votes)
Simulation Argument: Why aren't ancestor simulations outnumbered by transhumans? 2019-08-22T09:07:07.533Z · score: 9 (8 votes)

Comments

Comment by maximkazhenkov on New article from Oren Etzioni · 2020-02-27T02:02:50.809Z · score: 1 (1 votes) · LW · GW
From the sound of it, they stayed in the lane mainly via some hand-coded image processing which looked for a yellow/white strip surrounded by darker color.

That is what I heard about other research groups too, but it's a bit surprising coming from Tesla. I'd imagine things have changed dramatically since then: this video, albeit insufficient as any sort of safety validation, still demonstrates they're way beyond just following lane markings. According to Musk, they're pushing hard for end-to-end ML solutions. It would make sense, given the custom hardware they've developed and the data leverage they have with their massive fleet, combined with over-the-air updates.

Comment by maximkazhenkov on What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world? · 2020-02-27T00:07:34.931Z · score: 1 (1 votes) · LW · GW

I agree - the fatality rate is just much too low to affect anything long term.

Comment by maximkazhenkov on New article from Oren Etzioni · 2020-02-26T23:49:43.273Z · score: 1 (1 votes) · LW · GW

What makes driving on surface streets so different from driving on highways that current state-of-the-art ML techniques wouldn't be able to handle it with slightly more data and compute?

Unlike natural language processing, AI doctors or household robots, driving seems like a very limited non-AGI-complete task to me because a self-driving car never truly interacts with humans or objects beyond avoiding hitting them.

we also need to notice objects which are going to move into the lane; we need trajectory tracking and forecasting. And we need the trajectory-tracker to be robust to the object classification changing (or just being wrong altogether), or sometimes confusing which object is which across timesteps, or reflections or pictures or moving lights, or missing/unreliable lane markers, or things in the world zig-zagging around on strange trajectories, or etc.

I would claim all of the above are also required for driving on the highway.

Comment by maximkazhenkov on New article from Oren Etzioni · 2020-02-26T18:58:09.290Z · score: 1 (1 votes) · LW · GW

Is there evidence for this claim? I've only ever seen evidence to the contrary.

Comment by maximkazhenkov on Quarantine Preparations · 2020-02-26T16:41:25.702Z · score: 4 (2 votes) · LW · GW

How would UDT solve anthropic reasoning? Any links?

Comment by maximkazhenkov on New article from Oren Etzioni · 2020-02-26T16:08:01.538Z · score: 1 (1 votes) · LW · GW
A self-driving car which cannot correctly handle unrecognized objects is not safe

But so what? People are not safe; they have slower reaction time than machines, especially when intoxicated. For every example of a self-driving car causing an accident due to object recognition failure, I can point to a person causing an accident due to reaction time failure or attention failure. Why give preference to human failure modes?

You can always come up with arbitrarily contrived edge cases where a narrow AI requires robust value alignment like an AGI (e.g. this ridiculous trolley problem) to behave correctly, and thereby reduce any real-world narrow AI application to an AGI problem. Thing is, one day China is going to say "Fuck it, we need to get ahead on this AI issue" and just let existing self-driving cars loose onto its streets; the rest gets sorted out by the insurance market and incremental tech improvements. That's my prediction of how we'll transition into self-driving.

Comment by maximkazhenkov on On unfixably unsafe AGI architectures · 2020-02-20T13:08:17.753Z · score: 1 (1 votes) · LW · GW
Whatever MIRI is doing in their undisclosed research program (involving Haskell I guess)

Uh... Haskell? I'm intrigued now.

Comment by maximkazhenkov on Moral public goods · 2020-01-27T13:25:02.130Z · score: 1 (1 votes) · LW · GW

The same way taxation is a coordination mechanism.

Taxation = social arrangement

Fines/prison sentence for tax evasion = enforcement mechanism

Charitable donation = social arrangement

Higher taxation = enforcement mechanism

Comment by maximkazhenkov on Moral public goods · 2020-01-27T11:02:50.169Z · score: 1 (1 votes) · LW · GW
It's not a coordination mechanism; it doesn't allow people to commit to giving money if and only if everyone else also gives money, as a tax does. Even if giving money was free (untaxed), the OP's coordination problem would remain.

Actually it is, just a bit contrived. The penalty for violating the "commitment" is having to pay extra taxes (lose the tax break). Just a matter of labels.

Comment by maximkazhenkov on Moral public goods · 2020-01-27T00:31:22.104Z · score: 2 (2 votes) · LW · GW

Maybe because there is value left on the table? You could apply the same logic to any new idea: "If it was so great, someone would have already thought of it and exploited it, so it clearly can't be that great."

Also, I would claim the charity tax deduction already is such a coordination mechanism, allowing the rich to engage in philanthropy in ways they believe to be more effective than taxation (e.g. they would like more of their donations going towards foreign aid).

Comment by maximkazhenkov on Moral public goods · 2020-01-26T20:24:39.795Z · score: 6 (3 votes) · LW · GW
Some of the modern super-rich do generate disproportionately high value, e.g. from high-risk bets they made to build innovative companies. But most of their income still comes from capital and owning the tools of production and all that (citation required). And this influences the moral calculus for a lot of people. The reason for taking some of their property (income) is not just that most people want to do it or that someone else would enjoy it much more, it's that it shouldn't be theirs to begin with.

This isn't a post about social justice and wealth inequality in general. The moral calculus from the point of view of most people isn't the point of contention here, it's the point of view of the rich that's being discussed.

Comment by maximkazhenkov on Hedonic asymmetries · 2020-01-26T19:13:41.070Z · score: 3 (3 votes) · LW · GW

Ok I see, I was just confused by the wording "given some more time". I've become less optimistic over time about how long this disequilibrium will last given how quickly certain religious communities are growing with the explicit goal of outbreeding the rest of us.

Comment by maximkazhenkov on Hedonic asymmetries · 2020-01-26T17:58:11.307Z · score: 2 (2 votes) · LW · GW
And evolution doesn't seem to be likely to "fix" that given some more time.

Why would you suppose that?

We don't behave in a "Malthusian" way, investing all extra resources in increasing the number or relative proportion of our descendants in the next generation. Even though we definitely could, since population grows geometrically. It's hard to have more than 10 children, but if every descendant of yours has 10 children as well, you can spend even the world's biggest fortune. And yet such clannish behavior is not a common theme of any history I've read; people prefer to get (almost unboundedly) richer instead, and spend those riches on luxuries, not children.

Isn't that just due to the rapid advance of technology creating a world in disequilibrium? In the ancestral environment of pre-agricultural societies, the behaviors you describe lined up with maximizing inclusive genetic fitness pretty well; any recorded history you can read is too new and too short to reflect what evolution was selecting for.

Comment by maximkazhenkov on Material Goods as an Abundant Resource · 2020-01-26T06:36:34.437Z · score: 3 (2 votes) · LW · GW

I'm no homo economicus and don't intend to become one; give me a duplicator and I shall drop out of the economy.

Comment by maximkazhenkov on Go F*** Someone · 2020-01-15T22:17:28.471Z · score: 11 (13 votes) · LW · GW

This is self-help-books-level advice

Comment by maximkazhenkov on Is backwards causation necessarily absurd? · 2020-01-15T18:37:25.962Z · score: 1 (1 votes) · LW · GW

Yes, but the direction of causality is very much preserved. The notion of present is not necessary in a directed acyclic graph.

Comment by maximkazhenkov on Predictors exist: CDT going bonkers... forever · 2020-01-15T18:31:23.135Z · score: 2 (2 votes) · LW · GW

But considering that randomness as an antidote to perfect predictions is ubiquitously available in this universe, it's hard to see what practical implications these CDT failures in highly contrived thought experiments have.

Comment by maximkazhenkov on What long term good futures are possible. (Other than FAI)? · 2020-01-12T22:30:51.593Z · score: -8 (8 votes) · LW · GW

No.

Comment by maximkazhenkov on Plausible A.I. Takeoff Scenario Short Story · 2020-01-01T22:23:01.049Z · score: 3 (3 votes) · LW · GW
Any good idea can be enough for a successful start-up. AGI is extremely narrow compared to the entire space of good ideas.

But we're not comparing the probability of "a successful start-up will be created" vs. the probability of "an AGI will be created" in the next x years; we're comparing the probability of "an AGI will be created by a large organization" vs. the probability of "an AGI will be created by a single person on his laptop", given that an AGI will be created.

Without the benefit of hindsight, are PageRank and reusable rockets any more obvious than the hypothesized AGI key insight? If someone with no previous experience in aeronautical engineering - a highly technical field - can out-innovate established organizations like Lockheed Martin, why wouldn't the same hold true for AGI? If anything, the theoretical foundations of AGI are less well-established and the entry barrier is lower by comparison.

Comment by maximkazhenkov on Plausible A.I. Takeoff Scenario Short Story · 2020-01-01T20:04:37.574Z · score: 2 (2 votes) · LW · GW
I actually agree that the "last key insight" is somewhat plausible, but I think even if we assume that, it remains quite unlikely that an independent person has this insight rather than the people who are being paid a ton of money to work on this stuff all day.

If that were true, start-ups wouldn't be a thing, we'd all be using Yahoo Search and Lockheed Martin would be developing the first commercially successful reusable rocket. Hell, it might even make sense to switch to planned economy outright then.

Especially because even in the insight-model, there could still be some amount of details that need to be figured out after the insight, which might only take a couple of weeks for OpenAI but probably longer for a single person.

But why does it matter? Would screaming at the top of your lungs about your new discovery (or the modern equivalent, publishing a research paper on the internet) be the first thing someone who has just gained the key insight does? It would certainly be unwise.

Comment by maximkazhenkov on Phage therapy in a post-antibiotics world · 2019-12-30T06:30:43.648Z · score: 1 (1 votes) · LW · GW

Seems unlikely as phages can evolve just as fast.

Comment by maximkazhenkov on What would happen if all the water on Earth were accumulated into spheres & drop on the surface? · 2019-12-08T11:23:19.314Z · score: 14 (4 votes) · LW · GW

These follow-up questions pertain to a dynamic context, and I'm afraid I'm not equipped to answer them. Moreover, I would claim that not even Randall Munroe himself would be able to answer them, nor anyone without a supercomputer and a team of physicists at their disposal.

I bought the What If book myself and loved every chapter of it. But if you look closely, you will notice that basically every analysis in that book was made in a static context, or in a dynamic one with ridiculously simple solutions (i.e. linear or exponential). Even exotic topics like neutron star matter and supernova neutrinos can be analysed with ease in a static context; it's just a matter of typing large numbers into a calculator. But as soon as dynamics is involved, even mundane things like Earthly weather or air flow over ailerons are going to require a supercomputer.

It doesn't help to analogize the problem with more familiar scenarios, either. Quantity has a quality of its own, as Stalin famously said. Things like the square-cube law make big things behave very differently from small things, even if they're made of the same material or undergoing the same basic process. Nuclear explosions and supernovas are not hard to understand because of the extreme energies involved per se; the energies of the nuclear interactions relevant to these processes are many orders of magnitude lower than those achieved in particle accelerator experiments. What macroscopic effect a gargantuan number of these simple interactions can produce, however, is a different matter.

That's why you need lots of brute-force computational power, as well as a team of physicists doing clever simplifications, just to get a general understanding of the problem at hand - not even a precise prediction of a specific problem instance like a weather forecast. And I'm afraid they won't let you borrow their precious compute for a fun thought experiment.

Worse yet, in the case of real phenomena like nuclear explosions and supernovas, we at least get to observe their aftermaths (bomb yield/supernova remnant) to set a few boundary conditions on our analysis. For completely hypothetical scenarios, we can't even check our predictions against reality. Can we, for instance, safely ignore temporary phase changes into exotic ice forms? How about nuclear interactions triggered by locally extreme heat and pressure?

Comment by maximkazhenkov on Tapping Out In Two · 2019-12-06T01:09:21.252Z · score: 2 (2 votes) · LW · GW

If you'd like to break the habit of getting into internet arguments in the first place, this might be the right thing (for YouTube):

chrome.google.com/webstore/detail/hide-youtube-comments

Comment by maximkazhenkov on The unexpected difficulty of comparing AlphaStar to humans · 2019-11-30T05:43:01.372Z · score: 1 (1 votes) · LW · GW

That makes sense. Perhaps the opposite is true - that if all Nash equilibrium strategies are mixed, the game must have involved imperfect information? In any simultaneous game, the opponent's strategy would be the hidden information.
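As a toy illustration (my own example, not from the thread): Matching Pennies is a simultaneous-move game whose unique Nash equilibrium is fully mixed, and a brute-force check confirms that no pure equilibrium exists:

```python
import itertools

# Matching Pennies: the row player wins (+1) if the coins match, loses (-1) otherwise.
# Zero-sum: the column player's payoff is the negation of the row player's.
payoff = {("H", "H"): 1, ("H", "T"): -1, ("T", "H"): -1, ("T", "T"): 1}

pure_equilibria = []
for r, c in itertools.product("HT", repeat=2):
    # A pure equilibrium requires that neither player gains by unilaterally deviating.
    row_best = all(payoff[(r, c)] >= payoff[(r2, c)] for r2 in "HT")
    col_best = all(-payoff[(r, c)] >= -payoff[(r, c2)] for c2 in "HT")
    if row_best and col_best:
        pure_equilibria.append((r, c))

print(pure_equilibria)  # -> [] : no pure-strategy equilibrium exists
# The unique equilibrium is mixed: both players play H with probability 1/2, which is
# exactly what makes the opponent's realized action function as hidden information.
```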

Comment by maximkazhenkov on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-30T01:42:51.714Z · score: 6 (4 votes) · LW · GW
In summary, my claim that DeepMind was throwing in the towel was wrong; they came back with a more polished version that was able to beat the world champion 4 out of 5 times

This statement, while technically correct, seems a bit misleading, because being the world champion in Starcraft 2 really doesn't correlate well with being proficient at playing against AI. Check out this streamer, who played against AlphaStar at the same event, without warm-up or his own setup (just like Serral), and went 7-3. What's more, he had pretty much got AlphaStar figured out by the end, and I'm fairly confident that if he were paid to play another 100 games, his win rate would be 90%+.

Comment by maximkazhenkov on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-30T01:03:52.881Z · score: 1 (1 votes) · LW · GW

Genetic algorithms also eventually evolved causal reasoning agents: us. That's why it feels weird to me that we're once again relying on gradient descent to develop AI - it seems backwards.

Comment by maximkazhenkov on What would happen if all the water on Earth were accumulated into spheres & drop on the surface? · 2019-11-20T21:41:49.357Z · score: 14 (7 votes) · LW · GW

Sorry for the late answer, I intended to write this 2 weeks ago but couldn't find the time.

OK, so let's look at the amount of potential energy locked up in our configuration of water spheres: I calculated the radius to be 133.322 km. By symmetry, the potential energy would be the same if all the water were located at the center of these spheres, i.e. 133.322 km above ground. Add the negative potential energy of Earth's oceans in their equilibrium state (centered 1.844 km below the surface, half the average ocean depth), and we get a total of roughly $1.9 \times 10^{27}$ J.

That's a lot of energy. Let's assume the water is at 0°C initially (most ocean water is in the cold deep layers). To bring all this water to the boiling point, we'd need about $6 \times 10^{26}$ J. That leaves us roughly $1.3 \times 10^{27}$ J to boil the water. Given the latent heat of vaporization, we can boil about 40% of the water.
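Spelling out the arithmetic (a sketch; the ocean mass $M \approx 1.4 \times 10^{21}$ kg, specific heat $c \approx 4186$ J/(kg·K) and latent heat $L \approx 2.26 \times 10^6$ J/kg are my assumed values):

$$E_{\text{pot}} = Mg\,\Delta h \approx 1.4 \times 10^{21} \cdot 9.81 \cdot 1.352 \times 10^{5} \approx 1.9 \times 10^{27}\ \text{J}$$

$$E_{\text{heat}} = Mc\,\Delta T \approx 1.4 \times 10^{21} \cdot 4186 \cdot 100 \approx 5.9 \times 10^{26}\ \text{J}$$

$$f_{\text{boiled}} = \frac{E_{\text{pot}} - E_{\text{heat}}}{ML} \approx \frac{1.3 \times 10^{27}}{3.2 \times 10^{27}} \approx 0.40$$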

Since Earth's ocean is 262 times as massive as the atmosphere and we boiled almost half of it, we now have an atmospheric pressure of 105 bars, exceeding that of Venus. Of course, the boiling point of water increases with pressure, but the latent heat of vaporization decreases, and these two effects pretty much cancel each other out. It does mean, however, that we'll have a surface temperature of 315°C, approaching that of Venus. In other words, all life on Earth's surface, including the hardiest extremophile bacteria, is toast (or rather steam buns). Absolute overkill, actually, since DNA itself disintegrates completely at around 200°C. The only safe place would be deep underground, where the heat can't penetrate before it is radiated off into space.
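In numbers (assuming surface pressure simply scales with the mass of gas overhead):

$$P \approx 0.40 \times 262 \times 1\ \text{atm} \approx 105\ \text{bar}$$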

What about seeking refuge in the ISS? Let's see what the atmospheric conditions are at its orbital height of 400 km. Using the barometric formula without temperature lapse (this isn't an equilibrium state anyway), the density that comes out is equivalent to the air density of our current atmosphere at 60 km height.
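For reference, the formula in question (a sketch; the molar mass $M \approx 0.018$ kg/mol for steam and the 315°C surface temperature are my assumed inputs):

$$\rho(h) = \rho_0 \exp\!\left(-\frac{Mgh}{RT}\right), \qquad \frac{RT}{Mg} \approx 28\ \text{km}$$

With that scale height, 400 km of altitude knocks the density down by roughly six orders of magnitude from its 105-bar surface value; the exact figure depends on the assumed temperature profile.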

Here's a video of the Mir space station at 80 km height.

Comment by maximkazhenkov on Elon Musk is wrong: Robotaxis are stupid. We need standardized rented autonomous tugs to move customized owned unpowered wagons. · 2019-11-04T20:37:56.566Z · score: 2 (2 votes) · LW · GW

There are plenty (maybe even a majority) of people who would pay a premium for avoiding social interaction with strangers. In fact, early adoption of these automated technologies might be driven by exactly this reason. I think this satire puts it pretty concisely.

Comment by maximkazhenkov on Deleted · 2019-11-04T01:13:27.078Z · score: 3 (3 votes) · LW · GW

I was going to bring up Red Ice TV as a counter-example but just found out they got banned from Youtube 2 weeks ago. Troubling indeed.

Comment by maximkazhenkov on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-03T15:41:39.721Z · score: 16 (5 votes) · LW · GW

I think the AI just isn't sure what to do with the information received from scouting. Unlike AlphaZero, AlphaStar doesn't learn via self-play from scratch. It has to learn builds from human players, hinting at its inability to come up with good builds on its own, so it seems likely that AlphaStar also doesn't know how to alter its build depending on scouted enemy buildings.

One thing I have noticed from observing these games, though, is that AlphaStar likes to over-produce probes/drones, as if preempting early-game raids from the enemy. It seems to work out quite well for AlphaStar, which is able to mine at full capacity afterwards. Is there a good reason why pro gamers don't do this?

Comment by maximkazhenkov on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-03T11:54:16.203Z · score: 1 (1 votes) · LW · GW

I see. In that case, I don't think it makes much sense to model scientific institutions or the human civilization as an agent. You can't hope to achieve unanimity in a world as big as ours.

Comment by maximkazhenkov on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-02T18:09:09.194Z · score: 3 (2 votes) · LW · GW

So basically game tree search was the "reasoning" part of AlphaZero?

Comment by maximkazhenkov on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-02T18:07:17.248Z · score: 1 (1 votes) · LW · GW

Well, it's overdetermined. Action space, tree depth, incomplete information; any one of these is enough to make MC tree search impossible.

Comment by maximkazhenkov on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-02T13:20:17.677Z · score: 22 (11 votes) · LW · GW

I think DeepMind should be applauded for addressing the criticisms about AlphaStar's mechanical advantage in the show games against TLO/Mana. While not as dominant in its performance, the constraints on the new version basically match human limitations in all aspects; the games seemed very fair.

Comment by maximkazhenkov on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-02T13:12:36.808Z · score: 2 (2 votes) · LW · GW

In other words, there is a non-trivial chance we could get to AGI literally this year?

Comment by maximkazhenkov on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-02T13:03:51.927Z · score: 2 (4 votes) · LW · GW

I think one ought to be careful with the wording here. What is the proportion of existing AI progress? We could be 90% there on the time axis and only one last key insight is left to be discovered, but still virtually useless compared to humans on the capability axis. It would be a precarious situation. Is the inability of our algorithms to reason the problem, or our only saving grace?

Comment by maximkazhenkov on AlphaStar: Impressive for RL progress, not for AGI progress · 2019-11-02T12:42:32.207Z · score: 3 (2 votes) · LW · GW

Could you elaborate on why it's "extremely bad" news? In what sense is it "better" for DeepMind to be more straightforward with their reporting?

Comment by maximkazhenkov on The Technique Taboo · 2019-10-31T09:18:46.868Z · score: 2 (2 votes) · LW · GW

I've heard the alternate explanation that having to stare at the keyboard is bad for your neck/spine because of the downward angle in your head position, and touch typing allows you to avoid that. Which is especially important for programmers apparently because they work in front of a computer screen all the time.

Comment by maximkazhenkov on The Technique Taboo · 2019-10-30T20:44:05.235Z · score: 4 (3 votes) · LW · GW

Why is fast typing considered a necessary skill for programmers in the first place? For secretaries or writers it seems to make a difference, but how much would it slow down a programmer who could only type, say, 30 words per minute? It seems to me that if you have to pause to think, typing speed was never the bottleneck anyway.

Comment by maximkazhenkov on Deleted · 2019-10-30T18:49:34.725Z · score: 3 (2 votes) · LW · GW
Look at Japan you can go over there and mention the bombs and their war crimes and most won't care.

I really don't think picking out the most conservative and conformist country on the planet supports your point very well. Of course they don't care; denying their past war crimes is the official position. Meanwhile in the US, the evils of Western Imperialism (including recent ones) are standard textbook material. Whether you agree with those textbooks or not, the phrase "history is written by the victors" usually doesn't imply self-critical writing.

The ideas are public but most people are not willing to state them and risk their social lives.

Or perhaps people are not willing to state them because they don't agree with those ideas? If people are protected by legal rights to free speech and anonymity on the web, yet some ideas still can't gain any traction in the market of ideas, you should start considering the possibility that those ideas aren't even secretly popular.

Comment by maximkazhenkov on The Missing Piece · 2019-10-30T17:03:27.727Z · score: 1 (1 votes) · LW · GW

A very insightful explanation. It leads me to wonder what this implies for the replication of nanobots:

If all nanodevices produced are precise molecular copies, and moreover, any mistakes on the assembly line are not heritable because the offspring got a digital copy of the original encrypted instructions for use in making grandchildren, then your nanodevices ain't gonna be doin' much evolving.
You'd still have to worry about prions—self-replicating assembly errors apart from the encrypted instructions, where a robot arm fails to grab a carbon atom that is used in assembling a homologue of itself, and this causes the offspring's robot arm to likewise fail to grab a carbon atom, etc., even with all the encrypted instructions remaining constant.

So is prion evolution just sliding from fixed point to fixed point? If so, how likely is it to happen, and how would one go about suppressing the process? How would one reduce the density of fixed points?

Comment by maximkazhenkov on Deleted · 2019-10-30T16:39:21.446Z · score: 1 (1 votes) · LW · GW
You seem to be following my every comment.

No, I haven't been; I just click through comments on posts that interest me.

What I'm confused about is what you mean by "they wouldn't dare express it in public". There are entire communities and subcultures built around conspiracy theories on the web, whether it's 9/11, Holocaust denial, moon landing or flat earth. How much more public can it get?

Comment by maximkazhenkov on What's your big idea? · 2019-10-29T23:34:42.515Z · score: 1 (1 votes) · LW · GW

This is like saying we need the government to mandate apple production, because without apples we might become malnourished which is bad. Why can't the market solve the problem more efficiently? Where's the coordination failure?

Comment by maximkazhenkov on Deleted · 2019-10-29T15:20:34.456Z · score: 1 (1 votes) · LW · GW

They can be stated, nobody is contesting that. They can also be downvoted to hell, which is what I'm arguing for.

Comment by maximkazhenkov on What's your big idea? · 2019-10-29T15:12:44.933Z · score: 1 (1 votes) · LW · GW

And thus, knowing geography becomes a comparative advantage to those who choose to study it. Why should the rest of us care?

Comment by maximkazhenkov on What's your big idea? · 2019-10-28T18:53:39.787Z · score: 2 (2 votes) · LW · GW

I disagree on multiple dimensions:

First, let's get disagreements about values out of the way: I hate the term "brainwashing" since it's virtually indistinguishable from "teaching", the only difference being the intent of the speaker (we're teaching our kids liberal democratic values while the other tribe is brainwashing their kids with Marxism). But to the extent "brainwashing" has a useful definition at all, creating "a population who will perpetuate the state" would be it. In my view, if our civilization can't survive without tormenting children with years upon years of conditioning, it probably shouldn't.

Second, I'm very skeptical about this model of a self-perpetuating society. So "they" teach us literature and history? Who's "they"? Group selectionism doesn't work; there is no reason to assume that memes good at perpetuating themselves would also be good at perpetuating the civilization they find themselves in. I think it's likely that people in charge of framing the school curriculum are biased towards holding in high regard those subjects they were taught in school themselves (sunk cost fallacy, prestige signaling), thus becoming vehicles for meme spread. I don't see any incentive for any education board member to stop, think and analyze what will perpetuate the government they're a part of.

I also very much doubt the efficacy of such education/brainwashing at manipulating citizens into perpetuating the state. In my experience, reverse psychology and tribalism are much better methods for this purpose than straightforward indoctrination, particularly with people in their rebellious youth. The classroom, frequently associated with boredom and monotony, is among the worst environments in which to apply these methods. There is no faster way to make an atheist out of a child than sending him through mandatory Bible study classes, and no faster way to make a libertarian than forcing him to memorize Das Kapital.

Lastly, the bulk of today's actual school curriculum is neutral with respect to perpetuating our society - maths, physics, chemistry, biology, foreign languages, even most classical literature are apolitical. So even setting the issue of "civilizational propagation" aside, there is still enormous potential for optimization.

Comment by maximkazhenkov on What's your big idea? · 2019-10-28T17:44:22.394Z · score: 1 (1 votes) · LW · GW

It's easy to prepare kids to become anything. Just teach what's universally useful.

It's impossible to prepare kids to become everything. Polymaths stopped being viable two centuries ago.

There is a huge difference between union and intersection of sets.

Comment by maximkazhenkov on What's your big idea? · 2019-10-28T17:35:01.165Z · score: 1 (1 votes) · LW · GW

This is basically the long-term goal of Neuralink as stated by Elon Musk. I am, however, very skeptical, for two reasons:

  • Natural selection did not design brains to be end-user modifiable. Even if you could accurately monitor every single neuron in a brain in real-time, how would you interpret your observations and interface with it? You'd have to build a translator by correlating these neuron firing patterns with observed behaviors, which seems extremely intractable
  • In what way would such a brain-augmenting external memory be superior to pen and paper? Pen and paper already allows me to accomplish working-memory limited tasks such as multiplication of large numbers, and I'm neither constrained by storage space (I will run out of patience before I run out of paper) nor by bandwidth of the interface (most time is spent on computing what to write down, not writing itself)

It seems there is an extreme disproportionality between the difficulty of the problem and the value of solving it.

Comment by maximkazhenkov on What's your big idea? · 2019-10-28T16:59:29.953Z · score: 1 (1 votes) · LW · GW
Still there are people that I think I want in my life that all falling prey to this beast and I want to save them.

Why would this be an ethical thing to do? It sounds like you're trying to manipulate others into being the people you'd like them to be, not the people they themselves would like to be.

How to utilize my "cut X, cold-turkey" ability to teach and maintain anti-akrasia (or more general, non-self-bettering) techniques

Ethics aside, this seems to be a tall order. You're basically trying to hack into someone else's mind through very limited input channels (speech/text). In my experience it's never a lack of knowledge that's hindering people from overcoming akrasia (also the reason I'm skeptical towards the efficacy of self-help books).

Essentially, I think we're under-utilizing several higher mathematical objects - Tensors, to name one.

That's a very good point. In ML courses, lots of time is spent on introducing different network types and the technical details of calculus/linear algebra, without explaining why one would pick neural networks out of idea space in the first place, beyond hand-waving that they're "biologically inspired".

Comment by maximkazhenkov on Deleted · 2019-10-25T09:53:37.431Z · score: 1 (1 votes) · LW · GW

It's a question about value, not fact. "Bias" is not even a criticism here; value is nothing but bias.