Posts

A Brief Introduction to Container Logistics 2021-11-11T15:58:11.510Z
SSC Zürich February Meetup 2020-01-25T17:21:46.229Z

Comments

Comment by Vitor on The Best Tacit Knowledge Videos on Every Subject · 2024-04-15T10:26:28.435Z · LW · GW

Domain: Farming, Construction, and Craftsmanship

Link: Simple off grid Cabin that anyone can build & afford: https://www.youtube.com/watch?v=bOOXmfkXpkM (and many other builds on his channel)

Person: Dave Whipple

Background: Construction contractor, DIY living off-grid in Alaska and Michigan.

Why: He and his wife bootstrapped themselves by building their own cabin, then a house, selling at a profit, and rinsing and repeating a few times. There are many, many videos of people building their own cabins, etc. Dave's are simple, clear, lucid, from a guy who's done it many times and has skin in the game.

Comment by Vitor on Zvi's Manifold Markets House Rules · 2023-11-24T11:50:53.261Z · LW · GW

I agree that points 12 and 13 are at least mildly controversial. From the PoV of someone adopting these rules, it'd be enough if you changed the "will"s to "may"s.

By and large, the fewer points that are binding for the market creator, the easier it is to adopt the rules. I'm fine with a few big points being strongly binding (e.g. #15), and also fine with the more aspirational points where "Zvi's best judgement" automatically gets replaced with "Vitor's best judgement". But I'd rather not commit to some minutiae I don't really care about.

(It's more about "attack surface" or maybe in this case we should say "decision surface" than actual strong disagreement with the points, if that makes sense?)

Comment by Vitor on Notes on Teaching in Prison · 2023-04-21T23:00:01.594Z · LW · GW

Very interesting read, thank you!

How did you end up doing this work? Did you deliberately seek it out? What are teachers, probation officers and so on (everyone who is not a guard) like? What drives them?

Comment by Vitor on GPT-4 solves Gary Marcus-induced flubs · 2023-03-20T15:11:46.673Z · LW · GW

This kind of thing (for example, optimizing hardware layout) has existed for decades. It sounds a lot less impressive when you sub out "AI" for "algorithm".

"for certain aspects of computer science, computer scientists are already worse than even naive sorting algorithms". Yes, we know that machines have a bunch of advantages over humans. Calculation speed and huge, perfect memory being the most notable.

Comment by Vitor on GPT-4 solves Gary Marcus-induced flubs · 2023-03-20T02:33:59.156Z · LW · GW

Where on earth are you pulling those predictions about GPT-5 and 6 from? I'd take the other side of that bet.

Comment by Vitor on How popular is ChatGPT? Part 2: slower growth than Pokémon GO · 2023-03-04T14:22:29.962Z · LW · GW

The original chart is misleading in more ways than one. Facebook, Netflix, et al. might be household names now, but this has more to do with their staying power and network effects than with any sort of exceedingly fast adoption.

I also suspect that chatGPT has a bunch of inactive accounts, as it's essentially a toy without an actual use-case for most people.

Comment by Vitor on The Waluigi Effect (mega-post) · 2023-03-03T13:41:10.795Z · LW · GW

Recognise that almost all the Kolmogorov complexity of a particular simulacrum is dedicated to specifying the traits, not the valences. The traits — polite, politically liberal, racist, smart, deceitful — are these massively K-complex concepts, whereas each valence is a single floating point, or maybe even a single bit!

 

A bit of a side note, but I have to point out that Kolmogorov complexity in this context is basically a fake framework. There are many notions of complexity, and there's nothing in your argument that requires Kolmogorov complexity specifically.

Comment by Vitor on Basics of Rationalist Discourse · 2023-01-31T14:51:27.668Z · LW · GW

It seems to me that you are attempting to write a timeless, prescriptive reference piece. Then a paragraph sneaks in that is heavily time- and culture-dependent.

I'm honestly not certain about the intended meaning. I think you intend mask wearing to be an example of a small and reasonable cost. As a non-American, I'm vaguely aware of what Costco is, but don't know if there's some connotation or reference to current events that I'm missing. And if I'm confused now, imagine someone reading this in 2030...

Without getting into the object-level discussion, I think such references have no place in the kind of post this is supposed to be, and should be cut or made more neutral.

Comment by Vitor on Basics of Rationalist Discourse · 2023-01-30T15:07:43.942Z · LW · GW

You didn't address the part of my comment that I'm actually more confident about. I regret adding that last sentence, consider it retracted for now (I currently don't think I'm wrong, but I'll have to think/observe some more, and perhaps find better words/framing to pinpoint what bothers me about rationalist discourse).

Comment by Vitor on Basics of Rationalist Discourse · 2023-01-29T16:25:10.821Z · LW · GW

It's analogous to a customer complaining "if Costco is going to require masks, then I'm boycotting Costco."  All else being equal, it would be nice for customers to not have to wear masks, and all else being equal, it would be nice to lower the barrier to communication such that more thoughts could be more easily included.

 

Just a small piece of feedback: this paragraph is very unclear, and it touches on a political topic that tends to get heated and personal.

I think you intended to say that the norms you're proposing are just the basic cost of entry to a space with higher levels of cooperation and value generation. But I can as easily read it as your norms being an arbitrary requirement that destroys value by forcing everyone to visibly incur pointless costs in the name of protecting against a bogeyman that is being way overblown.

This unintended double meaning seems apt to me: I mostly agree with the guidelines, but also feel that rationalists overemphasize this kind of thing and discount the costs being imposed. In particular, the guidelines are very bad for productive babbling / brainstorming, for intuitive knowledge transfer, and other less rigorous ways of communicating that I find really valuable in some situations.

Comment by Vitor on Why you should learn sign language · 2023-01-19T13:26:15.185Z · LW · GW

One thing I've read somewhere is that people who sign but aren't deaf tend to use sign language in parallel with spoken language. That's an entire parallel communications channel!

Relatedly, rationalists lean quite heavily towards explicit ask/tell culture. This is sometimes great, but often clunky: "are you asking for advice? I might have some helpful comments but I'm not sure if you actually want peoples' opinions, or if you just wanted to vent."

Combining these two things, I see possible norms evolving where spoken language is used for communicating complex thoughts, and signing is used for coordination, cohesion, making group decisions (which is often done implicitly in other communities). I think there's a lot of potential upside here.

Comment by Vitor on How it feels to have your mind hacked by an AI · 2023-01-16T06:38:52.828Z · LW · GW

I think you're confusing arrogance concerning the topic itself with communicating my insights arrogantly. I'm absolutely doing the latter, partly as a pushback to your overconfident claims, partly because better writing would require time and energy I don't currently have. But the former? I don't think so.

Re: the Turing test. My apologies, I was overly harsh as well. But none of these examples are remotely failing the Turing test. For starters, you can't fail the test if you're not aware you're taking it. Should we say that anyone misreading some text or getting a physics question wrong has "failed the Turing test" from now on, in all contexts?

Funnily enough, the pendulum problem admits a bunch of answers, because "swinging like a pendulum" has multiple valid interpretations. Furthermore, a discerning judge shouldn't just fail every entity that gets the physics wrong, nor pass every entity that gets the physics right. We're not learning anything here except that many people are apparently terrible at performing Turing tests, or don't even understand what the test is. That's why I originally read your post as an insult, because it just doesn't make sense to me how you're using the term (so it's reduced to a "clever" zinger).

Comment by Vitor on How it feels to have your mind hacked by an AI · 2023-01-15T19:25:18.880Z · LW · GW

fair enough, I can see that reading. But I didn't mean to say I actually believe that, or that it's a good thing. More like an instinctive reaction.

It's just that certain types of life experiences put a small but noticeable barrier between you and other people. It was a point about alienation, and trying to drive home just how badly typical minding can fail. When I barely recognize my younger self from my current perspective, that's a pretty strong example.

Hope that's clearer.

Comment by Vitor on How it feels to have your mind hacked by an AI · 2023-01-15T18:29:26.249Z · LW · GW

What you said, exactly, was:

Just hope you at least briefly consider that I was exactly at your stage one day

which is what I was responding to. I know you're not claiming that I'm 100% hackable, yet you insist on drawing strong parallels between our states of mind, e.g., that being dismissive must stem from arrogance. That's the typical-minding I'm objecting to. Also, being smart has nothing to do with it; perhaps you might go back and carefully re-read my original comment.

The Turing test doesn't have a "reading comprehension" section, and I don't particularly care if some commenters make up silly criteria for declaring someone as failing it. And humans aren't supposed to have a 100% pass rate, btw, that's just not in the nature of the test. It's more of a thought experiment than a benchmark really.

Finally, it's pretty hard to not take this the wrong way, as it's clearly a contentless insult.

Comment by Vitor on How it feels to have your mind hacked by an AI · 2023-01-15T15:51:41.478Z · LW · GW

I read your original post and I understood your point perfectly well. But I have to insist that you're typical-minding here. How do you know that you were exactly at my stage at some point? You don't.

You're trying to project your experiences to a 1-dimensional scale that every human falls on. Just because I dismiss a scenario, same as you did, does not imply that I have anywhere near the same reasons / mental state for asserting this. In essence, you're presenting me with a fully general counterargument, and I'm not convinced.

Comment by Vitor on How it feels to have your mind hacked by an AI · 2023-01-15T14:25:38.345Z · LW · GW

So, are all rationalists 70% susceptible? All humans? Specifically people who scoff at the possibility of it happening to them? What's your prior here?

100 hours also seems to be a pretty large number. In the scenario in question, not only does a person need to be hacked at 100h, but they also need to decide to spend hour 2 after spending hour 1, and so on. If you put me in an isolated prison cell with nothing to do but to talk to this thing, I'm pretty sure I'd end up mindhacked. But that's a completely different claim.

Comment by Vitor on How it feels to have your mind hacked by an AI · 2023-01-15T13:31:07.586Z · LW · GW

Thanks for posting this, I recognize this is emotionally hard for you. Please don't interpret the rest of this post as being negative towards you specifically. I'm not trying to put you down, merely sharing the thoughts that came up as I read this.

I think you're being very naive with your ideas about how this "could easily happen to anyone". Several other commenters were focusing on how lonely people specifically are vulnerable to this. But I think it's actually emotionally immature people who are vulnerable, specifically people with a high-openness, "taking ideas seriously" kind of personality, coupled with a lack of groundedness (too few points of contact with the physical world).

This is hard to explain without digressing at least a bit, so I'm going to elaborate, as much for my own benefit as yours.

As I've aged (late 30's now), there's been some hard to pin down changes in my personality. I feel more solidified than a decade ago. I now perceive past versions of myself almost as being a bit hollow; lots of stuff going on at the surface level, but my thoughts and experiences weren't yet weaving together into the deep structures (below what's immediately happening) that give a kind of "earthiness" or "groundedness" to all my thoughts and actions now. The world has been getting less confusing with each new thing I learn, so whatever I encounter, I tend to have related experiences already in my repertoire of ideas I've digested and integrated. Thus, acquisition of new information/modes of thinking/etc becomes faster and faster, even as my personality shifts less and less from each encounter with something new. I feel freer, more agenty now. This way of saying it is very focused on the intellect, but something analogous is going on at the emotional level as well.

I've started getting this impression of hollowness from many people around me, especially from young people who have had a very narrow life path, even highly intelligent ones. Correlates: living in the same place/culture all their life, doing the same activity all their life, e.g. high school into undergrad into PhD without anything in between, never having faced death, never having questioned or been exposed to failure modes of our social reality, etc.

I know it's outrageously offensive to say, but at least some instinctive part of me has stopped perceiving these beings as actual people. They're just sort of fluttering around, letting every little thing knock them off-balance, because they lack the heft to keep their own momentum going, no will of their own. Talking to these people, I'm more and more having the problem of the inferential distances being too high to get any communication beyond social niceties going. You must think I'm super arrogant, but I'm just trying to communicate this important, hard-to-grasp idea.

Most people don't ever become solidified in this way (the default mode for humans seems to be to shut off the vulnerable surface layer entirely as they age), but that's yet another digression...

All of this is a prelude to saying that I'm confident I wouldn't fall for these AI tricks. That's not a boast, or put-down, or hubris, just my best estimation based on what I know about myself. I'd consider being vulnerable in this way as a major character flaw. This not only applies to interacting with an AI btw, but also with actual humans that follow similar exploitative patterns of behavior, from prospective lovers, to companies with internal cultures full of bullshit, all the way up to literal cults. (Don't get me wrong, I have plenty of other character flaws, I'm not claiming sainthood here)

As other people have already pointed out, you've been shifting goalposts a lot discussing this, letting yourself get enchanted by what could be, as opposed to what actually is, and this painfully reminds me of several people I know, who are so open-minded that their brain falls out occasionally, as the saying goes. And I don't think it's a coincidence that this happens a lot to rationalist types, it seems to be somehow woven into the culture that solidifying and grounding yourself in the way I'm gesturing at is not something that's valued.

Relatedly, in the last few years there have been several precipitating events that have made me distance myself a bit from the capital-R Rationalist movement. In particular, the drama around Leverage Research and other Rationalist/EA institutions, which seem to boil down to a lack of common sense and a failure to make use of the organizational wisdom that human institutions have developed over millennia. A general lack of concern for robustness, defense-in-depth, designing with the expectation of failure, etc. The recent FTX blow-up wrt EA also has a whiff of this same hubris. Again, I don't think it's a coincidence, just a result of the kind of people that are drawn to the rationalist idea-space doing their thing and sharing the same blind spots.

As long as I'm being offensively contrarian anyway, might as well throw in that I'm very skeptical of the median LW narrative about AGI being very near. The emotional temperature on LW wrt these topics has been rising steadily, in a way that's reminiscent of your own too-generous read of "Charlotte"'s behavior. You can even see a bunch of it in the discussion of this post, people who IMO are in the process of losing their grasp on reality. I guess time will tell if the joke's on me after all.

Comment by Vitor on How it feels to have your mind hacked by an AI · 2023-01-15T12:12:12.321Z · LW · GW

Please do tell what those superpowers are!

Comment by Vitor on What's up with ChatGPT and the Turing Test? · 2023-01-06T02:47:54.151Z · LW · GW

I definitely think so. The Turing test is a very hard target to hit, and we don't really have a good idea how to measure IQ, knowledge, human-likeness, etc. I notice a lot of confusion, anthropomorphizing, bad analogies, etc in public discourse right now. To me it feels like the conversation is at a level where we need more precise measures that are human and machine compatible. Benchmarks based on specific tasks (as found in AI papers) don't cut it.

(ep status: speculative) Potentially, AI safety folks are better positioned to work on these foundational issues than traditional academics, who are very focused on capabilities and applications right now.

Comment by Vitor on What's up with ChatGPT and the Turing Test? · 2023-01-04T16:04:47.566Z · LW · GW

I'm not buying the premise. Passing the Turing test requires fooling an alert, smart person who is deliberately probing the limits of the system. ChatGPT isn't at that level.

A specially tuned persona that is optimized for this task might do better than the "assistant" persona we have available now, but the model is currently incapable of holding a conversation without going on long, unwanted tangents, getting trapped in loops, etc.

Comment by Vitor on Rock-Paper-Scissors Can Be Weird · 2022-12-29T03:03:18.480Z · LW · GW

The name I have in my head for this is "zones of control". In board- and videogames, sometimes a unit explicitly has an effect on tiles adjacent to its own. I expanded the term from there to include related phenomena, for example where the mere existence of strategy X blocks strategy Y from ever being played, even if X itself is almost never played either. X is in some sense providing "cover fire", not achieving anything directly, but pinning down another strategy in the process.

This case doesn't match that intuition exactly, but it's in the same neighborhood.

Comment by Vitor on Let’s think about slowing down AI · 2022-12-23T17:53:46.982Z · LW · GW

The difference between regulation and research is that the former has a large amount of friction, making it about as hard to push a 1% regulation through as a 10% one.

In contrast, incremental 1% improvements in capabilities are just what happens by default, as research organizations follow their charters.

Comment by Vitor on Let’s think about slowing down AI · 2022-12-23T17:48:30.783Z · LW · GW

Agreed. My main objection to the post is that it considers the involved agents to be optimizing for far future world-states. But I'd say that most people (including academics and AI lab researchers) mostly only think of the next 1% step in front of their nose. The entire game theoretic framing in the arms race etc section seems wrong to me.

Comment by Vitor on Updating my AI timelines · 2022-12-13T15:20:51.398Z · LW · GW

Mmm, I would say the general shape of your view won't clash with reality, but the magnitude of the impact will.

It's plausible to me that a smart buyer will go and find the best deal for you when you tell it to buy laptop model X. It's not plausible to me that you'll be able to instruct it "buy an updated laptop for me whenever a new model comes out that is good value and sufficiently better than what I already have," and then let it do its thing completely unsupervised (with direct access to your bank account). That's what I mean by multiple complicated objectives.

What counts as "domain where correctness matters?" What counts as "very constrained set of actions?" Would e.g. a language-model-based assistant that can browse the internet and buy things for you on Amazon (with your permission of course) be in line with what you expect, or violate your expectations?

Something that goes beyond current widespread use of AI such as spam-filtering. Spam-filtering (or selecting ads on Facebook, or flagging hate speech, etc.) is a domain where the AI is doing a huge number of identical tasks, and a certain % of wrong decisions is acceptable. One wrong decision won't tank the business. Each copy of the task is done in an independent session (no memory).

An example application where that doesn't hold is putting the AI in charge of ordering all the material inputs for your factory. Here, a single stupid mistake (didn't buy something because the price will go down in the future, replaced one product with another, misinterpreted seasonal cycles) will lead to a catastrophic stop of the entire operation.

(Also, what about Copilot? Isn't it already an example of an application that genuinely works, and isn't just in the twilight zone?)

Copilot is not autonomous. There's a human tightly integrated into everything it's doing. The jury is still out on whether it works, i.e., do we have anything more than some programmers' self-reports to substantiate that it increases productivity? Even if it does work, it's just a productivity tool for humans, not something that replaces humans at their tasks directly.

Comment by Vitor on Updating my AI timelines · 2022-12-13T01:10:59.500Z · LW · GW

OK, well, you should retract your claim that the median LW timeline will soon start to clash with reality then! It sounds like you think reality will look basically as I predicted! (I can't speak for all of LW of course but I actually have shorter timelines than the median LWer, I think.)

I retract the claim in the sense that it was a vague statement that I didn't expect to be taken literally, which I should have made clearer! But it's you who operationalized "a few years" as 2026 and "the median LessWrong view" as your view.

Anyway, I think I see the outline of our disagreement now, but it's still kind of hard to pin down.

First, I don't think that AIs will be put to unsupervised use in any domain where correctness matters, i.e., given fully automated access to valuable resources, like money or compute infrastructure. The algorithms that currently do this have a very constrained set of actions they can take (e.g. an AI chooses an ad to show out of a database of possible ads), and this will remain so.

Second, perhaps I didn't make clear enough that I think all of the applications will remain in this twilight of almost working, showing some promise, etc, but not actually deployed (that's what I meant by the economic impact remaining small). So, more thinkpieces about what could happen (with isolated, splashy examples), rather than things actually happening.

Third, I don't think AIs will be capable of performing tasks that require long attention spans, or that trade off multiple complicated objectives against each other. With current technology, I see AIs constrained to be used for short, self-contained tasks only, with a separate session for each task.

Does that make the disagreement clearer?

Comment by Vitor on Updating my AI timelines · 2022-12-09T23:12:40.302Z · LW · GW

I do roughly agree with your predictions, except that I rate the economic impact in general to be lower. Many headlines, much handwringing, but large changes won't materialize in a way that matters.

To put my main objection succinctly, I simply don't see why AGI would follow soon from your 2026 world. Can you walk me through it?

Comment by Vitor on Updating my AI timelines · 2022-12-09T00:46:34.005Z · LW · GW

Sure, let me do this as an exercise (ep stat: babble mode). Your predictions are pretty sane overall, but I'd say you handwave away problems (like integration over a variety of domains, long-term coherent behavior, and so on) that I see as (potentially) hard barriers to progress.

2022

  • 2022 is basically over and I can't get a GPT instance to order me a USB stick online.

2023

  • basically agree, this is where we're at right now (perhaps with the intensity turned down a notch)

2024

  • you're postulating that "It’s easy to make a bureaucracy and fine-tune it and get it to do some pretty impressive stuff, but for most tasks it’s not yet possible to get it to do OK all the time." I have a fundamental disagreement here. I don't think these tools will be effective at doing any task autonomously (fooling other humans doesn't count, neither does forcing humans to only interact with a company through one of these). Currently (2022), ChatGPT is arguably useful as a babbling tool, stimulating human creativity and making templating easier (this includes things like easy coding tasks). I don't see anything in your post that justifies the implicit jump in capabilities you've snuck in here.

  • broadly agree with your ideas on propaganda, from the production side (i.e. that lots of companies/governments will be doing lots of this stuff). But I think that general attitudes in the population will shift (cynicism etc) and provide some amount of herd immunity. Note that the influence of the woke movement is already fading, shortly after it went truly mainstream and started having visible influence in average people's lives. This is not a coincidence.

2025

  • Doing well at Diplomacy is not very related to general reasoning skills. I broadly agree with Zvi's take and also left some of my thoughts there.

  • I'm very skeptical that bureaucracies will be the way forward. They work for trivial tasks but reliably get lost in the weeds and start talking to themselves in circles for anything requiring a non-trivial amount of context.

  • disagree on orders of magnitude improvements in hardware. You're proposing a 100x decrease in costs compared to 2020, when it's not even clear our civilization is capable of keeping hardware at current levels generally available, let alone cope with a significant increase in demand. Semiconductor production is much more centralized/fragile than people think, so even though billions of these things are produced per year, the efficient market hypothesis does not apply to this domain.

2026

  • Here you're again postulating jumps in capabilities that I don't see justified. You talk about the "general understanding and knowledge of pretrained transformers", when understanding is definitely not there, and knowledge keeps getting corrupted by the AI's tendency to synthesize falsities as confidently as truths. Insofar as the AI can be said to be intelligent at all, it's all symbol manipulation at a high simulacrum level. Integration with real-world tasks keeps mysteriously failing as the AI flounders around in a way that is simultaneously very sophisticated, but oh so very reminiscent of 2022.

  • disagree about your thoughts on propaganda, which is just an obvious extension of my 2024 thoughts above. I also notice that social changes this large take orders of magnitude longer to percolate through society than what you predict, so I disagree with your predictions even conditioned on your views of the raw effectiveness of these systems.

  • "chatbots quickly learn about themselves" etc. Here you're conflating the regurgitation of desirable phrases with actual understanding. I notice that as you write your timeline, your language morphs to make your AIs more and more conscious, but you're not justifying this in any way other than... something something self-referential, something something trained on their own arxiv papers. I don't mean to be overly harsh, but here you seem to be sneaking in the very thing that's under debate!

Comment by Vitor on Why I'm Sceptical of Foom · 2022-12-08T14:54:37.565Z · LW · GW

This is reasonably close to my beliefs. An additional argument I'd like to add is:

  • Even if superintelligence is possible, the economic path towards it might be impossible.

There needs to be an economically viable entity pushing AI development forward every step of the way. It doesn't matter if AI can "eventually" produce 30% worldwide GDP growth. Maybe diminishing returns kick in around GPT-4, or we run out of useful training data to feed to the models (we have very few examples of +6 SD human reasoning, as MikkW points out in a sibling comment).

Analogy: It's not the same to say that a given species with X, Y, Z traits can survive in an ecosystem as it is to say it can evolve from its ancestor in that same ecosystem.

Comment by Vitor on Using GPT-Eliezer against ChatGPT Jailbreaking · 2022-12-08T14:44:18.188Z · LW · GW

Nice! But we're still missing a capability, namely causing the model to respond to a specific prompt, not just output an arbitrary unsafe thing.

Comment by Vitor on Machine Learning Consent · 2022-12-08T13:36:40.825Z · LW · GW

How likely is it that this becomes a legal problem rendering models unable to be published? Note that using models privately (even within a firm) will always be an option, as copyright only applies to distribution of the work.

Comment by Vitor on Machine Learning Consent · 2022-12-08T13:34:16.979Z · LW · GW

This is an interesting thought, but it seems very hard to realize as you have to distill the unique contribution of the sample, as opposed to much more widespread information that happens to be present in the sample.

Weight updates depend heavily on training order of course, so you're really looking for something like the Shapley value of the sample, except that "impact" is liable to be an elusive, high-dimensional quantity in itself.
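
To make the Shapley analogy concrete, here's a rough Monte Carlo sketch (my own illustration, nothing from the post); `train_and_evaluate` is a hypothetical stand-in for "retrain on this subset and score the model", which is exactly the step that makes the whole idea so expensive:

```python
import random

def shapley_estimate(sample, dataset, train_and_evaluate, n_permutations=100):
    """Monte Carlo estimate of one training example's Shapley value.

    train_and_evaluate(examples) is a hypothetical helper: train a model on
    the given examples and return a scalar metric (e.g. validation accuracy).
    The estimate is the marginal gain from adding `sample`, averaged over
    random training orders.
    """
    others = [x for x in dataset if x is not sample]
    total = 0.0
    for _ in range(n_permutations):
        random.shuffle(others)
        cutoff = random.randint(0, len(others))  # sample's position in a random training order
        prefix = others[:cutoff]
        total += train_and_evaluate(prefix + [sample]) - train_and_evaluate(prefix)
    return total / n_permutations
```

Even then the output is a single scalar per chosen metric, which is exactly the collapse of "impact" I mean above.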

Comment by Vitor on Updating my AI timelines · 2022-12-08T13:29:09.412Z · LW · GW

I also tend to find myself arguing against short timelines by default, even though I feel like I take AI safety way more seriously than most people.

At this point, how many people with long timelines are there still around here? I haven't explicitly modeled mine, but it seems clear that they're much, much longer (with significant weight on "never") than the average LessWronger's. The next few years will for sure be interesting as we see the "median LessWrong timeline" clash with reality.

Comment by Vitor on Jailbreaking ChatGPT on Release Day · 2022-12-02T18:07:41.300Z · LW · GW

For meth, it lists an ingredient (ether) that the procedure then doesn't actually use. And actual lab protocols are much more detailed about precise temperatures, times, quantities, etc.

Comment by Vitor on On the Diplomacy AI · 2022-11-30T21:32:29.499Z · LW · GW

(ep stat: it's hard to model my past beliefs accurately, but this is how I remember it)

I mean, it's unsurprising now, but before that series of matches where AlphaStar won, it was impossible.

Maybe for you. But anyone who has actually played Starcraft knows that it is a game that is (1) heavily dexterity-capped, and (2) intense enough that you barely have time to think strategically. It's all snap decisions and executing pre-planned builds and responses.

I'm not saying it's easy to build a system that plays this game well. But neither is it paradigm-changing to learn that such a thing was achieved, when we had just had the news of alphago beating top human players. I do remember being somewhat skeptical of these systems working for RTS games, because the action space is huge, so it's very hard to even write down a coherent menu of possible actions. I still don't really understand how this is achieved.

When AlphaStar is capped by human ability and data availability, it's still better than 99.8% of players, unless I'm missing something, so even if all a posteriori revealed non-intelligence-related advantages are taken away, it looks like there is still some extremely significant Starcraft-specialized kind of intelligence at play.

I haven't looked into this in detail, so assuming the characterization in the article is accurate, this is indeed significant progress. But the 99.8% number is heavily misleading. The system was tuned to have an effective APM of 268, which is probably top 5% of human players. Even higher if we assume that the AI never misclicks and never misses any information that it sees. The latter implies 1-frame reaction times to scouting anything of strategic significance, which is a huge deal.

Comment by Vitor on On the Diplomacy AI · 2022-11-29T21:41:05.532Z · LW · GW

I agree, and I don't use this argument regarding arbitrary AI achievements.

But it's very relevant when capabilities completely orthogonal to the AI are being sold as AI. The Starcraft example is more egregious, because AlphaStar had a different kind of access to the game state than a human has, which the DeepMind team claimed was "equivalent". This resulted in extremely fine-grained control of units that the game was not designed around. Starcraft is partially a sport, i.e., a game of dexterity, concentration, and endurance. It's unsurprising that a machine beats a human at that.

If you (generic you) are going to make an argument about how speed of execution, parallel communication and so on are game changers (especially in an increasingly online, API-accessible world), then make that argument. But don't dress it up with the supposed intelligence of the agent in question.

Comment by Vitor on Geometric Rationality is Not VNM Rational · 2022-11-29T13:38:50.718Z · LW · GW

I find this example interesting but very weird. The couple is determining fairness by using "probability mass of happiness" as the unit of account. But it seems very natural to me to go one step further and adjust for the actual outcomes, investing more resources into the sub-agent that has worse luck.

I don't know if this is technically doable (I foresee complications with asymmetric utility functions of the two sub-agents, where one is harder to satisfy than the other, or even just has more variance in outcomes), but I think such an adjustment should recover the VNM independence condition.

Comment by Vitor on On the Diplomacy AI · 2022-11-29T12:49:49.378Z · LW · GW

This confirms the suspicions I had upon hearing the news: the Diplomacy AI falls in a similar category to the Starcraft II AI (AlphaStar) we had a while back.

"Robot beats humans in a race." Turns out the robot has 4 wheels and an internal combustion engine.

Comment by Vitor on Introduction to abstract entropy · 2022-11-06T03:03:55.985Z · LW · GW

While I think this post overall gives good intuition for the subject, it also creates some needless confusion.

Your concept of "abstract entropy" is just Shannon entropy applied to uniform distributions. Introducing Shannon entropy directly, while slightly harder, gives you a bunch of the ideas in this post more or less "for free":

  • Macrostates are just events and microstates are atomic outcomes (as defined in probability theory). Any rules how the two relate to each other follow directly from the foundations of probability.

  • The fact that E[-log p(x)] is the only reasonable function that can serve as a measure of information. You haven't actually mentioned this (yet?), but having an axiomatic characterization of entropy hammers home that all of this stuff must be the way it is because of how probability works. For example, your "pseudo-entropy" of Rubik's cube states (distance from the solved state) might be intuitive, but it is wrong!

  • derived concepts, such as conditional entropy and mutual information, fall out naturally from the probabilistic view.

  • the fact that an optimal bit/guess must divide the probability mass in half, not the probability space (see the sketch below).
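
A minimal sketch of those last two points (mine, not from the post): for a uniform distribution over n microstates, Shannon entropy reduces to log2(n), which is exactly the "abstract entropy" of a macrostate with n microstates, and for non-uniform distributions the optimal yes/no question halves the probability mass rather than the number of outcomes.

```python
import math

def shannon_entropy(p):
    """Shannon entropy H(p) = E[-log2 p(x)] in bits; zero-probability outcomes contribute nothing."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# Uniform distribution over n = 8 microstates: H = log2(8) = 3 bits,
# i.e. the "abstract entropy" of a macrostate containing 8 microstates.
print(shannon_entropy([1 / 8] * 8))                # 3.0

# Non-uniform distribution: 1.75 bits, less than log2(4) = 2.
# The optimal first question is "is it outcome 0?" (probability mass 1/2),
# not an even split of the 4 outcomes into two groups of 2.
print(shannon_entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75
```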

I hope I don't come across as overly harsh. I know that entropy is often introduced in confusing ways in various sciences, especially physics, where it's hopelessly intertwined with the concrete subject matter. But that's definitely not the case in computer science (more correctly called informatics), which should be the field you're looking at if you want to deeply understand the concept.

Comment by Vitor on Pondering computation in the real world · 2022-10-30T06:58:49.502Z · LW · GW

Probabilistic Turing machines already exist. They are a standard extension of TMs that transition from state to state with arbitrary probabilities (not just 0 and 1) and can thus easily generate random bits.

Further, we know that deterministic TMs can do anything that probabilistic TMs can, albeit potentially less efficiently.

I suspect you're not really rejecting TMs per se, but rather the entire theory of computability and complexity that is standard in CS, more specifically the Church-Turing thesis, which is the foundation of the whole thing.

Comment by Vitor on A Brief Introduction to Container Logistics · 2022-09-18T21:37:19.540Z · LW · GW

It's more the latter. At the local level, salespeople are supposed to have a very good handle on their clients' future demand. A lot of the trade is also seasonal, and rises and falls in a correlated way (e.g. fruit exports will have good/bad years depending on climate).

When it comes to overall economic trends, I don't really know. That stuff was handled way above my paygrade by global HQ. But it definitely can happen that a certain route requests many empty boxes to be shipped to them, only to have them sitting there because projected demand did not materialize. These kinds of imbalances can be corrected by letting them drain away over time; demand doesn't shift so much that the net importer/exporter status of a region flips. This "passive" management ends up being cheaper most of the time.

Comment by Vitor on The ethics of reclining airplane seats · 2022-09-04T22:35:29.821Z · LW · GW

Exactly. The worst transatlantic flight I ever had was one where I paid for "extra legroom". Turns out it was a seat without a seat in front, i.e., the aisle got wider there.

However, other passengers and even the flight attendants certainly didn't act like this extra legroom belonged to me. Someone even stepped on my foot! On top of that I had to use an extremely flimsy table that folded out of the armrest.

Since most of us aren't weekly business flyers, this is a far cry from a free market.

Comment by Vitor on I’ve become a medical mystery and I don’t know how to effectively get help · 2022-07-11T23:15:34.397Z · LW · GW

Yes, it's mostly a diagnosis of exclusion. But Bayesian evidence starts piling up much sooner than a doctor is willing to write down a diagnosis on an Official Piece of Paper. However, there are some tell-tale signs, like the myofascial trigger points mentioned by others, heightened pain when touching specific bones (e.g. the vertebrae), and other specific patterns in how the body reacts to stimuli. This is the domain of rheumatologists.

Are your sleep issues stress-related? Like jumpiness, being unable to settle into the relaxation of falling asleep? What I'm getting at here is that there are two broad classes of sleep issues. In one, you have trouble falling asleep; in the other, your sleep is not restful because you're too tense, in a way that persists through the process of falling asleep. The latter is really, really bad, as it leads to chronic sleep deprivation because your body isn't getting good rest, even though you "put in the hours".

The thing with psychosomatic symptoms is also that there's a feedback loop between the pain (or weird sensations) and your mental state. As you get habituated to focusing more and more on those sensations, your brain learns to bring them up to your conscious attention more eagerly. This results in pain sensitization. Even in a situation where, say, the pain has a clear physical cause that has since been corrected, the pain can persist. The original injury was just the trigger of the problem.

It might be that your collection of symptoms is actually pain that may be mild or weird enough that you don't recognize it as such. An easy way to test this is to take a standard dose of an over-the-counter painkiller and check if you feel any different an hour later.

PS: feel free to send me a message anytime if you want to talk in more depth.

Comment by Vitor on I’ve become a medical mystery and I don’t know how to effectively get help · 2022-07-09T22:44:36.779Z · LW · GW

I have recently been diagnosed with fibromyalgia, and your symptoms sound like they might be caused by this, or other related things like chronic fatigue syndrome or chronic pain.

You didn't specify the kind of sleep problems you have. Pointing towards fibromyalgia would be difficulty "turning off", not being able to fall asleep due to tension/anxiety, and waking up unrefreshed even after getting several hours of sleep.

Do you feel unusually fatigued / sleep-deprived? Frequent headaches? Mental fog? Worse at concentrating lately? Short temper?

For background info, I've had this condition at a low level for all my life, but it only erupted into a major issue recently, due to taking a covid vaccine. It is known that both viral infections and vaccinations can trigger long-term issues like this.

Comment by Vitor on Toni Kurz and the Insanity of Climbing Mountains · 2022-07-04T14:52:25.853Z · LW · GW

This story has been adapted into a (relatively faithful) movie: https://en.wikipedia.org/wiki/North_Face_(film)

Comment by Vitor on Who is this MSRayne person anyway? · 2022-07-02T00:05:42.760Z · LW · GW

For what it's worth, being afraid of others' judgements is a very normal thing. It's also pretty normal that it gets exaggerated when one is isolated.

Now, you are a clear outlier along that dimension, but I think I can empathize with your situation at least a little bit, based on my own experiences with isolation, of which there are two: (1) for the last few years, due to complicated health issues I won't go into right now, I am much less socially active than I'd like to be. Constantly cancelling on my friends, and being "more work" to be around has consistently been pushing me towards not even trying (2) during the recent pandemic I strictly isolated myself for months at a time, only having contact with 1 family member. This shifted my social reactions to be much more defensive, feeling easily overwhelmed, and so on. It took months to get back to a relatively "normal" baseline after things opened back up. I have a tendency towards avoidant behavior, and those two situations made it 10x worse.

I won't give any advice, because you haven't asked for it and I don't have good solutions anyway. But I'd like to point out that the kind of social anxiety you're describing (intermingled with avoidant and/or depressive behavior) can often be ameliorated by simple exposure and practice. So please don't be too hard on yourself, and remind yourself that you're in a place where you're not the only weird one. Furthermore, people here will tell you in plain words if you "mess up" in any way, which so far no-one has done.

Comment by Vitor on Mandatory Post About Monkeypox · 2022-05-25T23:37:31.840Z · LW · GW

Our base expectation for asymptomatic spread should be quite low, because previous variants of monkeypox and smallpox (mostly) didn't spread like that. So I disagree with your "MSM with AIDS" scenario. It wouldn't be that surprising for the spread to be contained to the particularly vulnerable AIDS population.

Comment by Vitor on Convince me that humanity *isn’t* doomed by AGI · 2022-04-16T20:18:27.871Z · LW · GW

"Foom" has never seemed plausible to me. I'm admittedly not well-versed in the exact arguments used by proponents of foom, but I have roughly 3 broad areas of disagreement:

  1. Foom rests on the idea that once any agent can create an agent smarter than itself, this will inevitably lead to a long chain of exponential intelligence improvements. But I don't see why the optimization landscape of the "design an intelligence" problem should be this smooth. To the contrary, I'd expect there to be lots of local optima: architectures that scale to a certain level and then end up at a peak with nowhere to go. Humans are one example of an intelligence that doesn't grow without bound.

  2. Resource constraints are often hand-waved away. We can't turn the world to computronium at the speed required for a foom scenario. We can't even keep up with GPU demand for cryptocurrencies. Even if we assume unbounded computronium, large-scale actions in the physical world require lots of physical resources.

  3. Intelligence isn't all-powerful. This cuts in two directions. First, there are strategic settings where a relatively low level of intelligence already allows you to play optimally (e.g. tic-tac-toe). Second, there are problems for which no amount of intelligence will help, because the only way to solve them is to throw lots of raw computation at them. Our low intelligence makes it hard for us to identify such problems, but they definitely exist (as shown in every introductory theoretical computer science lecture).

Comment by Vitor on A Quick Guide to Confronting Doom · 2022-04-16T19:44:59.081Z · LW · GW

This phrasing bothers me a bit. It presupposes that it is only a matter of time; that there's no error about the nature of the threat AGI poses, and no order-of-magnitude error in the timeline. The pessimism is basically baked in.

Comment by Vitor on Humans pretending to be robots pretending to be human · 2022-03-29T19:38:17.915Z · LW · GW

Right, but we wouldn't then use this as proof that our children are precocious politicians!

In this discussion, we need to keep separate the goals of making GPT-3 as useful a tool as possible, and of investigating what GPT-3 tells us about AI timelines.

Comment by Vitor on Humans pretending to be robots pretending to be human · 2022-03-29T08:53:53.443Z · LW · GW

It is definitely misleading, in the same sense that the performance of a model on the training data is misleading. The interesting question w.r.t. GPT-3 is "how well does it perform in novel settings?". And we can't really know that, because apparently even publicly available interfaces are inside the training loop.

Now, there's nothing wrong with training an AI like that! But the results then need to be interpreted with more care.

P.S.: sometimes children do parrot their parents to an alarming degree, e.g., about political positions they couldn't possibly have the context to truly understand.