I like this quote by Stephen Hawking from one of his answers:
The real risk with AI isn’t malice but competence.
Rot13: Svfure–Farqrpbe qvfgevohgvba
Why would you assume that? That's like saying by the time we can manufacture a better engine we'll be able to replace a running one with the new design.
For example, evolution has optimized and delivered a mechanism for turning gene edits in a fertilized egg into a developed brain. It has not done the same for incorporating after-the-fact edits into an existing brain. So in the adult case we have to do an extra giga-evolve-years of optimization before it works.
Could you convert the tables into graphs, please? It's much harder to see trends in lists of numbers.
Another possible hypothesis could be satiation. When I first read Wikipedia, it dragged me into hours-long recursive article reading. Over time I've read more and more of the articles I find interesting, so any given article links to fewer unread interesting articles. Wikipedia has essentially developed a herd immunity against me. Maybe that pattern holds over the population, with the dwindling infectiousness overcoming new readers?
On second thought, I'm not sure that works at all. I guess you could check the historical probability of following to another article.
"reality is a projection of our minds and magic is ways to concentrate and focus the mind" is too non-reductionist of an explanation. It moves the mystery inside another mystery, instead of actually explaining it.
For example: in this universe minds seem to be made out of brains. But if reality is just a projection of minds, then... brains are made out of minds? So minds are made out of minds? So where does the process hit bottom? Or are we saying existence is just a fractal of minds made out of minds made out of minds all the way down?
Hm, my take-away from the end of the chapter was a sad feeling that Quirrell simply failed at or lied about getting both houses to win.
The 2014 LW survey results mentioned something about being consistent with a finger-length/feminism connection. Maybe that counts?
Some diseases impact both reasoning and appearance. Gender impacts both appearance and behavior. You clearly get some information from appearance, but it's going to be noisy and less useful than what you'd get by just asking a few questions.
There's a Radiolab episode about blame that glances off this subject. They talk about, for example, people with brain damage not being blamed for their crimes (because they "didn't have a choice"). They also have a guest trying to explain why legal punishment should be based on modelling probabilities of recidivism. One of the hosts usually plays (is?) the "there is cosmic blame/justice/choice" position you're describing.
Well, yeah. The particular case I had in mind was exploiting partial+ordered transfiguration to lobotomize/brain-acid the death eaters, and I grant that that has practical problems.
But I found myself thinking about using patronus and other complicated things to take down LV after, instead of exploiting weak spells being made effective by the resonance. So I put the idea out there.
If I may quote from my post:
Assuming you can take down the death eaters, I think the correct follow-up
and:
LV is way up high, too far away to have good accuracy with a hand gun.
I made my suggestion.
Assuming you can take down the death eaters, I think the correct follow-up for despawning LV is... massed somnium.
We've seen somnium be effective in the past, taking down an actively dodging broomstick rider at range. We've seen the resonance hit LV harder than Harry, requiring tens of minutes to recover versus seconds.
LV is not wearing medieval armor to block the somnium. LV is way up high, too far away to have good accuracy with a hand gun. If LV dodges behind something, Harry has time to expecto patronum a message out.
... I think the main risk is LV apparating away, apparating back directly behind Harry, and pulling the trigger.
Dumbledore is a side character. He needed to be got rid of, so neither Harry nor the reader would expect or hope for Dumbledore to show up at the last minute and save the day.
There's technically six more hours of story time for a time-turned Dumbledore to show up, before going on to get trapped. He does mention that he's in two places during the mirror scene.
Dumbledore has previously stated that trying to fake situations goes terribly wrong, so there could be some interesting play with that concept and him being trapped by the mirror.
Sorry for getting that one wrong (I can only say that it's an unfortunately confusing name).
Your claim is that AGI programs have large min-length-plus-log-of-running-time complexity.
I think you need more justification for this being a useful analogy for how AGI is hard. Clarifications, to avoid mixing notions of problems getting harder as they get longer for any program with notions of single programs taking a lot of space to specify, would also be good.
Unless we're dealing with things like the Ackermann function or Ramsey numbers, the log-of-running-time component of KL complexity is going to be negligible compared to the space component.
Even in the case of search problems, this holds. Sure it takes 2^100 years to solve a huge 3-SAT problem, but that contribution of ~160 time-bits pales in comparison to the several kilobytes of space-bits you needed when encoding the input into the program.
Or suppose we're looking at the complexity of programs that find an AGI program. Presumably high, right? Except that the finder can bypass the time cost by pushing the search into the returned AGI's bootstrap code. Basically, you replace "run this" with "return this" at the start of the program and suddenly AGI-finding's KL complexity is just its K complexity. (see also: the P=NP algorithm that brute force finds programs that work, and so only "works" if P=NP)
I think what I'm getting at is: just use length plus running time, without the free logarithm. That will correctly capture the difficulty of search, instead of making it negligible compared to specifying the input.
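To spell out the contrast (as I understand your measure), with t(p) the running time of program p:

KL(x) = min over programs p outputting x of [ |p| + log2(t(p)) ]

versus the measure I'm proposing:

C(x) = min over programs p outputting x of [ |p| + t(p) ]

Under the first, a search that runs for 2^100 steps costs only about 100 extra bits; under the second it costs the full 2^100.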
Plus, after you move to non-logarithmed time complexity, you can more appropriately appeal to things like the no free lunch theorem and NP-completeness as weak justification for expecting AGI to be hard.
Kolmogorov complexity is not (closely) related to NP completeness. Random sequences maximize Kolmogorov complexity but are trivial to produce. 3-SAT solvers have tiny Kolmogorov complexity despite their exponential worst case performance.
I also object to thinking of intelligence as "being NP-Complete", unless you mean that incremental improvements in intelligence should take longer and longer (growing at a super-polynomial rate). When talking about achieving a fixed level of intelligence, complexity theory is a bad analogy. Kolmogorov complexity is also a bad analogy here because we want any solution, not the shortest solution.
I would say cos is simpler than sin because its Taylor series has a factor of x knocked off. In practice they tend to show up together, though. Often you can replace the pair with something like e^(i x), so maybe that should be considered the simplest.
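For reference, the series and identity I have in mind:

cos(x) = 1 - x^2/2! + x^4/4! - ...
sin(x) = x - x^3/3! + x^5/5! - ...
e^(i x) = cos(x) + i sin(x)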
Here's another interesting example.
Suppose you're going to observe Y in order to infer some parameter X. You know the likelihood: P(y | x=c) = 1/2^(c-y).
- You set your prior to P(x=c) = 1 for all c. Very improper.
- You make an observation, y=1.
- You update: P(x=c) = 1/2^(c-1)
- You can now normalize P(x) so its area under the curve is 1.
- You could have done that, regardless of what you observed y to be. Your posterior is guaranteed to be well formed.
You get well formed probabilities out of this process. It converges to the same result that Bayesianism does as more observations are made. The main constraint imposed is that the prior must "sufficiently disagree" in predictions about a coming observation, so that the area becomes finite in every case.
I think you can also get these improper priors by running the updating process backwards. Some posteriors are only accessible via improper priors.
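A minimal numeric sketch of the update above (my own toy version: I restrict x to integers c >= y so the sum converges; nothing beyond the formulas comes from the original argument):

```python
# Toy check of the improper-prior update described above.
# Prior: P(x = c) = 1 for every integer c (improper).
# Likelihood: P(y | x = c) = 1 / 2**(c - y), taking c >= y.

def posterior(y, c_max=60):
    support = range(y, c_max + 1)  # truncate the infinite tail for display
    unnormalized = {c: 1 * (1 / 2 ** (c - y)) for c in support}  # prior * likelihood
    z = sum(unnormalized.values())  # finite, even though the prior wasn't normalizable
    return {c: w / z for c, w in unnormalized.items()}

post = posterior(y=1)
print(round(sum(post.values()), 6))  # 1.0: a perfectly well-formed posterior
print(post[1], post[2], post[3])     # ~0.5, ~0.25, ~0.125
```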
I did notice that they were spending the whole time debating a definition, and that the article failed to get to any consequences.
I think that existing policies are written in terms of "broadband", perhaps such as benefits to ISPs based on how many customers have access to broadband? That would make it a debate about conditions for subsidies, minimum service requirements, and the wording of advertising.
Hrm... reading the paper, it does look like NL1 goes from |a> to |cd> instead of |c> + |d>. This is going to move all the numbers around, but you'll still find that it works as a bomb detector. The yellow coming out of the left non-interacting-with-bomb path only interferes with the yellow from the right-and-mid path when the bomb is a dud.
Just to be sure, I tried my hand at converting it into a logic circuit. Here's what I get:
Having it create both the red and yellow photon, instead of either-or, seems to have improved its function as a bomb tester back up to the level of the naive bomb tester. Half of the live bombs will explode, a quarter will trigger g, and the other quarter will trigger h. None of the dud bombs will explode or trigger g; all of them trigger h. Anytime g triggers, you've found a live bomb without exploding it.
If you're going to point out another minor flaw, please actually go through the analysis to show it stops working as a bomb tester. It's frustrating for the workload to be so asymmetric, and hints at motivated stopping (and I suppose motivated continuing for me).
A live bomb triggers nothing when the photon takes the left leg (50% chance), gets converted into red instead of yellow (50% chance), and gets reflected out.
An exploded bomb triggers g or h because I assumed the photon kept going. That is to say, I modeled the bomb as a controlled-not gate with the photon passing by being the control. This has no effect on how well the bomb tester works, since we only care about the ratio of live-to-dud bombs for each outcome. You can collapse all the exploded-and-triggered cases into just "exploded" if you like.
Okay, I've gone through all the work of checking if this actually works as a bomb tester. What I found is that you can use the camera to remove more dud bombs than live bombs, but it does worse than the trivial bomb tester.
So I was wrong when I said you could use it as a drop-in replacement. You have to be aware that you're getting less evidence per trial, and so the tradeoffs for doing another pass are higher (since you lose half of the good bombs with every pass with both the camera and the trivial bomb tester; better bomb testers can lose fewer bombs per pass). But it can be repurposed into a bomb tester.
I do still think that understanding the bomb tester is a stepping stone towards understanding the camera.
Anyways, on to the clunky analysis.
Here's the (simpler version of the) interferometer diagram from the paper:
Here's my interpretation of the state progression:
Start
|green on left-toward-BS1>
Beam splitter is hit. s = sqrt(2)
|green on a>/s + i |green on left-downward-path>/s
non-linear crystal 1 is hit, splits green into (red + yellow) / s
|red on a>/2 + |yellow on a>/2 + i |green on left-downward-path>/s
hit frequency-specific-mirror D1 and bottom-left mirror
i |red on d>/s^2 + |yellow on c>/s^2 - |green on b>/s
interaction with O, which is either a detector or nothing at all
i |red on d>|O yes>/s^2 + |yellow on c>|O no>/s^2 - |green on b>|O no>/s
hit frequency-specific-mirror D2, and top-right mirror
-|red on b>|O yes>/s^2 + i |yellow on right-toward-BS2>|O no>/s^2 - |green on b>|O no>/s
hit non-linear crystal 2, which acts like NL1 for green but also splits red into red-yellow. Not sure how this one is unitary... probably a green -> [1, 1] while red -> [1, -1] thing so that's what I'll do:
-|red on f>|O yes>/s^3 + |yellow on e>|O yes>/s^3 + i |yellow on right-toward-BS2>|O no>/s^2 - |red on f>|O no>/s^2 - |yellow on e>|O no>/s^2
red is reflected away; call those "away" and stop caring about color:
|e>|O yes>/s^3 + i |right-toward-BS2>|O no>/2 - |e>|O no>/2 - |away>|O yes>/s^3 - |away>|O no>/s^2
yellows go through the beam splitter, only interferes when O-ness agrees.
|h>|O yes>/s^4 + i|g>|O yes>/s^4 + i |g>|O no>/s^3 - |h>|O no>/s^3 - |h>|O no>/s^3 - i|g>|O no>/s^3 - |away>|O yes>/s^3 - |away>|O no>/s^2
|h>|O yes>/s^4 + i|g>|O yes>/s^4 - |h>|O no>/s - |away>|O yes>/s^3 - |away>|O no>/s^2
~ 6% h yes, 6% g yes, 50% h no, 13% away yes, 25% away no
CONDITIONAL upon O not having been present, |O yes> is equal to |O no> and there's more interference before going to percentages:
|h>/s^4 + i|g>/s^4 - |h>/s - |away>/s^3 - |away>/s^2
|h>(1/s^4-1/s) + i|g>/s^4 - |away>(1/s^2 + 1/s^3)
~ 21% h, 6% g, 73% away
Ignoring the fact that I probably made a half-dozen repairable sign errors, what happens if we use this as a bomb tester on 200 bombs where a hundred of them are live but we don't know which? Approximately:
- 6 exploded bombs that triggered h
- 21 dud bombs that triggered h
- 50 live bombs that triggered h
- 6 exploded bombs that triggered g
- 6 dud bombs that triggered g
- 0 live bombs that triggered g
- 13 exploded bombs that triggered nothing
- 25 live bombs that triggered nothing
- 73 dud bombs that triggered nothing
So, of the bombs that triggered h but did not explode, 50/71 are live. Of the bombs that triggered g but did not explode, none are live. Of the bombs that triggered nothing but did not explode, 25/98 are live.
If we keep only the bombs that triggered h, we have raised our proportion of good unexploded bombs from 50% to 70%. In doing so, we lost half of the good bombs. We can repeat the test again to gain more evidence, and each time we'll lose half the good bombs, but we'll lose proportionally more of the dud bombs.
Therefore the camera works as a bomb tester.
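For anyone who wants to double-check the percentages, here's a small numeric sketch (it just squares the final amplitudes from the state progression above; s = sqrt(2)):

```python
import math

s = math.sqrt(2)

# Live bomb (O present): amplitude of each (detector, bomb) branch.
live = {
    ("h", "exploded"):    1 / s**4,
    ("g", "exploded"):    1 / s**4,
    ("h", "intact"):      1 / s,
    ("away", "exploded"): 1 / s**3,
    ("away", "intact"):   1 / s**2,
}

# Dud bomb (O absent): the |O yes> and |O no> branches interfere first.
dud = {
    "h":    1 / s**4 - 1 / s,
    "g":    1 / s**4,
    "away": 1 / s**2 + 1 / s**3,
}

for label, amps in (("live", live), ("dud", dud)):
    probs = {k: a ** 2 for k, a in amps.items()}
    print(label, round(sum(probs.values()), 6),  # totals ~1, so no amplitude got lost
          {k: f"{100 * p:.1f}%" for k, p in probs.items()})
# live -> ~6/6/50/13/25, dud -> ~21/6/73, matching the bomb counts above.
```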
Well...
The bomb tester does have a more stringent restriction than the camera. The framing of the problems is certainly different. They even have differing goals, which affect how you would improve the process (e.g. you can use Grover's search algorithm to make the bomb tester more effective, but I don't think it matters for the camera; maybe it would make it more efficient?)
BUT you could literally use their camera as a drop-in replacement for the simplest type of bomb tester, and vice versa. Both are using an interferometer. Both want to distinguish between something being in the way or not being in the way on one leg. Both use a detector that only fires when the photon "takes the other leg" and hits a detector that it counterfactually could not have if there was no obstruction on the sampling leg.
So I do think that calling the (current) camera a more involved version of the (basic) bomb tester makes sense and acts as a useful analogy.
I think this is just a more-involved version of the Elitzur-Vaidman bomb tester. The main difference seems to be that they're going out of their way to make sure the photons that interact with the object are at a different frequency.
The quantum bomb tester works by relying on the fact that the two arms interfere with each other to prevent one of the detectors from going off. But if there's a measure-like interaction on one arm, that cancelled-out detector starts clicking. The "magic" is that it can click even when the interaction doesn't occur. (I think the many worlds view here is that the bomb blew up in one world, creating differences that prevented it from ending up in the same state as, and thus interfering with, the non-bomb-blowing-up worlds.)
This is an attempt at a “plain Jane” presentation of the results discussed in the recent arxiv paper
... [No concrete example given] ...
Urgh...
- Write the password down on paper and keep that paper somewhere safe.
- Practice typing it in. Practice writing it down. Practice singing it in your head.
- Set things up so you have to enter it periodically.
A concrete example of a paper using the add-i-to-reflected-part type of beam splitter is the "Quantum Cheshire Cats" paper:
A simple way to prepare such a state is to send a horizontally polarized photon towards a 50:50 beam splitter, as depicted in Fig. 1. The state after the beam splitter is |Psi>, with |L> now denoting the left arm and |R> the right arm; the reflected beam acquires a relative phase factor i.
The figure from the paper:
I also translated the optical system into a similar quantum logic circuit:
Note that I also included the left-path detector they talk about later in the paper, and some read-outs that show (among other things) that the conditional probability of the left-path detector having gone off, given that D1 went off, is indeed 100%. (The circuit editor I fiddle with is here.)
It's notable that my recreation uses gates with a different global phase factor (the beam splitter's amplitudes are 1/2-i/2 and 1/2+i/2 instead of 1/sqrt(2) and i/sqrt(2)). It also ignores the mirrors that appear once on both paths. The effect is the same because global phase factors don't matter.
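A quick numeric check of that global-phase claim, comparing the transmitted/reflected amplitudes of the gate I used against the 1/sqrt(2), i/sqrt(2) convention:

```python
import cmath, math

mine  = (0.5 - 0.5j, 0.5 + 0.5j)               # transmitted, reflected in my circuit
paper = (1 / math.sqrt(2), 1j / math.sqrt(2))  # the usual beam splitter convention

ratios = [m / p for m, p in zip(mine, paper)]
print(ratios)                            # both entries are the same complex number
print([abs(r) for r in ratios])          # with modulus 1...
print([cmath.phase(r) for r in ratios])  # ...and phase -pi/4: a single global phase
```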
edit My ability to make sign errors upon sign errors is legendary and hopefully fixed.
Possible analogy: Was molding the evolutionary path of wolves, so they turned into dogs that serve us, unethical? Should we stop?
Wait, I had the impression that this community had come to the consensus that SIA vs SSA was a problem along the lines of "If a tree falls in the woods and no one's around, does it make a sound?"? It finds an ambiguity in what we mean by "probability", and forces us to grapple with it.
In fact, there's a well-upvoted post with exactly that content.
The Bayesian definition of "probability" is essentially just a number you use in decision making algorithms constrained to satisfy certain optimality criteria. The optimal number to use in a decision obviously depends on the problem, but the unintuitive and surprising thing is that it can depend on details like how forgetful you are and whether you've been copied and how payoffs are aggregated.
The post I linked gave some examples:
If Sleeping Beauty is credited a cumulative dollar every time she guesses correctly, she should act as if she assigns a probability of 1/3 to the coin having come up heads.
If Sleeping Beauty is given a dollar only if she guesses correctly in all cases, otherwise nothing, then she should act as if she assigns a probability of 1/2 to that proposition.
Other payoff structures give other probabilities. If you never recombine Sleeping Beauty, then the problem starts to become about whether or not she values her alternate self getting money and what she believes her alternate self will do.
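A quick expected-value check of those two cases, under the usual setup (fair coin, one awakening on heads, two on tails, same guess at every awakening):

```python
P_HEADS = 0.5

def per_awakening_value(guess_heads):
    # A dollar for every correct guess.
    return P_HEADS * 1 if guess_heads else (1 - P_HEADS) * 2

def per_experiment_value(guess_heads):
    # A dollar only if every guess in the experiment is correct.
    return P_HEADS * 1 if guess_heads else (1 - P_HEADS) * 1

print(per_awakening_value(True), per_awakening_value(False))    # 0.5 vs 1.0: bet as if P(heads) = 1/3
print(per_experiment_value(True), per_experiment_value(False))  # 0.5 vs 0.5: indifferent, as if P(heads) = 1/2
```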
I would be happy to prove my "faith" in science by ingesting poison after I'd taken an antidote proven to work in clinical trials.
This is one of the things James Randi is known for. He'll take a "fatal" dose of homeopathic sleeping pills during talks (e.g. his TED talk) as a way of showing they don't work.
I am pretty sick of 1% being given as the natural lowest error rate of humans on anything. It's not.
Hmm. Our error rate moment to moment may be that high, but it's low enough that we can do error correction and do better over time or as a group. Not sure why I didn't realize that until now.
(If the error rate was too high, error correction would be so error-prone it would just introduce more error. Something analogous happens in quantum error correction codes).
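A toy version of that threshold effect, using a majority vote over three independent tries that each fail with probability p:

```python
def majority_of_three_error(p):
    # The vote is wrong iff at least two of the three tries are wrong.
    return 3 * p**2 * (1 - p) + p**3

for p in (0.01, 0.1, 0.3, 0.5, 0.6):
    print(p, round(majority_of_three_error(p), 4))
# Below p = 0.5 the vote shrinks the error; above it, the "correction" adds error.
```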
Oh, so M is not a stock-market-optimizer, it's a verify-that-stock-market-gets-optimized-er.
I'm not sure how this differs from a person just asking the AI if it will optimize the stock market. The same issues with deception apply: the AI realizes that M will shut it off, so it tells M the stock market will totally get super optimized. If you can force it to tell M the truth, then you could just do the same thing to force it to tell you the truth directly. M is perhaps making things more convenient, but I don't think it's solving any of the hard problems.
It's extremely premature to leap to the conclusion that consciousness is some sort of unobservable opaque fact. In particular, we don't know the mechanics of what's going on in the brain as you understand and say "I am conscious". We have to at least look for the causes of these effects where they're most likely to be, before concluding that they are causeless.
People don't even have a good definition of consciousness that cleanly separates it from nearby concepts like introspection or self-awareness in terms of observable effects. The lack of observable effects goes so far that people posit they could get rid of consciousness and everything would happen the same (i.e. p-zombies). That is not an unassailable strength making consciousness impossible to study, it is a glaring weakness implying that p-zombie-style consciousness is a useless or malformed concept.
I completely agree with Eliezer on this one: a big chunk of this mystery should dissolve under the weight of neuroscience.
... wait, what? You can equate predicates of predicates but not predicates?!
(Two hours later)
Well, I'll be damned...
What are other examples of possible motivating beliefs? I find the examples of morals incredibly non-convincing (as in actively convincing me of the opposite position).
Here's a few examples I think might count. They aren't universal, but they do affect humans:
Realizing neg-entropy is going to run out and the universe will end. An agent trying to maximize average-utility-over-time might treat this as a proof that the average is independent of its actions, so that it assigns a constant eventual average utility to all possible actions (meaning what it does from then on is decided more by quirks in the maximization code, like doing whichever hypothesized action was generated first or last).
Discovering more fundamental laws of physics. Imagine an AI was programmed and set off in the 1800s, before anyone knew about quantum physics. The AI promptly discovers quantum physics, and then...? There was no rule given for how to maximize utility in the face of branching world lines or collapse-upon-measurement. Again the outcome might come down to quirks in the code, specifically how the mapping between the classical utilities and quantum realities is done (e.g. if the AI is risk-averse then its actions could differ based on whether it was using Copenhagen or Many-worlds).
Learning you're not consistent and complete. An agent built with an axiom that it is consistent and complete, and the ability to do proof by contradiction, could basically trash its mathematical knowledge by proving all things when it finds the halting problem / incompleteness theorems.
Discovering an opponent that is more powerful than you. For example, if an AI proved that Yahweh, god of the old testament, actually existed then it might stop mass-producing paperclips and start mass-producing sacrificial goats or prayers for paperclips.
For instance, if anything dangerous approached the AIXI's location, the human could lower the AIXI's reward, until it became very effective at deflecting danger. The more variety of things that could potentially threaten the AIXI, the more likely it is to construct plans of actions that contain behaviours that look a lot like "defend myself." [...]
It seems like you're just hardcoding the behavior, trying to get a human to cover all the cases for AIXI instead of modifying AIXI to deal with the general problem itself.
I get that you're hoping it will infer the general problem, but nothing stops it from learning a related rule like "Human sensing danger is bad". Since humans are imperfect at sensing danger, that rule will better predict what's happening compared to the actual danger you want AIXI to model. Then it removes your fear and experiments with nuclear weapons. Hurray!
Anthropomorphically forcing the world to have particular laws of physics by more effectively killing yourself if it doesn't seems... counter-productive to maximizing how much you know about the world. I'm also not sure how you can avoid disproving MWI by simply going to sleep, if you're going to accept that sort of evidence.
(Plus quantum suicide only has to keep you on the border of death. You can still end up as an eternally suffering almost-dying mentally broken husk of a being. In fact, those outcomes are probably far more likely than the ones where twenty guns misfire twenty times in a row.)
I find Eliezer's insistence about Many-Worlds a bit odd, given how much he hammers on "What do you expect differently?". Your expectations from many-worlds are identical to those from pilot-wave, so....
I'm probably misunderstanding or simplifying his position, e.g. there are definitely calculational and intuition advantages to using one vs the other, but that seems a bit inconsistent to me.
Is there an existing post on people's tendency to be confused by explanations that don't include a smaller version of what's being explained?
For example, confusion over the fact that "nothing touches" in quantum mechanics seems common. Instead of being satisfied by the fact that the low-level phenomena (repulsive forces and the Pauli exclusion principle) didn't assume the high-level phenomena (intersecting surfaces), people seem to want the low-level phenomena to be an aggregate version of the high-level phenomena. Explaining something without using it is one of the best properties an explanation can have, but people are somehow unsatisfied by such explanations.
Other examples of "but explain(X) doesn't include X!": emotions from biology, particles from waves, computers from solid state physics, life from chemistry.
More controversial examples: free will, identity, [insert basically any other introspective mental concept here].
Examples of the opposite: any axiom/assumption of a theory, billiard balls in Newtonian mechanics, light propagating through the ether, explaining a bar magnet as an aggregation of atom-sized magnets, fluid mechanics using continuous fields instead of particles, love from "God wanted us to have love".
Mario, for instance, once you jump, there's not much to do until he actually lands
Mario games let you change momentum while jumping, to compensate for the lack of fine control on your initial speed. This actually does matter a lot in speed runs. For example, Mario 64 speedruns rely heavily on a super fast backwards long jump that starts with switching directions in the air.
A speed run of real life wouldn't start with you eating lunch really fast, it would start with you sprinting to a computer.
In the examples you show how to run the opponent, but how do you access the source code? For example, how do you distinguish a cooperate bot from a (if choice < 0.0000001 then Defect else Cooperate) bot without a million simulations?
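(To make the comparison concrete, here's roughly what I mean by the two bots, with choice read as a fresh random number; this is just an illustrative sketch:

```python
import random

def cooperate_bot(opponent_source):
    return "Cooperate"

def nearly_cooperate_bot(opponent_source):
    # Behaviourally almost identical: defects about one run in ten million.
    return "Defect" if random.random() < 0.0000001 else "Cooperate"
```

Telling these apart by simulation alone takes on the order of ten million runs; reading the source is what makes it cheap.)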
That sounds like what I expected. Have any links?
Is it expected that electrically disabling key parts of the brain will replace anesthetic drugs?
Whoops, box B was supposed to have a thousand in both cases.
I did have in mind the variant where Omega picks the self-consistent case, instead of using only the box A prediction, though.
Yes, the advantage comes from being hard to predict. I just wanted to find a game where the information denial benefits were counterfactual (unlike poker).
(Note that the goal is not perfect indistinguishability. If it was, then you could play optimally by just flipping a coin when deciding to bet or call.)
The variant with the clear boxes goes like so:
You are going to walk into a room containing two boxes, A and B, both transparent. Their contents will be visible, and you can either take both boxes or just box A.
Omega, the superintelligence from another galaxy that is never wrong, has predicted whether you will take one box or two boxes. If it predicted you were going to take just box A, then box A will contain a million dollars and box B will contain a thousand dollars. If it predicted you were going to take both, then box A will be empty and box B will contain a thousand dollars.
If Omega predicts that you will purposefully contradict its prediction no matter what, the room will contain hornets. Lots and lots of hornets.
Case 1: You walk into the room. You see a million dollars in box A. Do you take both, or just A?
Case 2: You walk into the room. You see no dollars in box A. Do you take both, or just A?
If Omega is making its predictions by simulating what you would do in each case and picking a self-consistent prediction, then you can eliminate case 2 by leaving the thousand dollars behind.
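To spell out that self-consistency logic, here's a sketch that enumerates the four possible policies (what you do when box A is visibly full versus visibly empty), assuming Omega simulates both cases:

```python
from itertools import product

def consistent_predictions(take_if_full, take_if_empty):
    preds = []
    if take_if_full == "one-box":    # predicting one-box fills A, and you'd confirm it
        preds.append("one-box")
    if take_if_empty == "two-box":   # predicting two-box empties A, and you'd confirm it
        preds.append("two-box")
    return preds

payoff = {"one-box": "$1,000,000", "two-box": "$1,000"}
for policy in product(["one-box", "two-box"], repeat=2):
    preds = consistent_predictions(*policy)
    outcome = " or ".join(payoff[p] for p in preds) if preds else "hornets"
    print(policy, "->", outcome)
# Only the take-just-A-no-matter-what policy leaves "one-box" as Omega's sole
# consistent prediction, which is what eliminates case 2.
```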
edit Fixed not having a thousand in box B in both cases.
Thanks for the clarification. I removed the game tree image from the overview because it was misleading readers into thinking it was the entirety of the content.
Alright, I removed the game tree from the summary.
The -11 was chosen to give a small, but not empty, area of positive-returns in the strategy space. You're right that it doesn't affect which strategies are optimal, but in my mind it affects whether finding an optimal strategy is fun/satisfying.
You followed the link? The game tree image is a decent reference, but a bad introduction.
The answer to your question is that it's a zero sum game. The defender wants to minimize the score. The attacker wants to maximize it.
Sam Harris recently responded to the winning essay of the "moral landscape challenge".
I thought it was a bit odd that the essay wasn't focused on the claimed definition of morality being vacuous. "Increasing the well-being of conscious creatures" is the sort of answer you get when you cheat at rationalist taboo. The problem has been moved into the word "well-being", not solved in any useful way. In practical terms it's equivalent to saying non-conscious things don't count and then stopping.
It's a bit hard to explain this to people. Condensing the various inferential leaps into a single post might make a useful post. On the other hand it's just repackaging what's already here. Thoughts?
Arguably the university's NAT functioned as intended. They did not provide you with internet access for the purpose of hosting games, even if they weren't actively against it.
The NAT/firewall was there for security reasons, not to police gaming. This was when I lived in residence, so gaming was a legitimate recreational use.
Endpoints not being able to connect to each other makes some functionality costly or impossible. For example, peer to peer distribution systems rely on being able to contact cooperative endpoints. NAT makes that a lot harder, meaning plenty of development and usability costs.
A more mundane example is multiplayer games. When I played Warcraft 3, I had lots of issues testing maps I made because no one could connect to games I hosted (I was behind a university NAT, out of my control). I had to rely on sending the map to friends and having them host.