Posts

Comments

Comment by memeticimagery on on the dollar-yen exchange rate · 2024-04-11T16:52:33.278Z · LW · GW

State level actors don't want rapid disruption to the worldwide socioeconomic order. 

Comment by memeticimagery on on the dollar-yen exchange rate · 2024-04-09T01:09:26.680Z · LW · GW

Would slightly better remote work tech lead to a complete overturn of the world labor market?

Does the complete overturn of the world labor market strike you as something various people/institutions in those countries would want? Inertia, in this context, is surely the desired state.

Comment by memeticimagery on The Altman Technocracy · 2024-02-16T22:47:08.486Z · LW · GW

I suspect a lot of people here think that an (unusually powerful) human technocracy is the least we have to worry about. 

Comment by memeticimagery on AI Safety is Dropping the Ball on Clown Attacks · 2023-10-22T16:18:42.935Z · LW · GW

Scrolling down this almost stream-of-consciousness post against my better judgement, unable to look away, perfectly mimicked scrolling social media. I am sure you did not intend it, but I really liked that aspect. 

Loads of good ideas in here. Generally, I think modelling the alphabet agencies is much more important than discussion on LW implies. "Clown attack" is a great term, although I'm not entirely sure how much the personal-prevention layer really helps the AI safety community, because clown attacks by nature seem like a blunt tool you can apply to the public at large to discredit groups. So what primarily matters is the public's vulnerability to these clown attacks, and that is much harder to change. 

Comment by memeticimagery on The UAP Disclosure Act of 2023 and its implications · 2023-07-25T20:02:28.587Z · LW · GW

Wow, this really pulls together a lot of disparate ideas I've had at various times about the topic, but wouldn't have summarised nearly as well. A note on the psyops point: if UAP are non-human intelligence, then we should still expect (a lot of) disinformation on the topic. Not just as a matter of natural overlap but as a matter of incentive: it is reasonable to assume that muddying the waters with disinfo, making everyone out to be a 'crackpot' that Stephen Hawking can dismiss, would be a viable strategy for covering up the reality. Real issues are used for psyops all the time; that does not mean they do not exist. Russian troll farms capitalise on race issues in America or culture-war topics; it doesn't mean the issues themselves are entirely fabricated. 

Comment by memeticimagery on The UAP Disclosure Act of 2023 and its implications · 2023-07-23T14:22:14.979Z · LW · GW

Your post implies/states that would be a kind of straightforward explanation, but I'm not sure it would be. For one, ball lightning being not only much more common than previously thought (which it would need to be to also explain UFOs) but also having a hallucination component would both be quite startling if true. 

Secondly, there are aspects ball lightning cannot explain. What are we to make of the recent addition of "USOs", for instance? Unidentified Submerged Objects have consistently been part of this recent narrative, sometimes having been UFOs beforehand. Further, it explains people seeing actual 'craft' only unsatisfactorily. Why would it consistently produce a hallucination where people see saucer-shaped UFOs? Why not mundane craft? Why not something even more unbelievable?

Thirdly, stacking an additionally wild psyop on top of it only makes it less mundane. It would be a big story if it were confirmed that aspects of the intelligence community were deliberately running alien psyops on their own military. 

Comment by memeticimagery on The UAP Disclosure Act of 2023 and its implications · 2023-07-22T18:04:51.513Z · LW · GW

It is a bizarre situation, but I think I disagree with you about the most likely prosaic explanation. Increasingly, especially with the latest events, the psyop explanation seems relatively better than the 'politicians are just fools' one. The reason is that politicians with higher clearances (and so more data) than us are making stronger and stronger public commitments to taking UAP=ET seriously. That suggests to me there is a credible combination of evidence and people that has led them there. Further, the claims being made are so extreme it seems impossible to kind of stumble your way into them by accident. It seems more likely that someone, somewhere, likely within the military-industrial complex, would have to be pushing these claims as the origin point. 

That is my takeaway so far: the most prosaic explanation is weakening in relative terms, but interestingly I haven't seen anything from the debunking/skeptic community that reflects this.

Comment by memeticimagery on I still think it's very unlikely we're observing alien aircraft · 2023-06-15T14:46:47.677Z · LW · GW

But is it necessarily unlikely that they would be screwing with us if they existed? That's something I don't like about the bigfoot comparison: it's obviously laughable that large apes are evading camera detection at every turn, but for aliens, presumably it would be trivial to do so. We know that they would have the means, so that only leaves the motivation to do this. I also don't necessarily agree with the assumption that our commercial sensor tech is good enough to detect hypothetical aliens. Try filming a drone from a distance with your phone. It will look surprisingly unclear. Modern cameras are obviously more than adequate to film a bigfoot, but I don't think so for aliens; the sky is big, etc. 

What I didn't get from your post is how the prosaic sensor anomalies, atmospheric oddities, statistical artifacts, etc. lend themselves to explaining the much more zany claims that are now coming out of intelligence and intelligence-connected people. It doesn't seem to explain someone claiming there are actual recovered bodies/craft, at all. My take is that the psyops/disinfo angle you wrote off becomes much more likely. 

Comment by memeticimagery on Intelligence Officials Say U.S. Has Retrieved Craft of Non-Human Origin · 2023-06-06T21:06:07.020Z · LW · GW

So Grusch is another one of these Pentagon UAP investigatory program guys, which means he is claiming that people have come to him from the compartmentalised Special Access Programs claiming they have recovered craft. That is important because, unless he is saying somewhere that he personally witnessed these craft, it is perfectly possible he fully believes his claim and is telling the truth in that yes, someone has come to him with these claims. Unfortunately, I suspect whoever these first-hand sources are will be shrouded entirely in classified red tape. I agree that at this point a sophisticated and extensively planned hoax/disinfo operation is becoming increasingly likely, and if I were totally sceptical I think that would be the best explanation. However, I don't intuitively think aliens/non-human intelligence is highly unlikely, because I don't trust my or others' ability to model possible behaviours. For instance, take the idea that this is all clearly nonsense because super-advanced aliens wouldn't have malfunctioning/rudimentary craft at all; it could be that this is simply a form of interaction/manipulation. 

Comment by memeticimagery on Where is all this evidence of UFOs? · 2023-05-01T20:46:56.300Z · LW · GW

The best evidence that addresses both your claims would probably come from the military, since they have both state-of-the-art sensors and reliable witnesses. The recent surge in UFO coverage is almost all related to branches of the military (mostly Navy?), so the simple explanation is that it's classified to varying degrees. My understanding is that there is the publicly released material, which is somewhat underwhelming, then some evidence Congress and the like has seen during briefings, and then probably more hush-hush material above that for non-civilians. The members of Congress who were briefed seem to have continued making noise on the topic, so presumably there is more convincing evidence not yet public. 

I have no idea where Hanson got those figures from, but from your post it seems like you would be able to rule most civilian sightings out anyway, because there is no such thing as a perfectly reliable human witness, and to date the camera and sensor quality available to the average person is actually pretty poor (especially compared to government/military hardware).

Comment by memeticimagery on All AGI Safety questions welcome (especially basic ones) [April 2023] · 2023-04-09T00:41:55.432Z · LW · GW

I should have clarified a bit: I was using the term 'military industrial complex' to try to home in on the much more technocratic underbelly of the American Defence/Intelligence community or private contractors. I don't have any special knowledge of the area, so forgive me, but essentially DARPA and the like, or any agency with a large black budget. 

Whatever they are doing does not need to have any connection to whatever the public-facing government says in press briefings. It is perfectly possible that right now a priority for some of these agencies is funding a massive AI project while the WH laughs off AI safety; that is how classified projects work. It illustrates the problem a bit, actually, in that the entire system is set up to cover things up for national defence, in which case having a dialogue about AI risk is virtually impossible. 

Comment by memeticimagery on All AGI Safety questions welcome (especially basic ones) [April 2023] · 2023-04-08T20:41:00.973Z · LW · GW

Why is there so little mention of the potential role of the military industrial complex in developing AGI, rather than a public AI lab? The money is available, as are the will and the history (ARPANET was the precursor to the internet). I am vaguely aware there isn't much to suggest the MIC is on the cutting edge of AI, but there wouldn't be if it were all black-budget projects. If that is the case, it presumably implies a very difficult situation, because the broader alignment community would have no idea when crucial thresholds were being crossed. 

Comment by memeticimagery on LW Team is adjusting moderation policy · 2023-04-05T14:53:00.366Z · LW · GW

Disclaimer: I myself am a newer user from last year.

I think trying to change downvoting norms and behaviours could help a lot here and save you some workload on the moderation end. Generally, poor-quality posters will leave if you ignore and downvote them. Recently there has been an uptick in these posts, and of the ones I have seen, many are upvoted and engaged with. To me, that says users here are too hesitant to downvote. Of course, that raises the question of how to do that, and whether doing so is undesirable because it will broadly repel many new users, some of whom will not be "bad". Overall, though, I think encouraging existing users to downvote should help keep the well-kept garden. 

Comment by memeticimagery on "Dangers of AI and the End of Human Civilization" Yudkowsky on Lex Fridman · 2023-03-30T23:37:19.566Z · LW · GW

No, that was just a joke Lex was making. I don't know the exact timestamps, but in most of the instances where he was questioned on his own positions or estimations of the situation, Lex seemed uncomfortable to me, including the alien civilisation example. At one point I recall actually switching to the video, and Lex had his head in his hands, which body-language-wise is pretty universally a desperate pose. 

Comment by memeticimagery on "Dangers of AI and the End of Human Civilization" Yudkowsky on Lex Fridman · 2023-03-30T19:46:35.356Z · LW · GW

There were definitely parts where I thought Lex seemed uncomfortable, not just limited to specific concepts but when questions got turned around a bit towards what he thought. Lex started podcasting very much in the Joe Rogan sphere of influence, to the extent that I think he uses a similar style, which is very open and lets the other person speak and have a platform, but perhaps at the cost of being a bit wishy-washy. Nevertheless, it's a huge podcast with a lot of reach. 

Comment by memeticimagery on Will people be motivated to learn difficult disciplines and skills without economic incentive? · 2023-03-20T17:41:36.716Z · LW · GW

This is why I don't place much confidence in projections about how the population will be affected by TAI from people like Sam Altman either. You have to consider that they are very likely to be completely out of touch with the average person, and so have absolutely terrible intuitions about how people respond to anything, let alone about forecasting the long-term implications of TAI for them. If you got some normal people together and made sure they took the proposition of TAI and everything it entails seriously (such as widespread joblessness), I suspect you would encounter a lot more fear/apprehension around the kinds of behaviours and ways of living it is going to produce. 

Comment by memeticimagery on GPT-4: What we (I) know about it · 2023-03-15T22:38:01.906Z · LW · GW

I think what stands out to me the most is big tech/big money now getting involved seriously. That has a lot of potential for acceleration just because of the funding implications. I frequent some financial/stock websites and have noticed AI become not just a major buzzword, but even the subject of sentiments along the lines of 'AI could boost productivity and offset a potential recession in the near future'. The rapid release of LLMs seems to have jump-started public interest in AI; what remains to be seen is how that interest manifests. I am personally unsure if it will mostly be caution and regulation, panic, or the opposite. The way things are nowadays, I guess there will be a significant fraction of the public completely happy with accelerating AI capabilities and angry at anyone who disagrees.

Comment by memeticimagery on An AI risk argument that resonates with NYTimes readers · 2023-03-13T20:51:46.514Z · LW · GW

I think it may be necessary to accept that, at first, there may need to be a stage of general AI wariness within public opinion before AI safety and specific facets of the topic are explored. In a sense, the public has not yet fully digested the 'AI is a serious risk' message, or perhaps even the 'AI will be transformative to human life in the relatively near term' one. I don't think that is a phase that can simply be skipped, and it will probably be useful to get as many people as possible broadly on board before the more specific messaging, because if they are not, they will reject your messaging immediately, perhaps becoming further entrenched in the process. 

If this is the case, then right now sentiments along the lines of general anxiety about AI are not too bad, or at least they are better than dismissive sentiment.

Comment by memeticimagery on What problems do African-Americans face? An initial investigation using Standpoint Epistemology and Surveys · 2023-03-12T21:34:46.450Z · LW · GW

I had never heard of Standpoint Epistemology prior to this post, but have encountered plenty of thinking that seems similar to what it espouses. One thing I cannot figure out at all is how this functionally differs from surveying a specific demographic on an issue. How, exactly, is whatever this is more useful? In fact, to me it seems likely to be functionally worse than a survey: the sample size is small and there is absolutely no control group. As someone else pointed out, we don't get any sense of what any other group would respond with given the same questions. 

I don't really have an issue with the proposition that there is value in considering different groups' experiences. What I do have an issue with is why it seems bound to devolve into a myopic consideration of a very small number of people's experiences.

Comment by memeticimagery on What fact that you know is true but most people aren't ready to accept it? · 2023-02-03T23:35:17.070Z · LW · GW

I'm not sure about 75%, but it is an interesting subject, and I do think the consensus view is slightly too sceptical. I don't have any expertise, but one thing that always sticks out to me as decreasing the likelihood of bigfoot's existence is the lack of remains. OK, I buy that encounters could be rare enough that there hasn't been one since the advent of the smartphone. But where are the skeletons? Is part of the claim that they might have some type of burial grounds? Very remote territory they stick to without exception? 

Comment by memeticimagery on [deleted post] 2023-01-24T17:56:43.875Z

I don't think AI in the long run will be comparable to events like the Industrial Revolution (or anything, historically), because AI will be less tool-like and more agent-like in my view. That is not a situation with any historical precedent. A famous investor, Ray Dalio, made a point along the lines that recessions, bubbles, etc. (but really any rare-ish economic event) are incredibly hard to model because the length of time the economic system has existed is relatively short, so we don't have that large a sample size. That point can be extrapolated to this situation almost exactly. Technological revolutions are incredibly rare, and we do not have enough of them to find two that are very similar. I don't think AI is going to be like anything that came before, and I don't see why the economic system would be durable against shocks like it.

Comment by memeticimagery on [deleted post] 2023-01-23T18:10:53.844Z

Transformative AI will demand a rethink of the entire economic system. The world economy is based on an underlying assumption that most humans are essentially capable of being productive in some real way that generates value. Once that concept is eroded, and my intuition is that it will only take a surprisingly small percentage of people being rendered unproductive, some form of redistribution will probably be required. Rather than designing this system so that 'basic' needs are provided/paid for, I think a percentage of the output from AI gains should be redistributed. After all, defining 'basic' needs is hard: people have drastically different definitions that change over time and location. The basic needs of someone in 1000 AD are obviously not the same as in 2023. 

If this approach is used, it also lessens popular anxiety about AI: people receive direct benefits that scale with its success. Most likely, the hardest part of all this is changing an economic system that is highly entrenched.

Comment by memeticimagery on A general comment on discussions of genetic group differences · 2023-01-14T19:15:53.906Z · LW · GW

Cultural differences explain racial differences a lot better than genetics, at least for now.

Where is the evidence for this? I am not really well versed in this topic, but am under the impression that if this were true it would be heavily promoted/reported on. 

Comment by memeticimagery on A general comment on discussions of genetic group differences · 2023-01-14T19:08:25.214Z · LW · GW

To me it seems like the current zeitgeist is just not up to addressing this question without being almost entirely captured by bad-faith actors on both sides, and therefore causing some non-trivial social unrest. There might be some positive gain to be had from changes in policy, depending on the reality of genetic differences in IQ, but policy makers would have to be much more nuanced and capable than they appear to be. Even if this were possible, it would have to be weighed against the social aspects.

Comment by memeticimagery on Protectionism will Slow the Deployment of AI · 2023-01-08T00:11:14.548Z · LW · GW

Your last point seems to agree with point 7e becoming reality, where the US govt essentially allows existing big tech companies to pursue AI within certain 'acceptable' confines they think of at the time. In that case, how much AI might be slowed is entirely dependent on how tight a leash they put them on. I think that scenario is actually quite likely, given that I am sure there is considerable overlap between US alphabet agencies and sectors of big tech. 

Comment by memeticimagery on Investing for a World Transformed by AI · 2023-01-03T03:43:54.632Z · LW · GW

I like this post a lot, partially because I think it is an underdiscussed area, and partially because it expands beyond the obvious semiconductor-type companies. One thing I would add is that, with almost no technology advancement, existing and soon-to-exist LLMs might make investing in social media (and internet-adjacent) companies much more volatile. As far as I can see, the bot problem for these companies should only get worse and worse as AI can more perfectly mimic a real user. This could lead to a kind of dead-internet scenario where real humans are somewhat drowned out by noise, with grave implications for whatever stocks are involved, which of course depend on ad revenue. The reason I say more volatile is that there is obviously also upside for these same companies in utilising AI themselves further. My views on it are fuzzy, though, and it could go either way. 

Comment by memeticimagery on The Dumbest Possible Gets There First · 2022-08-14T15:00:05.737Z · LW · GW

Assuming this were the case, wouldn't it actually imply slightly more optimistic long-term odds for humanity? A world where AI development actually resembles something like natural evolution and (maybe) throws up red flags that generate interest in solving alignment would be good, no?

I worry that the strategies we might scrounge up to avoid them will be of the sort that are very unlikely to generalise once the superintelligence risks do eventually rear their heads

OK, sure, but extra resources and attention are still better than none. 

Comment by memeticimagery on Two-year update on my personal AI timelines · 2022-08-03T19:37:34.765Z · LW · GW
  • I’m somewhat surprised that I haven’t seen more vigorous commercialization of language models and commercial applications that seem to reliably add real value beyond novelty; this is some update toward thinking that language models are less impressive than they seemed to me, or that it’s harder to translate from a capable model into economic impact than I believed.

Minor point here, but I think this has less to do with the potential commercial utility of LLMs and more to do with the reticence of large tech companies to publicly release an LLM that poses a significant risk of social harm. My intuition is that, in comparison with people on LW, the higher-ups at the likes of Google are relatively more worried about those risks and the associated potential PR disaster. Entirely safety-proofing an LLM in that way seems like it would be incredibly difficult as well as subjective, and may greatly slow the release of such models.