Posts

rhollerith_dot_com's Shortform 2022-01-21T02:13:20.810Z
One Medical? Expansion of MIRI? 2014-03-18T14:38:23.618Z
Computer-mediated communication and the sense of social connectedness 2011-03-18T17:13:32.203Z
LW was started to help altruists 2011-02-19T21:13:00.020Z

Comments

Comment by RHollerith (rhollerith_dot_com) on Statement on AI Extinction - Signed by AGI Labs, Top Academics, and Many Other Notable Figures · 2024-03-19T02:40:29.518Z · LW · GW

The statement does not mention existential risk, but rather "the risk of extinction from AI".

Comment by RHollerith (rhollerith_dot_com) on Can any LLM be represented as an Equation? · 2024-03-15T15:11:02.319Z · LW · GW

Any computer program can be represented in the form of an equation. Specifically, you define a function named step such that step(s, input) = (s2, output), where s and s2 are "states", i.e., mathematical representations of the RAM, cache and registers.

To run the computer program, you apply step to some starting state, yielding (s2, output), then you apply step to s2, yielding (s3, output2), then apply step to s3, and so on for billions of repetitions.

Another reply to your question asserts that equations cannot handle non-determinism. Untrue. To handle it, all we need to do is add another argument to step, say rand, that describes the non-deterministic influences on the program. This is routinely done in formalisms for modelling causality, e.g., the structural equation models used in economics.
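Here is a minimal sketch in Python of the construction I mean (the dict-based state and the toy counter program are illustrative stand-ins of my own; a real step function would encode the semantics of an actual CPU):

```python
import random

# A "state" here is just a dict standing in for the RAM, cache and registers.
def step(s, inp, rand):
    # One clock tick: step(s, input, rand) = (s2, output).
    s2 = dict(s)                         # successor state
    s2["counter"] = s["counter"] + 1
    output = (s["counter"], inp, rand)   # output may depend on state, input and noise
    return s2, output

# Running the program = iterating the equation, here for five steps
# rather than billions.
state = {"counter": 0}
for t in range(5):
    state, out = step(state, inp=t, rand=random.random())
    print(out)
```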

So, in summary, your question has some implicit assumptions that would need to be made explicit before I can answer.

Comment by RHollerith (rhollerith_dot_com) on Acting Wholesomely · 2024-02-27T05:34:34.922Z · LW · GW

I hope owencb won't let this prevent him from continuing to post on this topic.

Comment by RHollerith (rhollerith_dot_com) on Weighing reputational and moral consequences of leaving Russia or staying · 2024-02-23T02:17:23.842Z · LW · GW

One of the reasons I think demographic factors aren't as important over the long term as some say they are is that an estimated 50% of the population of Europe died (from the plague) between 1346 and 1353, and yet not long afterwards Europe started pulling ahead of the rest of the world, with Gutenberg introducing the movable-type printing press around 1450 and the Renaissance having spread throughout Europe by 1500. Admittedly, the Renaissance's beginnings (in Italy) predate the plague, but the point is that the loss of about 50% of the population did not prevent those beginnings from spreading to the rest of Europe and did not prevent Europe from becoming the most influential region of the world: witness the European discovery of the New World in 1492, the first circumnavigation of the globe by Magellan's expedition between 1519 and 1522, and the scientific and political advances of the European Renaissance, e.g., the empirical method and liberalism, which have tremendous global influence to the present day.

And even if I were not able to cite the example I just cited, the track record of the class of experts (geographers, political scientists) who maintain that demographics is national destiny is that they are often wrong. Less Wrong does well in general, but it could do better at resisting false information and information of unknown truth value that gets repeated over and over on the internet.

Comment by RHollerith (rhollerith_dot_com) on Weighing reputational and moral consequences of leaving Russia or staying · 2024-02-21T22:47:43.192Z · LW · GW

>My guess is that banning birth control is not in the Overton window in Russia.

You make a good point. I think, however, that these demographic factors aren't as important to a country's long-term fate as many recent commentators say they are.

Comment by RHollerith (rhollerith_dot_com) on Weighing reputational and moral consequences of leaving Russia or staying · 2024-02-20T22:25:04.826Z · LW · GW

>Probably the only feasible way of fixing the demography is mass immigration.

Banning birth control would be another feasible way in my estimation for any government that can survive the resentment caused by the ban -- and the Kremlin probably could survive it.

Comment by RHollerith (rhollerith_dot_com) on Weighing reputational and moral consequences of leaving Russia or staying · 2024-02-20T22:21:45.385Z · LW · GW

The Russian regime is not that bad. It has, for example, caused fewer civilian deaths in Ukraine since 2014 than the Israeli regime has caused in a few months in Gaza -- and Ukraine has 74 times as many people as Gaza does.

If Texas left the United States and started accepting military aid from China, as an American, I would probably want my government to force Texas back into the United States even if that requires invading Texas and causing many casualties.

Also, even if the reputation of the Russian government remains poor, Westerners will continue to treat individual Russians okay, because of the fairly strong ethic in the West, and especially in the US, under which every individual should be judged on their own merits, not by the group they belong to, particularly if they were born into the group. I cannot imagine, for example, your staying in Russia significantly reducing the probability of your winning a grant from a Western organization to work on AI safety or effective altruism, with the one exception that the organization might be prohibited by sanctions imposed by Western governments from sending money into Russia.

On the other hand, Stockholm syndrome is a powerful cognitive bias which might bias you against emigrating.

Comment by RHollerith (rhollerith_dot_com) on mike_hawke's Shortform · 2024-02-16T21:00:02.646Z · LW · GW

I drink it or more precisely mix it with my bowl of beans and veggies.

Comment by RHollerith (rhollerith_dot_com) on OpenAI wants to raise 5-7 trillion · 2024-02-09T19:20:31.723Z · LW · GW

My $300 against your $30 that OpenAI's next round of funding raises less than $40 billion of investor money. In case OpenAI raises no investment money in the next 36 months, the bet expires. Any takers?

I make the same offer with "OpenAI" replaced with "any startup with Sam Altman as a founder".

I'm retracting this comment and these 2 offers. I still think a raise of $40 billion or more is very unlikely, but I realized that I would be betting against people who are vastly more efficient than I am at using the internet to access timely information, and I might even be betting against people with private information from Altman or OpenAI.

Someone should publicly bet like I did as an act of altruism to discourage this unprofitable line of inquiry, but I'm not feeling that altruistic today.

Comment by RHollerith (rhollerith_dot_com) on OpenAI wants to raise 5-7 trillion · 2024-02-09T19:15:00.526Z · LW · GW
Comment by RHollerith (rhollerith_dot_com) on Why have insurance markets succeeded where prediction markets have not? · 2024-01-24T18:36:41.373Z · LW · GW

Reading this makes me suspect that increasing the scope (range of applicability) of insurance or of “derivative” contracts (e.g., options and futures) is a more potent way to improve the world than promoting prediction markets.

Comment by RHollerith (rhollerith_dot_com) on The case for training frontier AIs on Sumerian-only corpus · 2024-01-15T21:03:56.798Z · LW · GW

It is good to see people thinking creatively, but a frontier model that becomes superhuman at physics and making plans that can survive determined human opposition is very dangerous even if it never learns how to read or understand any human language.

In other words, being able to interact verbally with humans is one avenue by which an AI can advance dangerous plans, but not the only avenue. (Breaking into computers would be another avenue where being able to communicate with humans might be helpful, but certainly not necessary.)

So, do you have any ideas on how to ensure that your Sumerian-schooled frontier model doesn't become superhuman at physics or at breaking into computers?

Comment by RHollerith (rhollerith_dot_com) on Commonwealth Fusion Systems is the Same Scale as OpenAI · 2024-01-13T00:08:20.660Z · LW · GW

OpenAI raised money recently at a valuation of $100 billion whereas the last time Commonwealth Fusion raised money it did so at a valuation of "$7.2—10.8b (Dealroom.co estimates Dec 2021)". Also, OpenAI is only one of dozens of well-funded organizations in the space.

Source of the latter fact: https://app.dealroom.co/companies/commonwealth_fusion_systems

Comment by RHollerith (rhollerith_dot_com) on Theoretically, could we balance the budget painlessly? · 2024-01-04T00:32:52.056Z · LW · GW

If the supply of government bonds is reduced, most of the money that would've gone into gov bonds will go into other investments (because if you want to, e.g., save for retirement, the unavailability of gov bonds is not going to stop you from finding some way to save for retirement). Investment has a different effect on the economy than consumption (e.g., choosing to have a kid) does, and your policy proposal replaces consumption with (private) investment.

Comment by RHollerith (rhollerith_dot_com) on Why does expected utility matter? · 2023-12-26T16:08:42.779Z · LW · GW

No, it does not imply constancy or consistency over time, because the 4 axioms do not stop us from adding to the utility function a real-valued argument that represents the moment in time that the definition refers to.

In other words, the 4 axioms do not constrain us to consider only utility functions over world states: utility functions over "histories" are also allowed, where a "history" is a sequence of world states evolving over time (or equivalently a function that takes a number representing an instant in time and returns a world state).
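To put that in symbols (my own notation, nothing canonical): let $T$ be the set of instants and $S$ the set of world states, so a history is a function $h : T \to S$. The axioms then permit a utility function of type

$$U : (T \to S) \to \mathbb{R}$$

rather than only the narrower $U : S \to \mathbb{R}$.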

Comment by RHollerith (rhollerith_dot_com) on Why is capnometry biofeedback not more widely known? · 2023-12-22T05:57:52.947Z · LW · GW

>Does this mean that a cheap "pseudo-capnometer" can be created which . . . ?

I doubt it, but don't know for sure because I don't know anything about the mechanisms by which people outgas the VOCs.

Comment by RHollerith (rhollerith_dot_com) on Why is capnometry biofeedback not more widely known? · 2023-12-22T01:21:24.622Z · LW · GW

The $100 CO2 monitors do not measure CO2; they measure VOCs, which in typical home and office settings closely correlate with CO2 (because humans emit both at a relatively constant rate, and humans are the main source of both in typical home and office settings).

Comment by RHollerith (rhollerith_dot_com) on Why is capnometry biofeedback not more widely known? · 2023-12-22T01:19:09.705Z · LW · GW

>I typically see 35-45 mmHg of partial pressure carbon dioxide being cited as the good range

That's about 5% (since atmospheric pressure is about 760 mmHg), or 50,000 ppm. Being in a room with that high a concentration of CO2 is immediately dangerous to life or health, which is a good illustration of the fact that this post (your post) is about CO2 in exhaled air, which is distinct from CO2 in inhaled or ambient air, where for example 5,000 ppm of CO2 "is the permissible exposure limit for daily workplace exposures" (source).
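Spelling out the arithmetic, taking 40 mmHg as the midpoint of the quoted range:

$$\frac{40\ \text{mmHg}}{760\ \text{mmHg}} \approx 0.053 \approx 5\% \approx 50{,}000\ \text{ppm}$$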

Comment by RHollerith (rhollerith_dot_com) on leogao's Shortform · 2023-12-17T20:16:53.626Z · LW · GW

What does "the lr" mean in this context?

Comment by RHollerith (rhollerith_dot_com) on Current AIs Provide Nearly No Data Relevant to AGI Alignment · 2023-12-16T05:45:55.307Z · LW · GW

An LLM can be strongly super-human in its ability to predict the next token (that some distribution over humans with IQ < 100 would write) even if it was trained only on the written outputs of humans with IQ < 100.

More generally, the cognitive architecture of an LLM is very different from that of a person, and IMO we cannot use our knowledge of human behavior to reason about LLM behavior.

Comment by rhollerith_dot_com on [deleted post] 2023-12-11T18:27:18.511Z

In a world without any global network of computers, human beings would probably be the most potent resource available to an AI that had just acquired superhuman capabilities (beyond the resources needed just to keep the AI running). Spy agencies are skilled at getting human beings to act against their own interests and the interests of their fellow human beings; an AI with superhuman capabilities would probably be much better at it.

The bigger picture is that no one is going to create a very powerful AI and not try to put it to profitable use, and all the ways I can think of to put it to profitable use entail giving it access to resources it can use to take over.

Comment by RHollerith (rhollerith_dot_com) on Nate Showell's Shortform · 2023-12-11T15:51:06.183Z · LW · GW

My memory is not that good. I do recall that it is in the chapter "Other ways: alternatives to many-worlds".

Comment by RHollerith (rhollerith_dot_com) on Nate Showell's Shortform · 2023-12-11T05:45:11.646Z · LW · GW

My probability that quantum Bayesianism is onto something is .05. It went down a lot when I read Sean Carroll's book Something Deeply Hidden. .05 is about as extreme as my probabilities get for the parts of quantum physics that are not settled science since I'm not an expert.

Comment by RHollerith (rhollerith_dot_com) on Nathan Young's Shortform · 2023-12-09T18:58:40.553Z · LW · GW

You don't want to warn us that it is behind a paywall?

Comment by RHollerith (rhollerith_dot_com) on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-06T21:21:58.949Z · LW · GW

Suppose I start a dialogue knowing I will never choose to publish it. Would the LW team welcome that, or tend to consider it a waste of resources because nothing gets published?

Comment by RHollerith (rhollerith_dot_com) on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-06T17:25:37.198Z · LW · GW

How prepared is LW for an attack? Those who want AI research to proceed unimpeded have an incentive to sabotage those who want to slow it down or ban it and consequently have an incentive to DDoS LW.com or otherwise make the site hard to use. What kind of response could LW make against that?

Also, how likely is it that an adversary will manage to exploit security vulnerabilities to harvest all the PMs (private messages) stored on LW?

Comment by RHollerith (rhollerith_dot_com) on jacquesthibs's Shortform · 2023-12-03T18:34:46.800Z · LW · GW

I have some idea about how much work it takes to maintain something like LW.com, so this random person would like to take this opportunity to thank you for running LW for the last many years.

Comment by RHollerith (rhollerith_dot_com) on jacquesthibs's Shortform · 2023-12-02T20:16:26.760Z · LW · GW

I wish people would stop including images of text on LW. I know this practice is common on Twitter and probably other forums, but we aspire to a higher standard here. My reasoning: (1) it is more tedious to compose a reply when one cannot use copy-paste to choose exactly which extent of text to quote; (2) the practice is a barrier to disabled people using assistive technologies and to people reading on very narrow devices like smartphones.

Comment by RHollerith (rhollerith_dot_com) on How can I use AI without increasing AI-risk? · 2023-11-29T19:14:13.091Z · LW · GW

I have yet to interact with a state-of-the-art model (that I know of), but I do know from browsing Hacker News that many people are running LLaMA and other open-source models on their own hardware (typically Apple Silicon or desktops with powerful GPUs).

Comment by RHollerith (rhollerith_dot_com) on How can I use AI without increasing AI-risk? · 2023-11-29T02:51:01.229Z · LW · GW

You don't say what kind of harms you consider worst. I consider extinction risk the worst harm, and here is my current personal strategy.

I don't give OpenAI or similar companies any money: if I had to use AI, I'd use an open-source model. (It would be nice if there were some company that offered a paid service that I could trust not to advance the state of the art, but I don't know of any -- though yes, I understand that some companies contribute more to extinction risk than others.)

I expect that the paid services will eventually (and probably soon) get so good that it is futile to continue to bother with open-source models, and that to compete I will eventually need to give money to a company offering a paid service. I plan to try to put that day off as long as possible, which I consider useful, not futile: suppose it takes OpenAI 4 generations of services (where GPT-4-based services are generation 1) to become a very successful, very profitable company. Each of those generations is equally important in expectation to OpenAI's eventual wild success. (An iMac was a vastly better computer than the first Macintosh, but every major product launch between the first Mac and the first iMac was roughly equally important to the eventual wild success of Apple.) Thus if I can hold off giving OpenAI any money till they're offering 4th-generation services, I will have withheld from OpenAI 75% of the "mojo" (revenue, and evidence that I'm a user that they can in aggregate show to investors) I might have given them (before they become so successful that nothing I do could possibly have any effect), while enhancing my productivity almost as much as if I had used all 4 generations of OpenAI's services (because of how good the 4th generation will be).

If I use any AI services, open-source or not, I don't tell anyone about it (to prevent my contributing to the AI field's reputation for usefulness) except for people who take AI extinction risk seriously enough that they'd quickly vote to shut down the whole field if they could.

Like Eliezer says, this is not what a civilization that might survive AI research looks like, but what I just wrote is my current personal strategy for squeezing as much dignity as possible from the situation.

Comment by RHollerith (rhollerith_dot_com) on jacquesthibs's Shortform · 2023-11-27T17:26:35.922Z · LW · GW

I don't think I've ever seen an endorsement of the flow state that came with non-flimsy evidence that it increases productivity or performance in any pursuit, and many endorsers take the mere fact that the state feels really good to be that evidence.

>you're in relentless, undisturbed pursuit

This suggests that you are confusing drive/motivation with the flow state. I have tons of personal experience of days spent in the flow state while lacking the motivation to do anything that would actually move my life forward.

You know how if you spend 5 days in a row mostly just eating and watching YouTube videos, it starts to become hard to motivate yourself to do anything? Well, the quick explanation of that effect is that watching the YouTube videos is too much pleasure for too long, with the result that the anticipation of additional pleasure (from sources other than YouTube videos) no longer has its usual motivating effect. The flow state can serve as the source of the "excess" pleasure that saps your motivation: I know because I wasted years of my life that way!

Just to make sure we're referring to the same thing: a very salient feature of the flow state is that you lose track of time: suddenly you realize that 4 or 8 or 12 hours have gone by without your noticing. (Also, as soon as you enter the flow state, your level of mental tension, i.e., physiological arousal, decreases drastically -- at least if you are chronically tense -- but I don't lead with this feature because a lot of people can't even tell how tense they are.) In contrast, if you take some Modafinil, some mixed amphetamine salts or some Ritalin (and your brain is not adapted to any of them -- not that I recommend any of them unless you've tried many other ways to increase drive and motivation), you will tend to have a lot of drive and motivation for at least a few hours, but you probably won't lose track of time.

Comment by RHollerith (rhollerith_dot_com) on Johannes C. Mayer's Shortform · 2023-11-25T15:46:01.261Z · LW · GW
Comment by RHollerith (rhollerith_dot_com) on OpenAI Staff (including Sutskever) Threaten to Quit Unless Board Resigns · 2023-11-21T15:57:43.753Z · LW · GW

The US NatSec community doesn't know that the US (and Britain) are with probability = .99 at least 8 years ahead of China and Russia in AI?

Comment by RHollerith (rhollerith_dot_com) on The Shutdown Problem: Three Theorems · 2023-11-21T15:53:26.117Z · LW · GW

It's hard to control how capable the AI turns out to be. Even the creators of GPT-4 were surprised, for example, that it would be able to score in the 90th percentile on the Bar Exam. (They expected that, if they and other AI researchers were allowed to continue their work long enough, eventually one of their models would be able to do so, but they had no way of telling which model it would be.)

But more to the point: how does boxing have any bearing on this thread? If you want to talk about boxing, why do it in the comments on this particular paper? Why do it as a reply to my previous comment?

Comment by RHollerith (rhollerith_dot_com) on The Shutdown Problem: Three Theorems · 2023-11-21T02:44:24.756Z · LW · GW

Yudkowsky's suggestion is for preventing the creation of a dangerous AI by people. Once a superhumanly-capable AI has been created and has had a little time to improve its situation, it is probably too late even for a national government with nuclear weapons to stop it (because the AI will have hidden copies of itself all around the world or taken other measures to protect itself, measures that might astonish all of us).

The OP in contrast is exploring the hope that (before any dangerous AIs are created) a very particular kind of AI can be created that won't try to prevent people from shutting it down.

Comment by RHollerith (rhollerith_dot_com) on lsusr's Shortform · 2023-11-20T23:11:22.259Z · LW · GW

Can you explain why you think that "Microsoft has gained approximately $100B in market capitalization"? I see a big dip in the stock price late Thursday, followed by a recovery to exactly the starting price 2 hours later.

Comment by RHollerith (rhollerith_dot_com) on Am I going insane or is the quality of education at top universities shockingly low? · 2023-11-20T15:23:44.358Z · LW · GW

Agree in general, but there is an ecosystem of mostly-small colleges where teaching has higher priority, and most ambitious American students and their parents know about it. Note for example that Harvard, Yale, Princeton and Stanford do not appear in the following list of about 200 colleges:

https://www.usnews.com/best-colleges/rankings/national-liberal-arts-colleges

Comment by RHollerith (rhollerith_dot_com) on Sam Altman fired from OpenAI · 2023-11-18T03:47:21.165Z · LW · GW

Someone writes anonymously, "I feel compelled as someone close to the situation to share additional context about Sam and company. . . ."

https://www.reddit.com/r/OpenAI/comments/17xoact/comment/k9p7mpv/

Comment by RHollerith (rhollerith_dot_com) on Facebook is Paying Me to Post · 2023-11-16T22:54:46.728Z · LW · GW

Yes, I meant changing country or US state.

It's pretty bad that I didn't consider the possibility you were joking.

Comment by RHollerith (rhollerith_dot_com) on Facebook is Paying Me to Post · 2023-11-16T16:54:28.113Z · LW · GW

>you would pick governments whose policies you like

No, I wouldn't (nor would he), because the very high expected personal cost of switching governments will usually swamp the considerations you speak of.

Comment by RHollerith (rhollerith_dot_com) on adamzerner's Shortform · 2023-11-16T16:35:12.451Z · LW · GW

My whole UI is zoomed to 175% (though GNOME calls it "scale"), which I much prefer to what you describe, because zooming with cmd+/- in the browser applies only to the current web site, so one ends up repeating the adjustment for basically every site one visits.

(I don't know how to zoom the whole UI to 175% on MacOS without making everything blurry, but it can be done without blurriness on Linux/Wayland, ChromeOS and Windows. Also HiDPI displays are the norm on Macs, and some people on HiDPI displays don't mind the fact that MacOS introduces blurriness when the scale factor is other than 1.0 or 2.0.)

Comment by RHollerith (rhollerith_dot_com) on Why is lesswrong blocking wget and curl (scrape)? · 2023-11-14T16:23:06.135Z · LW · GW

@nicolas-lacombe If you decide to grab stuff directly from the API (rather than scraping GW), I might be able to help, e.g., by pair programming with you or contributing code.
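For what it's worth, here is a minimal sketch in Python of the kind of thing I have in mind. The GraphQL endpoint is real, but the exact shape of the query (the view name, the field names, the placeholder user ID) is my guess, so check it against the schema before relying on it:

```python
import requests

# Illustrative guess at a "comments by user" GraphQL query; the endpoint
# exists, but the query shape and SOME_USER_ID are assumptions.
query = """
{
  comments(input: {terms: {view: "userComments", userId: "SOME_USER_ID", limit: 10}}) {
    results {
      _id
      postedAt
      htmlBody
    }
  }
}
"""

resp = requests.post(
    "https://www.lesswrong.com/graphql",
    json={"query": query},
    headers={"User-Agent": "lw-offline-reader-experiment"},
)
resp.raise_for_status()
print(resp.json())
```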

Comment by RHollerith (rhollerith_dot_com) on Open Thread – Autumn 2023 · 2023-11-11T20:03:19.990Z · LW · GW

>If it’s worth saying, but not worth its own post, here's a place to put it.

Why have both shortforms and open threads?

Comment by RHollerith (rhollerith_dot_com) on Why is lesswrong blocking wget and curl (scrape)? · 2023-11-11T03:18:52.553Z · LW · GW
Comment by RHollerith (rhollerith_dot_com) on Why is lesswrong blocking wget and curl (scrape)? · 2023-11-10T00:38:25.091Z · LW · GW

What app do you imagine you will use? A web browser?

Comment by RHollerith (rhollerith_dot_com) on Why is lesswrong blocking wget and curl (scrape)? · 2023-11-09T15:39:47.572Z · LW · GW

When you imagine your "read offline" project having succeeded, do you tend to imagine yourself reading LW with a net connection on a computer, a smartphone or both?

Correction: I meant without a net connection. D'oh!

Comment by RHollerith (rhollerith_dot_com) on rhollerith_dot_com's Shortform · 2023-11-06T15:56:18.515Z · LW · GW

We will soon learn how to make machines that are better at planning and better at reality than we are. That is a big problem.

Comment by RHollerith (rhollerith_dot_com) on Saying the quiet part out loud: trading off x-risk for personal immortality · 2023-11-03T20:23:44.799Z · LW · GW

Some people enjoy arguing philosophical points, and there is nothing wrong with that.

Do you believe that the considerations you have just described have any practical relevance to someone who believes that the probability of AI research's ending all human life some time in the next 60 years is .95 and wants to make a career out of pessimizing that probability? 

Comment by RHollerith (rhollerith_dot_com) on Saying the quiet part out loud: trading off x-risk for personal immortality · 2023-11-03T19:49:18.943Z · LW · GW

Off-topic, but somewhat related. I want to know if there is anyone reading these words who is willing to admit that he or she is kind of hoping humanity will go extinct because humanity has been unfair to him or her or because (for some other reason) humanity is bad or unworthy.

Comment by RHollerith (rhollerith_dot_com) on Self-Blinded L-Theanine RCT · 2023-10-31T20:06:03.268Z · LW · GW

Neuroscientist Andrew Huberman uses and recommends theanine at bedtime to make it easier to get to sleep. I do, too. Theanine in pill or powder form, not from tea.