How did you integrate voice-to-text AI into your workflow? 2023-11-20T12:01:37.696Z
How much fraud is there in academia? 2023-11-16T11:50:41.544Z
Amazon KDP AI content guidelines 2023-09-11T18:36:08.464Z
What Caused the Puzzling Decline in Activism Against Police Violence Towards Black People? 2023-07-19T14:40:26.120Z
Are vaccines safe enough that we can give their producers liability? 2023-06-23T02:47:06.804Z
Never Fight The Last War 2023-06-20T12:35:26.038Z
Why didn't virologists run the studies necessary to determine which viruses are airborne? 2023-06-20T11:58:30.756Z
Palantir's AI models 2023-06-16T16:20:36.174Z
Matt Taibbi's COVID reporting 2023-06-15T09:49:54.272Z
Dealing with UFO claims 2023-06-10T15:45:05.884Z
Elon talked with senior Chinese leadership about AI X-risk 2023-06-07T15:02:49.606Z
What's the state of AI safety in Japan? 2023-05-02T17:06:17.024Z
Should we openly talk about explicit use cases for AutoGPT? 2023-04-20T23:44:02.162Z
Human Extinction by AI through economic power 2023-04-16T12:15:46.861Z
What would the FLI moratorium actually do? 2023-04-14T13:14:16.436Z
Stupid Questions - April 2023 2023-04-06T13:07:42.792Z
Chat bot as CEO at NetDragon Websoft 2023-03-26T16:01:35.800Z
What did you do with GPT4? 2023-03-18T15:21:46.307Z
What happened to the OpenPhil OpenAI board seat? 2023-03-15T16:59:06.390Z
Timeline: The proximal origin of SARS-CoV-2 2023-03-01T17:02:45.699Z
Language models can generate superior text compared to their input 2023-01-17T10:57:10.260Z
Are tulpas moral patients? 2022-12-27T11:30:29.923Z
What readings did you consider best for the happy parts of the secular solstice? 2022-12-21T15:45:44.583Z
ChatGPT's new novel rationality technique of fact checking 2022-12-11T13:54:08.337Z
A poem about applied rationality by ChatGPT 2022-12-11T13:43:53.820Z
Is ChatGPT right when advising to brush the tongue when brushing teeth? 2022-12-02T14:53:02.123Z
Who holds all the USDT? 2022-11-25T11:58:30.163Z
Intercept article about lab accidents 2022-11-07T21:10:19.559Z
Could a Supreme Court suit work to solve NEPA problems? 2022-11-03T21:10:48.344Z
Ukraine and the Crimea Question 2022-10-28T12:26:51.982Z
Should we push for requiring AI training data to be licensed? 2022-10-19T17:49:55.644Z
Orexin and the quest for more waking hours 2022-09-24T19:54:56.207Z
Biden should be applauded for appointing Renee Wegrzyn for ARPA-H 2022-09-18T19:57:31.209Z
Enantiodromia 2022-08-31T21:13:35.496Z
Are there practical exercises for developing the Scout mindset? 2022-08-15T17:23:42.552Z
One-day applied rationality workshop with Duncan Sabien 2022-08-01T18:37:01.296Z
Astral Codex Ten Berlin Meetup 2022-08-01T05:51:44.701Z
Is Gas Green? 2022-07-21T10:30:03.864Z
What are the simplest questions in applied rationality that you don't know the answer to? 2022-07-20T09:53:01.600Z
LessWrong Meetup - Gendlin's Focusing 2022-07-19T16:55:24.889Z
LessWrong Meetup - Changing our Minds 2022-07-07T12:59:42.757Z
Literature recommendations July 2022 2022-07-02T09:14:28.400Z
Examples of practical implications of Judea Pearl's Causality work 2022-07-01T20:58:58.066Z
What journaling prompts do you use? 2022-06-06T11:35:40.058Z
LessWrong Meetup - Hamming Circles 2022-06-05T20:35:06.515Z
The case for using the term 'steelmanning' instead of 'principle of charity' 2022-06-02T19:24:40.583Z
Is there any formal argument that climate change leads to more extreme weather events? 2022-05-31T09:01:58.586Z
Definition Practice: Applied Rationality 2022-05-15T20:44:54.907Z
How would public media outlets need to be governed to cover all political views? 2022-05-12T12:55:20.785Z
Astral Codex Ten Berlin Meetup 2022-05-11T20:24:42.991Z


Comment by ChristianKl on Out-of-distribution Bioattacks · 2023-12-03T21:05:57.714Z · LW · GW

If you tighten your reference class even further to include only historical biological attacks by individuals or small groups, the one with the most deaths is just five, in the 2001 anthrax attacks.

It's worth noting that the attacks were done either by Bruce Edwards Ivins, who was paid out of funds meant to defend against bioattacks, or by someone in his vicinity. 

It seems strange to me that the recommendations you make don't take that into account. 

The idea that laypeople using LLMs are worth worrying about more than people with expertise and access to top laboratories seems wrong to me. It's just an easy position to hold because it's not inconvenient for people with power.

Comment by ChristianKl on A Proposed Cure for Alzheimer's Disease??? · 2023-12-01T20:15:05.927Z · LW · GW

That setup doesn't give you a randomized controlled trial, which is what's usually meant by the term clinical trial.

The system has a lot of incentives against doctors cooperating with illegal clinical trials. I don't think there's a notable example of anyone who pulled off a comparable trial, which suggests that it's hard.

Comment by ChristianKl on A Proposed Cure for Alzheimer's Disease??? · 2023-12-01T13:46:11.448Z · LW · GW

Clinical trials are highly regulated. The median cost of a clinical trial is on the order of US$19 million. Do you have that kind of money available to run a clinical trial?

Comment by ChristianKl on A Proposed Cure for Alzheimer's Disease??? · 2023-12-01T13:37:21.435Z · LW · GW

Whether someone has epistemic virtue depends on whether they use the epistemic tools available to them. We have made a lot of progress in epistemics over the last hundred years.

Comment by ChristianKl on A Proposed Cure for Alzheimer's Disease??? · 2023-12-01T10:03:29.519Z · LW · GW

This post looks to me like it's not living up to any of the epistemic virtues championed by the rationality community.

When we talk about predictions in rationality, we are talking about statements that come with a probability for whether or not a future event happens.

You lay out a thesis, but you don't make an argument for why I should believe the thesis. You are just saying what you believe to be true and not why you believe it. 

The fact that you believe that someone would run a clinical trial because you wrote the post also suggests that you are a bit delusional about how things work. 

Comment by ChristianKl on Stupid Question: Why am I getting consistently downvoted? · 2023-12-01T09:47:47.269Z · LW · GW

There's a lot of material to read. Part of being good at reading is spending one's attention in the most effective way and not wasting it with low-value content. 

Comment by ChristianKl on Stupid Question: Why am I getting consistently downvoted? · 2023-11-30T12:02:45.980Z · LW · GW

If you want your proposed solution attributed to you, writing it in a style that people actually want to engage with, instead of in "your personal voice", would be the straightforward choice. 

Larry McEnerney is great at explaining what writing is about. 

Comment by ChristianKl on Stupid Question: Why am I getting consistently downvoted? · 2023-11-30T10:02:33.932Z · LW · GW

What do you care more about? Getting to write in "your personal voice" or getting your ideas well received?

Comment by ChristianKl on Stupid Question: Why am I getting consistently downvoted? · 2023-11-30T08:55:46.830Z · LW · GW

I'm definitely a crank, but I personally feel like I'm onto something?

That's quite common for cranks ;)

If the ideas you want to propose are unorthodox, try to write in the most orthodox style in the venue you are addressing. 

Look at how posts that have high karma are written and try to write your own post in the same style. 

Secondly, you can take your post and tell ChatGPT that you want to post it on LessWrong and ask it what problems people are likely to have with the post. 

Comment by ChristianKl on Wikipedia is not so great, and what can be done about it. · 2023-11-29T20:52:48.645Z · LW · GW

however there's as if the higher echelons are trapped in office politics and doesn't really seem to realise what sort of implications are going to occur if they let themselves be gamed by malicious actors

It's quite ironic that you say that while at the same time speaking against actions that are about making it harder for malicious actors to game Wikipedia. 

Wikipedia isn't perfect, but all decisions have their tradeoffs, and when you don't think about those, that's not really a basis for improving anything. 

It's amplified in a large magnitude on Wikipedia which acts like a monopoly on the knowledge market. 

There are plenty of different ways knowledge is published on the web, and Wikipedia does not have a monopoly on knowledge. What it has is a community that, with all its flaws, has a decent process that produces valuable outcomes. 

Nobody has found a way to set up a community around this task that works better than Wikipedia's. 

Comment by ChristianKl on Neither EA nor e/acc is what we need to build the future · 2023-11-29T14:27:27.998Z · LW · GW

When negotiating, it can be useful to be open to outcomes that are a net destruction of value, even if the outcome is not what you ideally want. 

Comment by ChristianKl on Wikipedia is not so great, and what can be done about it. · 2023-11-29T14:17:33.284Z · LW · GW

Communities are made up of people who have subjective experiences. That's not something you can prevent. 

Squeeze them altogether in one place means that they are bound to generate large conflicts and issues, which Wikipedia is currently facing since it's pretty much the only place where people can "change or dictate history".

Wikipedia is a place where that happens because of the high-quality level that Wikipedia has.

It's not a perfect place, and there are certainly reforms that would be good, but for that you actually have to understand Wikipedia a bit better.

There were many attempts to build alternatives. Those mostly didn't lead to projects with comparable value. 

Comment by ChristianKl on Wikipedia is not so great, and what can be done about it. · 2023-11-29T12:05:43.265Z · LW · GW

While adminship comes with certain rights, it does not come with the right to decide what a policy should be. If you put a time limit on adminship, you would likely see Wikipedia losing a lot of routine maintenance and losing quality as a result.

The policies would still be the same and would likely still be executed. 

Comment by ChristianKl on Wikipedia is not so great, and what can be done about it. · 2023-11-29T12:01:41.447Z · LW · GW

It sounds to me like you ignore my main claim. The OP strawmans why people on Wikipedia hold exclusionist positions. Understanding is actually pretty important if you want to change anything that goes on in Wikipedia. 

Introducing something like "reliability metrics" is something that a Wikipedia community could decide, because it's a policy question. It's not something over which the Wikimedia Foundation has any jurisdiction, and thus it's strange to try to discuss it on a Wikimedia internal mailing list instead of discussing it on Wikipedia. 

Comment by ChristianKl on The Limitations of GPT-4 · 2023-11-27T01:01:40.175Z · LW · GW

It is very unclear to me how difficult these problems are to solve. But I also haven’t seen realistic approaches to tackle them. 

That sounds to me more like a lack of interest in the research than a lack of attempts to solve the problems.

AutoGPT frameworks provide LLMs a way to have System II thinking. With 100k tokens as a context window, there's a lot that can be done as far as such agents go. It's work to create good training data, but it's doable provided there's the capital investment. 

As far as multimodal models go, DeepMind's Gato does that. It's just not as performant as a pure LLM. 


Comment by ChristianKl on Wikipedia is not so great, and what can be done about it. · 2023-11-27T00:08:39.349Z · LW · GW

I feel like the post is long without really understanding why things are the way they are. It strawmans.

Given how important Wikipedia articles happen to be, there are a lot of interests that want to bend Wikipedia to their liking. If you take the notability policy, it's not just there because people believe in academic standards but because an article for a topic for which there are few reliable sources can be a lot easier to manipulate. 

Comment by ChristianKl on Why not electric trains and excavators? · 2023-11-26T20:35:11.103Z · LW · GW

Hydrogen is the most efficient fuel storage 'battery' with 40-50% round-trip energy storage possible [...] Desert pv will likely come down in price to consistent ~$0.01-0.02 

If energy prices come down so much, the round-trip efficiency is not central. 
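To make that concrete, here's a rough back-of-the-envelope sketch in Python. It uses the PV price and round-trip efficiency figures from the quote above; the numbers are the quote's projections, not established costs:

```python
# Delivered electricity cost after a hydrogen round trip:
# losses inflate the effective price by 1/efficiency.
pv_prices = (0.01, 0.02)      # $/kWh, projected desert PV (from the quote)
efficiencies = (0.40, 0.50)   # hydrogen round-trip efficiency (from the quote)

for price in pv_prices:
    for eta in efficiencies:
        delivered = price / eta
        print(f"PV ${price:.2f}/kWh at {eta:.0%} round trip -> ~${delivered:.3f}/kWh")
```

Even in the worst case (40% efficiency on $0.02/kWh power), the delivered cost is only about $0.05/kWh, which is why cheap generation makes the round-trip losses less central.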

You need much larger storage tanks in both ships and airplanes if you go for hydrogen than if you use denser fuel. 

And electrolysis and liquefaction tech are on track to yield the stated $1.50/kg (learning curves are magic).

If that's true, why are the subsidies for its production so high? What sources do you find trustworthy for those costs in an environment where plenty of the players have incentives to make people believe in a certain future?

Comment by ChristianKl on Progress links digest, 2023-11-24: Bottlenecks of aging, Starship launches, and much more · 2023-11-25T09:21:37.933Z · LW · GW

“What do AI safety/accelerationist people disagree on that they could bet on? What concrete things are going to happen in the next two years that would prove one party right or wrong?”

That seems quite confused to me. Why would you expect concrete things to happen in the next two years that can prove either side wrong?

A lot of what AI safety people worry about is the dynamics of a world where most of the power is held by AI.

Most power won't be held by AI in the next two years, so we can't make any observations that tell us about those future dynamics.

AI safety people made some predictions, like the difficulty of boxing AI, which came true when ChatGPT started browsing the internet instead of being boxed, but further similar predictions are unlikely to convince any accelerationists. The same goes for predictions about autonomous weapon systems. 

Comment by ChristianKl on Why not electric trains and excavators? · 2023-11-25T09:19:19.196Z · LW · GW

I work professionally developing Liquid hydrogen fueled transport power technology.  

So your job depends on believing the projections about how H2 costs will come down?

It's very practical for some applications, particularly aircraft and ships that I expect will transition to hydrogen in next 2-3 decades, and possibly trains and agriculture.


This is likely the only realistic route to fully renewable power for human civilisation - producing in cheapest sunny or windy areas and using at high latitudes/through winters.  


so a more convenient dense and long-term easily and cheaply stored energy carrier such as Ammonia or synthetic hydrocarbons made using future cheap hydrogen feedstocks may be a better option.

It's possible that direct production of synthetic hydrocarbons will be more effective than going through H2 production. Given that we already have ships that run well on gas, it's possible that all the money invested into trying to get ships to run on hydrogen will be wasted.

Comment by ChristianKl on Insulate your ideas · 2023-11-25T08:59:46.995Z · LW · GW

Regardless of these external economic stimuli, the ground truth remains identical: be disciplined in your deployment of capital.

In an industry where you can expect the most successful company to have a monopoly that it can use to make a lot of money, a company that can raise and spend more than its competitors can grow faster. The competitor that's disciplined in its deployment of capital doesn't rise to the top and thus doesn't make the most profits. 

Comment by ChristianKl on OpenAI: The Battle of the Board · 2023-11-24T16:55:53.876Z · LW · GW

That seems to be the publicly available excerpt. There's the Harvard Magazine article I linked above that speaks about the context of that writing and how it's part of a longer seven-page document.

Summers seems to have been heavily into deregulation three decades ago. More recently, he seems to be supportive of minimum wage increases and higher taxes on the rich. 

I do think though that they might disqualify him, or at least make him a worse choice, for something like the OpenAI board, because that comes with ideological requirements.

While I would prefer people who are clearly ideologically in favor of adding a lot of regulation for AI, it seems to me that part of what Sam Altman wanted was a board where people who can clearly be counted on to vote that way don't have the majority. 

Larry Summers seems to be a smart, independent thinker whose votes are not easy to predict in advance, and that made him a good board candidate on whom both sides could agree. 

Having him on the board could also be useful for lobbying for the AI safety regulation that OpenAI wants.

Comment by ChristianKl on How "Pinky Promise" diplomacy once stopped a war in the Middle East · 2023-11-24T15:24:39.754Z · LW · GW

Yes, I would see stepping out of that agreement with Iran also as a real breach of promises. Bush also broke formal promises made to North Korea.

I don't think NATO expansion fits into that category.

Comment by ChristianKl on How "Pinky Promise" diplomacy once stopped a war in the Middle East · 2023-11-24T05:05:32.432Z · LW · GW

The way the executive can make promises to other countries that are binding on future administrations is to do it as part of a treaty that gets ratified by the Senate. 

The German unification happened under the Treaty on the Final Settlement with Respect to Germany, which has Russia and the United States as parties. If Russia's position at the time had been that they would only agree to German unification if a promise was made not to expand eastward, they could have asked for it to be included in that treaty. 

If they had done that, it would have been binding on future US administrations in a way that statements by foreign ministers aren't.

There are plenty of cases, like the sanctions against Belarus, that are a much better example of the United States actually not upholding promises it made.

Comment by ChristianKl on OpenAI: The Battle of the Board · 2023-11-24T03:45:41.308Z · LW · GW

I'd need to read the memo to form my own opinion on whether that holds. 

It seems generally bad form to criticize people for things without actually reading what they wrote.

Just reading a text without trying to understand the context in which the text exists is also not a good way to understand whether a person made a mistake.

I think what you wrote here is likely more morally problematic than what Summers did 30 years ago. Do you think that whenever someone considers your merits as a person decades from now, they should bring up that you are a person who likes to criticize people for what they said without reading what they said?

Comment by ChristianKl on OpenAI: The Battle of the Board · 2023-11-23T18:25:17.392Z · LW · GW

I never got the sense of this being settled science (of course given how controversial the claim would be hard for it to be settled for good), but even besides that, the question is: what does one do with that information?

He did not present it as settled science but as one of three hypotheses for why women may have been underrepresented in tenured positions in science and engineering at top universities and research institutions. The key implication of the hypothesis being true would be that having quotas for a certain number of women in tenured positions is not meritocratic. 

Conversely, someone who suggests that "the economic logic behind dumping a load of toxic waste in the lowest-wage country is impeccable" seems already to think that economics are mainly about maximal efficiency, and any concerns for human well being are at best tacked on. 

His position seems to be that the sentence was ironic. The word "impeccable" usually does not appear in serious academic or policy writing. The memo seems to be in response to a report that suggested that free trade will produce environmental benefits in developing nations. It was a way to make fun of a PR lie. 

It's actually related to what Zvi talked about as bullet biting. If you want to advocate the policies of the World Bank in 1991 on free trade, it makes sense to accept that this comes with negative environmental effects in some third-world countries. 

Comment by ChristianKl on OpenAI: The Battle of the Board · 2023-11-23T15:04:18.202Z · LW · GW

his comments about sending waste to low income countries.

Wikipedia describes those as:

In December 1991, while at the World Bank, Summers signed a memo that was leaked to the press. Lant Pritchett has claimed authorship of the private memo, which both he and Summers say was intended as sarcasm.[19] The memo stated that "the economic logic behind dumping a load of toxic waste in the lowest wage country is impeccable and we should face up to that.[19] ... I've always thought that under-populated countries in Africa are vastly underpolluted."[20] According to Pritchett, the memo, as leaked, was doctored to remove context and intended irony, and was "a deliberate fraud and forgery to discredit Larry and the World Bank."[21][19]

Generally, judging people by what they said over three decades ago is not very useful. In this case, there seems to be a suggestion that it was a joke. 

Hanging out with Epstein is bad, but it does not define a person who does lots of different things. 

infer his intentions only from what policies he's advocated and implemented relative to counterfactual for who else could have filled the positions he's held.

So what did he advocate lately? Things like:

"I am certainly no left wing ideologue, but I think something wrong when taxpayers like me, well into the top .1 percent of income distribution, are getting a significant tax cut in a Democrats only tax bill as now looks likely to happen," wrote Summers. 

"No rate increases below $10 million, no capital gains increases, no estate tax increases, no major reform of loopholes like carried interest and real estate exchanges but restoration of the state and local deduction explain it."

Especially given that he's on the board of a VC firm, advocating for closing the carried interest loophole suggests that he does not see only his own economic self-interest as important. 

Comment by ChristianKl on Aaron Silverbook on anti-cavity bacteria · 2023-11-23T13:58:00.163Z · LW · GW

There are plenty of biology questions that I feel Aaron Silverbook should study more to be able to answer.

One of them is about mutacin 1140 and why it's no problem for the new bacterium. I would be pretty certain that, given that the new bacterium was grown in a culture after getting the gene to produce mutacin 1140, it likely evolved changes to partly immunize itself against mutacin.

While those mutations were not explicitly inserted, they likely evolved under evolutionary pressure. 

Comment by ChristianKl on ChristianKl's Shortform · 2023-11-23T09:55:31.022Z · LW · GW

As a matter of irony, lsusr decided to censor me from commenting on his posts, so I can't comment on Restricting freedom is more harmful than it seems.

Comment by ChristianKl on OpenAI: The Battle of the Board · 2023-11-23T06:43:30.501Z · LW · GW

If you disagree with something someone said, don't include words that suggest that he said things he didn't say. Don't make false claims.

Don't try to use links to opinions about what he said as sources; seek instead to link to the actual statements by the person and quote the passages you found offensive, or give a factual description of what was actually said. 

Sorry I said he thinks women suck at life the wrong way? Gotta say I'm disappointed that you're just filing this under "well, technically women do have less variance". That seems ... likely to help paper over the likely extent of threat that can be inferred from his having used a large platform to announce this thing,

Wikipedia describes the platform on which he made the statements as "In January 2005, at a Conference on Diversifying the Science & Engineering Workforce sponsored by the National Bureau of Economic Research, Summers sparked controversy with his discussion of why women may have been underrepresented "in tenured positions in science and engineering at top universities and research institutions". The conference was designed to be off-the-record so that participants could speak candidly without fear of public misunderstanding or disclosure later."

There's no reason to translate "we might have fewer women in top positions because of less variance in women" in such a context into "women suck at life".

I'm saying I believe he believes it, based on his pattern of behavior surrounding when and how he made the claim, and the other things he's said, and his political associations

His strongest political affiliations seem to be around holding positions in the treasury under Clinton and then being Director of the National Economic Council under Obama. 

Suggesting that being associated with either of those Democratic administrations means that someone has to believe that "women suck at life" is strange.

Comment by ChristianKl on My first conversation with Annie Altman · 2023-11-23T05:57:12.182Z · LW · GW

I played around with Dalle3 to create a Discord profile picture. The assumption that the picture I created with it isn't my creative expression seems strange.

The LessWrong books all have AI-generated art on the cover. If they instead had stock images on their covers, I don't think that would improve the amount of "creative self-expression" in any practical sense. The ability to generate those images seems to me to allow more creative expression. 

Comment by ChristianKl on OpenAI: The Battle of the Board · 2023-11-23T04:42:48.356Z · LW · GW

I'm certain the board threatened to fire Sam before this unless he made X changes. I'm certain Sam never made all of those X changes. 

From where do you get that certainty?

If they had made those threats, why didn't someone mention them to the New York Times journalists who were trying to understand what happened? 

Why didn't they say so when they fired him? It's the kind of thing that's easy to say to justify firing him. 

Comment by ChristianKl on Dialogue on the Claim: "OpenAI's Firing of Sam Altman (And Shortly-Subsequent Events) On Net Reduced Existential Risk From AGI" · 2023-11-23T03:39:08.970Z · LW · GW

I think this matters insofar as thousands of tech people just got radicalized into the "accelerationist" tribe, and have a narrative of "EAs destroy value in their arrogance and delusion." Whole swaths of Silicon Valley are now primed to roll their eyes at and reject any technical, governance, or policy proposals justified by "AI safety."

I'm not sure. This episode might give some people the idea that Sam Altman's position with regard to regulation is a middle ground between two extremes, when they previously opposed the regulations that Sam sought as being too much. 

Comment by ChristianKl on Foresight Institute: 2023 Progress & 2024 Plans for funding beneficial technology development · 2023-11-23T02:19:14.772Z · LW · GW

We are a small non-profit and entirely funded by donations. 

Doesn't the Foresight Institute run a lot of workshops with fees that are paid by participants?

Comment by ChristianKl on Vote on worthwhile OpenAI topics to discuss · 2023-11-21T06:46:31.927Z · LW · GW

The secrecy in which OpenAI's board operated made it less trustworthy. Boards at places like Anthropic should update to be less secretive and more transparent. 

Comment by ChristianKl on Aaron Silverbook on anti-cavity bacteria · 2023-11-21T06:32:32.554Z · LW · GW

Streptococcus mutans is not part of the vaginal microbiome. Lactobacillus seems responsible for lactic acid production in the vagina. Lactobacillus is also present in the mouth but it's unlikely that this intervention will do anything to reduce the amount of Lactobacillus in the mouth let alone in the vaginal microbiome. 

Comment by ChristianKl on Am I going insane or is the quality of education at top universities shockingly low? · 2023-11-21T05:04:57.540Z · LW · GW

From where do you get the 40-45hrs/week number?

Comment by ChristianKl on Why not electric trains and excavators? · 2023-11-21T04:59:50.042Z · LW · GW

I am pretty sure that most trains in the USA are diesel-electric not just diesel. So the real question is would converting those trains to pure electric actually reduce the total carbon footprint of rail?

For ideal operation, a train can access electricity from the grid at all times. Currently, that's not possible on large parts of the US rail network. 

Electric cars need batteries and as a result you have different dynamics.

I think if you want to think about power innovations for trains things like hydrogen are probably a better way forward.

Why? It's easier to transport electricity than it is to transport hydrogen. Electric motors are also more efficient than hydrogen fuel cells.

Separately, the OP wrote a post on how hydrogen is likely not going to be as cheap in 2030 as officially projected:

Comment by ChristianKl on In favour of a sovereign state of Gaza · 2023-11-21T04:57:33.545Z · LW · GW

If there's a governor appointed by Israel, it's likely that there will be a sustained insurgency against their government. 

It's difficult to get economic development when there's an ongoing insurgency. 

Comment by ChristianKl on How did you integrate voice-to-text AI into your workflow? · 2023-11-20T23:53:06.458Z · LW · GW

I downloaded it and selected the W2L Conformer engine. It does not say anything about using Whisper. It seems much worse than what ChatGPT does. 

Did you load another engine to get Whisper to work?

Comment by ChristianKl on Social Dark Matter · 2023-11-19T08:22:17.128Z · LW · GW

Crime does not need to be perfect to go undetected. We have a good idea of the base rates of murder and burglary, so I would expect most people not to know someone who committed one of those. Burglary also doesn't seem to be a one-off crime but is mostly done by organized gangs.

Embezzlement, on the other hand, happens to different degrees. Plenty of employees embezzle pens and paper from their employers. 

Theft from supermarkets would be one example. A quarter of Britons admit to stealing at self-service checkouts. You likely know more thieves than you think. 

Different kinds of fraud also happen more often relative to their visibility. That likely includes things like false data in scientific publications. 

Comment by ChristianKl on ChristianKl's Shortform · 2023-11-19T08:13:32.148Z · LW · GW

According to the South China Morning Post's summary of the Xi-Biden talks:

Among concrete results of the summit, the two sides agreed to cooperate on narcotics control and artificial intelligence governance, and resume military-to-military communication. But China voiced its continuing discontent with several US policies it believes hold it back, including export controls, investment reviews and unilateral sanctions.

For anyone who thought that cooperation between the US and China on AI governance is impossible, this should be seen as great news.

Comment by ChristianKl on I think I'm just confused. Once a model exists, how do you "red-team" it to see whether it's safe. Isn't it already dangerous? · 2023-11-18T23:05:15.546Z · LW · GW

I think you get the point but say openAI "trains" GPT-5 and it turns out to be so dangerous that it can persuade anybody of anything and it wants to destroy the world.

Most dangerous scenarios are not as straightforward as GPT-5 having a clear goal to destroy the world. If that's what the model does, it's also fairly straightforward to just delete the model.

Comment by ChristianKl on Social Dark Matter · 2023-11-18T21:06:45.618Z · LW · GW

If I search for the size of the trans population, I find figures suggesting a rate of identifying as trans of 0.5% for people between 18 and 25. They seem to cite the Williams Institute as a source, which seems to me like an organization that's friendly toward trans people and doesn't really have a reason to misstate the prevalence. When I searched, other sources also came up with something in the same ballpark.

At that base rate, knowing 60 to 80 people with none of them being trans should not be surprising.
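As a sanity check, this follows from simple binomial arithmetic (using the 0.5% rate above; independence between acquaintances is an assumption):

```python
# Probability that none of n acquaintances identifies as trans,
# given a base rate p and assuming independence.
p = 0.005  # ~0.5% identification rate cited above

for n in (60, 80):
    none_trans = (1 - p) ** n
    print(f"n={n}: P(no trans acquaintance) = {none_trans:.2f}")
```

With these numbers, the probability of knowing nobody trans is roughly 67-74%, so the observation carries little evidence either way.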

Do you believe that the base rates that organizations like the Williams Institute come up with are wrong?

Comment by ChristianKl on LLMs May Find It Hard to FOOM · 2023-11-17T10:59:47.070Z · LW · GW

In general, what people have been finding seems to be that fine-tuning an LLM on dataset much smaller that it pre-training set can bring out latent abilities or behaviors that it already had, or add narrow new capabilities, but making it a whole lot smarter in general requires a dataset comparable in size to the one it was pretrained on.

Yes, you do need a lot of data.

There are a lot of domains where it's possible to distinguish good answers from bad answers by looking at results. 

Take mathematical problems: it's relatively easy to check whether a mathematical proof is correct and hard to write the proof in the first place.

Once you have an AutoGPT-like agent that can do mathematical proofs, you have a lot of room to generate data about mathematical proofs, and you can optimize for the AutoGPT instance producing proofs in fewer LLM calls.

With the set of prompts that ChatGPT users provided, the agent can also look through the data and find individual problem classes where it's easy to produce problem sets and grade the quality of answers.
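A minimal sketch of this verify-then-keep loop. Everything here is invented for illustration: `verify` is a toy stand-in for a proof checker, and brute-force enumeration stands in for sampling candidate proofs from an agent:

```python
def verify(problem: str, answer: int) -> bool:
    # Toy verifier: checking a candidate answer is cheap even when
    # producing it is expensive. A real pipeline would call a proof
    # assistant kernel (e.g. Lean or Coq) here instead.
    return eval(problem) == answer

def generate_training_data(problems, max_answer=20):
    """Filter generated candidates through the verifier and keep only
    the verified (problem, answer) pairs as new training data."""
    data = []
    for problem in problems:
        # Enumeration stands in for sampling proposals from an
        # AutoGPT-like agent.
        for candidate in range(max_answer + 1):
            if verify(problem, candidate):
                data.append((problem, candidate))
                break  # keep the first (cheapest) verified solution
    return data

print(generate_training_data(["2+3", "4*2"]))
```

The key asymmetry is that the verifier is cheap and reliable, so the loop can mint arbitrarily many clean training pairs without human labeling.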

Comment by ChristianKl on Facebook is Paying Me to Post · 2023-11-17T02:25:04.538Z · LW · GW

Consider this: would you be happy if the government actually told you clearly and upfront, in a way you can understand the rationale of, what it's going to do with your taxes the moment you pay them?

There are a lot of different things the government pays for, and most government budgets are publicly available.

Comment by ChristianKl on The impossibility of rationally analyzing partisan news · 2023-11-16T21:16:22.117Z · LW · GW

Critically, if we are persuaded by either camp, we will find most of the sources in that camp believable.

Then the easy solution is not to let yourself be persuaded by either camp, and to assume that there are a lot of flaws in the information environment on both sides.

Your approach treats news outlets as black boxes. As I argued before, if you want to read the news rationally, you need models of how the outlets you read operate.

For a lot of news, it's possible to understand the ground reality. If a new law gets proposed, you are not limited to what journalists write about it. You can actually read the text of the law and compare it with what journalists write about it.

Court cases end with the court publishing a document with its ruling. In science journalism, you can read the papers yourself.

Freedom of information requests allow access to a lot of government data for understanding the ground reality.

Often facts become clearer over time. While it might have been hard to see that the New York Times misled its readers at the start of the Iraq war, it became clearer later.

Comment by ChristianKl on How much fraud is there in academia? · 2023-11-16T21:05:14.028Z · LW · GW

Asking people about suspicions of colleagues may give an overestimate. 

It might also be an underestimate. If you ask most people how many of their colleagues have stolen, or ask men how many of their friends have engaged in sexual assault, you get underestimates.

Comment by ChristianKl on LLMs May Find It Hard to FOOM · 2023-11-16T10:23:04.345Z · LW · GW

You basically assume that the only way to make an LLM better is to give it training data that's similar in structure to random internet data but written in a higher-IQ way.

I don't think there's a good reason to assume that this is true. 

Look at humans' ability at facial recognition and how it differs between people. The fact that some people have face blindness suggests that we have a fairly specialized model for handling faces that isn't active in everyone. A person with face blindness is neither lower nor higher in IQ than a person without it.

For LLMs, you can create training data that teaches specific abilities at a high level of expertise. Abilities like probabilistic reasoning, for example, can likely be pushed much higher than default human performance at similar IQ levels.

Comment by ChristianKl on LLMs May Find It Hard to FOOM · 2023-11-16T09:01:50.381Z · LW · GW

An "IQ of 240" that can easily be scaled up to run in billions of parallel instances might be enough for a singularity. It could outcompete anything humans do by a large margin.

Comment by ChristianKl on jacquesthibs's Shortform · 2023-11-14T20:43:46.658Z · LW · GW

Sure, but sometimes it's just a PM and a couple of other people that lead to a feature being implemented. Also, keep in mind that Community Notes was a thing before Musk. Why was Twitter different than other social media websites?

Twitter seems to have started Birdwatch as a small separate pilot project, where it likely wasn't on anyone's radar to fight.

In the current environment, where X is seen as evil by much of the mainstream media, I would suspect that copying Community Notes from X would by itself produce some resistance. The antibodies are there now in a way they weren't two years ago.

Also, the Community Notes code was apparently completely revamped by a few people working on the open-source code, which got it to a point where it was easy to implement, and everyone liked the feature because it noticeably worked.

If you look at mainstream media coverage of X's Community Notes, I don't think everyone likes it.

I remember Elon once saying that he lost an eight-figure advertising deal because of Community Notes on posts by a company that wanted to advertise on X.

Either way, I'd rather push for making it happen and somehow it fails on other websites than having pessimism and not trying at all. If it needs someone higher up the chain, let's make it happen.

I think you would likely need to make the case that it's good business, in addition to helping with truth.

If you want to make your argument via truth, it might be necessary to motivate some reporters to write favorable articles about Community Notes.