Why is lesswrong blocking wget and curl (scrape)?

post by Nicolas Lacombe (nicolas-lacombe) · 2023-11-08T19:42:52.070Z · LW · GW · 7 comments

This is a question post.

if there is no official lesswrong db/site archive for public posts, i'd like to be able to create my own with automated tools like wget, so that i can browse the site while offline. see Is there a lesswrong archive of all public posts? [LW · GW]

wget and curl logs:

$ wget -mk https://www.lesswrong.com/
--2023-11-08 14:31:26--  https://www.lesswrong.com/
Loaded CA certificate '/etc/ssl/certs/ca-certificates.crt'
Resolving www.lesswrong.com (www.lesswrong.com)... 54.90.19.223, 44.213.228.21, 54.81.2.129
Connecting to www.lesswrong.com (www.lesswrong.com)|54.90.19.223|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2023-11-08 14:31:26 ERROR 403: Forbidden.

Converted links in 0 files in 0 seconds.
$ curl -Lv https://www.lesswrong.com/
*   Trying 54.81.2.129:443...
* Connected to www.lesswrong.com (54.81.2.129) port 443
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: none
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN: server accepted h2
* Server certificate:
*  subject: CN=lesswrong.com
*  start date: Sep  8 00:00:00 2023 GMT
*  expire date: Oct  6 23:59:59 2024 GMT
*  subjectAltName: host "www.lesswrong.com" matched cert's "www.lesswrong.com"
*  issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M02
*  SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://www.lesswrong.com/
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: www.lesswrong.com]
* [HTTP/2] [1] [:path: /]
* [HTTP/2] [1] [user-agent: curl/8.4.0]
* [HTTP/2] [1] [accept: */*]
> GET / HTTP/2
> Host: www.lesswrong.com
> User-Agent: curl/8.4.0
> Accept: */*
> 
< HTTP/2 403 
< server: awselb/2.0
< date: Wed, 08 Nov 2023 19:31:44 GMT
< content-type: text/html
< content-length: 118
< 
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
</body>
</html>
* Connection #0 to host www.lesswrong.com left intact

Answers

answer by jimrandomh · 2023-11-08T22:06:47.736Z · LW(p) · GW(p)

It's an AWS firewall rule with bad defaults. We'll fix it soon, but in the meantime, you can scrape if you change your user agent to something other than wget/curl/etc. Please put your name/project in the user-agent so we can identify you in the logs if we need to, and rate-limit yourself conservatively.
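
For example, a minimal sketch of that workaround (the user-agent string, the "your-name-here" placeholder, and the delay values are only illustrative, not anything LW prescribes):

$ wget --mirror --convert-links --wait=2 --random-wait \
       --user-agent="lw-offline-archive/0.1 (your-name-here)" \
       https://www.lesswrong.com/
$ curl -L -A "lw-offline-archive/0.1 (your-name-here)" \
       https://www.lesswrong.com/

Here --wait and --random-wait make wget pause between requests, which is one simple way to rate-limit conservatively, and -A is curl's flag for setting the User-Agent header.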

comment by Nicolas Lacombe (nicolas-lacombe) · 2023-11-09T01:53:43.082Z · LW(p) · GW(p)

thanks a lot for the answer!

answer by gwern · 2023-11-11T01:38:05.552Z · LW(p) · GW(p)

You should use GreaterWrong. Even when the AWS stuff is fixed for LW2, GW is designed to be more static than LW2, and ought to snapshot better in general. You can also use the built-in theme designer to customize it better for your offline use and scrape it using your cookies.
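
A rough sketch of that approach (the cookie file name and user-agent below are placeholders; cookies.txt would be your browser cookies exported in Netscape format, which, per the suggestion above, lets the mirror pick up your customized theme):

$ wget --mirror --convert-links --wait=2 --random-wait \
       --load-cookies=cookies.txt \
       --user-agent="lw-offline-archive/0.1 (your-name-here)" \
       https://www.greaterwrong.com/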

comment by habryka (habryka4) · 2023-11-11T02:18:20.005Z · LW(p) · GW(p)

Yeah, GW is pretty good for snapshots and scraping. Either that or grab stuff directly from our API. 

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2023-11-14T16:23:06.135Z · LW(p) · GW(p)

@nicolas-lacombe If you decide to grab stuff directly from the API (rather than scraping GW), I might be able to help by pair programming with you or contributing some code.

Replies from: nicolas-lacombe
comment by Nicolas Lacombe (nicolas-lacombe) · 2023-11-15T02:26:36.709Z · LW(p) · GW(p)

thanks for offering! right now i am thinking i'll just use wget to create an archive of gw and/or lw, since that would likely be faster than using the api for my use case.

but i am still interested in writing code that would generate a lw archive from the lw api. if i end up doing that and if i remember this discussion then i'll likely contact you and show you where i put the code, so that we can both work on the same codebase if you want.
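
For reference, the LW API mentioned in this thread is a GraphQL endpoint at /graphql; a hedged sketch of fetching a few posts with curl might look like the following (the query shape and field names are assumptions based on the public schema and should be checked against the live API):

$ curl -s -H "Content-Type: application/json" \
       -A "lw-offline-archive/0.1 (your-name-here)" \
       -d '{"query": "{ posts(input: {terms: {limit: 5}}) { results { title pageUrl htmlBody } } }"}' \
       https://www.lesswrong.com/graphql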

7 comments

comment by RHollerith (rhollerith_dot_com) · 2023-11-09T15:39:47.572Z · LW(p) · GW(p)

When you imagine your "read offline" project having succeeded, do you tend to imagine yourself reading LW without a net connection on a computer, a smartphone, or both?

Replies from: nicolas-lacombe
comment by Nicolas Lacombe (nicolas-lacombe) · 2023-11-09T16:21:22.687Z · LW(p) · GW(p)

i'll most likely read it on a laptop when i have no internet access.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2023-11-10T00:38:25.091Z · LW(p) · GW(p)

What app do you imagine you will use? A web browser?

Replies from: nicolas-lacombe
comment by Nicolas Lacombe (nicolas-lacombe) · 2023-11-10T01:08:19.247Z · LW(p) · GW(p)

probably some form of web browser: yes.

comment by ryan_b · 2023-11-08T21:07:18.343Z · LW(p) · GW(p)

I register a guess that this is to keep the content of lesswrong from being scraped for LLMs and similar purposes.

Replies from: nicolas-lacombe
comment by Nicolas Lacombe (nicolas-lacombe) · 2023-11-08T21:20:07.130Z · LW(p) · GW(p)

according to this comment [LW(p) · GW(p)], it looks like a member of the lw site dev team is ok with lw being scraped by gpt.