When I decided to design my own website, I had no experience with web development. After 202 days, 2,220+ commits,1 and 1,008 unit tests, I present turntrout.com—the result of my inexperience.

I’m proud of this website and its design. Indulge me and let me explain the choices I made along the way.

A basic rendition of the article "Think carefully before calling RL policies 'agents'". The website looks bare and amateurish.
The beginning of my journey, rendered under my third commit (6e687609) on April 1, 2024.
A pleasing rendition of the article "Think carefully before calling RL policies 'agents'".
Content rendered approximately when this article was first published (31bba104).

The site is a fork of the Quartz static site generator. While the build process is rather involved, here’s what you need to know for this article:

  1. Almost all of my content is written in Markdown.
  2. Each page has its metadata stored in plaintext yaml.
  3. The Markdown pages are transformed in (essentially) two stages; a sequence of transformations is applied to the intermediate representations of each page.
  4. The intermediate representations are emitted as webpages.
  5. The webpages are pushed to Cloudflare and then walk their way into your browser!

With the help of the LessWrong moderation team, I migrated the content from my old blog via their GraphiQL tool. The tool outputs both Markdown and html versions of the posts. However, while attempting to import my posts, I found the included Markdown to be a mess. I was staring at 120 posts’ worth of invalid Markdown, and—I found this out the hard way—the mess was too complicated to RegEx my way out of.

So I decided to convert the html to Markdown on my own using turndown. That solved the formatting issues. I was then confronted with compatibility issues. For example, throughout my six years on my old blog, there were at least three footnote formats which I used. I needed to be able to parse a single format. Now imagine that issue, but sprouting up one hundred-fold.

That took a few months.

This site is hosted by Cloudflare. The site is set up to have few external dependencies. In nearly all cases, I host scripts, stylesheets, and media assets on my cdn. If the rest of the Web went down (besides Cloudflare), turntrout.com would look nearly the same.2 Furthermore, minimizing embeds (e.g. <iframe>s) will minimize the number of invasive tracking cookies.3

My cdn brings me comfort—about 3% of my older image links had already died on LessWrong (e.g. imgur links expired). I think LessWrong now hosts assets on their own cdn. However, I do not want my site’s content to be tied to their engineering and organizational decisions. I want my content to be timeless.

I wrote a script which uploads and backs up relevant media files. Before pushing new assets to my main branch, the script:

  1. Uploads the assets to my cdn (assets.turntrout.com);
  2. Copies the assets to my local mirror of the cdn content;
  3. Removes the assets so they aren’t tracked by my git repo.

I later describe my deployment pipeline in more detail.

The color scheme derives from the Catppuccin “latté” (light mode) and “frappé” (dark mode) palettes.

The four Catppuccin palettes.

I like the pastel palettes provided by Catppuccin:

Light mode swatches: red, orange, yellow, green, blue, purple (plus a 🥰 emoji for reference).
Dark mode swatches: red, orange, yellow, green, blue, purple (plus a 🥰 emoji for reference).
The palettes for light and dark mode. In dark mode, I decrease the saturation of image assets.

I use the darkest text color sparingly. The margin text is medium-contrast, as are e.g. list numbers and bullets. I even used css to dynamically adjust the luminance of favicons which often appear in the margins, so that I don’t have e.g. a jet-black GitHub icon surrounded by lower-contrast text.

Color is important to this website, but I need to be tasteful and strict in my usage or the site turns into a mess. For example, in-line favicons are rendered colorless, even though e.g. YouTube’s logo is definitely red. To choose otherwise is to choose chaos and distraction.

When designing visual content, I consider where the reader’s eyes go. People visit my site to read my content, and so the content should catch their eyes first. The desktop pond gif (with the goose) is the only exception to this rule. I decided that on the desktop, I want a reader to load the page, marvel and smile at the scenic pond, and then bring their eyes to the main text (which has high contrast and is the obvious next visual attractor).

During the build process, I convert all naïve css assignments of color:red (imagine if I made you read this) to the site’s red. Lots of my old equations used raw red/green/blue colors because that’s all that my old blog allowed; these colors are converted to the site theme. I even override and standardize the colors used for syntax highlighting in the code blocks.
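A minimal sketch of that color-standardization pass, as a regex rewrite over emitted styles. The mapping table, variable names, and function name here are illustrative assumptions, not the site's actual code:

```python
import re

# Hypothetical mapping from naive color keywords to theme variables
# (the real site maps onto its Catppuccin-derived palette).
THEME_COLORS = {
    "red": "var(--red)",
    "green": "var(--green)",
    "blue": "var(--blue)",
}

# Matches assignments like "color:red" or "color: Blue".
COLOR_ASSIGNMENT = re.compile(r"color\s*:\s*(red|green|blue)", re.IGNORECASE)

def standardize_colors(html: str) -> str:
    """Rewrite naive inline color assignments to theme variables."""
    return COLOR_ASSIGNMENT.sub(
        lambda m: f"color: {THEME_COLORS[m.group(1).lower()]}", html
    )
```

Swapping the keyword for a css variable (rather than a fixed hex value) is what lets the same markup track both the light and dark palettes.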

As a static webpage, my life is much simpler than the lives of most web developers. However, by default, users would have to wait a few seconds for each page to load, which I consider unacceptable. I want my site to be responsive even on mobile on slow connections.

Quartz offers basic optimizations, such as lazy loading of assets and minifying JavaScript and css files. I further marked the core css files for preloading. However, there are a range of more interesting optimizations which Quartz and I implement.

EB Garamond Regular 8pt takes 260kb as an otf file but compresses to 80kb under the newer woff2 format. In all, the font footprint shrinks from 1.5mb to about 609kb for most pages. I toyed around with font subsetting but it seemed too hard to predict which characters my site never uses. While I could subset each page with only the required glyphs, that would add overhead and complicate client-side caching, likely resulting in a net slowdown.

I use subfont to subset each font across my entire website, taking the font footprint from 609kb to 113kb—a reduction of over 5×! Eventually, the ultimate solution will be progressive font enrichment, which will load just those glyphs needed for a webpage, and then cache those glyphs so that they aren’t reloaded during future calls. Sadly, progressive font enrichment is not yet available.

Among lossy compression formats, there are two kings: avif and webp. Under my tests, they achieved similar (amazing) compression ratios of about 10× over png. For compatibility reasons, I chose avif. The upshot is that images are nearly costless in terms of responsiveness, which is liberating.

To demonstrate this liberty, I perform a statistical analysis of the 941 avif files hosted on my cdn as of November 9, 2024.4 I downloaded each avif file and used magick to convert it back to a png, measuring the size before and after.

Compression ratios: (PNG size) / (AVIF size). A right-skewed histogram with a tail reaching out to 75×.
At first blush, most of the compression ratios seem unimpressive. However, the vast majority of the “images” are favicons which show up next to urls. These images are already tiny as pngs (e.g. 2kb), so avif can only compress them so much.
This friendly avif goose clocks in below 45kb, while its png equivalent weighs 450kb—a 10× increase!
A scatterplot showing dramatic decreases in filesize from PNG to AVIF.
Now the huge savings of avif are clearer.
| Metric | Value |
| --- | --- |
| Total png size | 280mb |
| Total avif size | 25mb |
| Overall space savings | 91% |
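The per-file measurement can be sketched as follows, assuming ImageMagick's magick CLI is installed (as described above); the helper names are mine, not from the actual notebook:

```python
import subprocess
import tempfile
from pathlib import Path

def png_size_of_avif(avif_path: Path) -> int:
    """Round-trip an AVIF back to PNG via ImageMagick; return the PNG size."""
    with tempfile.TemporaryDirectory() as tmp:
        png_path = Path(tmp) / (avif_path.stem + ".png")
        subprocess.run(["magick", str(avif_path), str(png_path)], check=True)
        return png_path.stat().st_size

def compression_ratio(png_bytes: int, avif_bytes: int) -> float:
    """(PNG size) / (AVIF size); higher means AVIF saved more space."""
    return png_bytes / avif_bytes
```

For the goose image above, compression_ratio(450_000, 45_000) gives the quoted 10×.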

Unlike the image case, I’m not yet happy with my video compression. Among modern formats, there appear to be two serious contenders: h265 mp4 (“hevc”) and webm (via the VP9 codec). Reportedly, hevc has better compression than VP9 webm. In practice, I haven’t figured out how to make that happen, and my hevc mp4s remain several times larger than my webms at similar visual quality.

Under my current compression pipeline, webm videos are hilariously well-compressed (if I remember correctly, about 10× over gif and 4× over hevc). However, there is one small problem which is actually big: while Safari technically “supports” webm, Safari refuses to autoplay & loop webms.5

The problem gets worse: although Safari will autoplay & loop hevc, Safari refuses to render transparency. Therefore, for the looping video of the pond (which requires transparency), the only compatible choice is a stupid gif which takes up 561kb instead of 58kb. That asset shows up on every page, so that stings a bit. Inline videos don’t have to be transparent, so I’m free to use hevc for most video assets.

However, after a bunch of tweaking, I still can’t get ffmpeg to sufficiently compress hevc. I’ll fix that later—possibly I need to try a different codec.

I tried using PurgeCSS to remove unused styles, reducing the css footprint from 84kb to 73kb. I couldn’t safely purge selectors from my main stylesheet (there were too many false positives), so the benefit was quite marginal. The purging also caused trouble in my build process, so I removed it.

Even after minification and purging, it takes time for the client to load the main css stylesheet. During this time, the site looks like garbage. One solution is to manually include the most crucial styles in the html header, but that’s brittle.

Instead, I hooked the critical package into the end of the production build process. After emitting the webpages, the process computes which “critical” styles are necessary to display the first glimpse of the page. These critical styles are inlined into the header so that they load immediately, without waiting for the entire stylesheet to load. When the page loads, it quickly notes the status of light vs dark mode and immediately applies the relevant theme. Once the main stylesheet loads, I delete the inlined styles (as they are superfluous at best).

When loading a new page, the micromorph package selectively loads the new elements in the page. The shared elements are not updated, cutting load times.

This website contains many design elements. To maintain a regular, assured style and to avoid patchwork chaos, I made two important design choices.

Exponential font sizing
I fixed a base font size, scaling from 20px on mobile, to 22px on tablets, to 24px on full displays. I read up on how many characters should be on a single line in order to maximize readability—apparently between 50 and 60. On desktop, I set the center column to 750px (yielding about 75 characters per line).6 I decided not to indent paragraphs because that made the left margin boundary too ragged.

After consulting TypeScale, I scaled the font sizes exponentially, multiplying by a fixed ratio for each step down to smaller body text or up to larger headers:

A demonstration of the scale: the five header sizes, from Header 1 down to Header 5, alongside normal body text and progressively smaller text sizes.

All spacing is a simple multiple of a base measurement
If—for example—paragraphs were separated by 3.14 lines of space but headings had 2.53 lines of margin beneath them, that would look chaotic. Instead, I fixed a “base margin” variable and then made all margin and padding calculations be simple fractional multiples (e.g. 1.5×, 2×) of that base margin.

The font family is the open-source EB Garamond. The monospace font is Fira Code VF, which brings a range of ligatures.

A range of programming ligatures offered by Fira Code VF.
Ligatures transform sequences of characters (like “<=”) into a single glyph (like “≤”).
Demonstrating how the monospace font aligns the x-height and cap-heights of common bigrams like 'Fl'.
I love sweating the small stuff. 🙂 Notice how aligned “FlTl” is!

My site contains a range of fun fonts which I rarely use. For example, the Lord of the Rings font “Tengwar Annatar” renders Elvish glyphs.

A
Monochromatic dropcaps seem somewhat illegible.

I have long appreciated illuminated calligraphy. In particular, a dropcap lends gravity and elegance to a text. Furthermore, EB Garamond dropcaps are available.

However, implementation was tricky. As shown with the figure’s “A”, css assigns a single color to each text element. To get around this obstacle, I took advantage of the fact that EB Garamond dropcaps can be split into the foreground and background.

A

However, text blocks other text; only one letter can be in a given spot—right? Wrong! By rendering the normal letter as the background dropcap font, I apply a different (lighter) color to the dropcap background. I then use the css ::before pseudo-element to render another glyph in the foreground. The result:

A

A less theme-disciplined man than myself might even flaunt dropcap colorings!

T H E
P O N D

| Before | After |
| --- | --- |
| "We did not come to fear the future. We came here to shape it." - Barack Obama | “We did not come to fear the future. We came here to shape it.” — Barack Obama |

Undirected quote marks ("test") look bad to me. Call me extra (I am extra), but I ventured to never have undirected quotes on my site. Instead, double and single quotation marks automatically convert to their opening or closing counterparts. This seems like a bog-standard formatting problem, so surely there’s a standard library. Right?

Sadly, no. GitHub-flavored Markdown includes a smartypants option, but honestly, it’s sloppy. smartypants would emit strings like Bill said “‘ello!” (the single quote is oriented incorrectly). So I wrote a bit of code.
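That "bit of code" might look something like this simplified sketch. Note that this naive version would still mishandle the dialect apostrophe in “’ello”, so the real implementation needs further special-casing:

```python
import re

def smartquotes(text: str) -> str:
    """Convert straight quotes to curly quotes with a context heuristic."""
    # Double quotes open after start-of-text, whitespace, or an open bracket.
    text = re.sub('(^|[\\s(\\[{])"', "\\1\u201c", text)
    text = text.replace('"', "\u201d")  # everything else closes
    # Apostrophes inside words (don't, 1990's) face right.
    text = re.sub("(\\w)'(\\w)", "\\1\u2019\\2", text)
    # Remaining singles: open after start/whitespace/bracket, else close.
    text = re.sub("(^|[\\s(\\[{])'", "\\1\u2018", text)
    text = text.replace("'", "\u2019")
    return text
```

The heuristic is purely positional, which is exactly why word-initial elisions trip it up.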

How do the following sentences feel to read?

  1. Signed in the 1990’s, NAFTA was a trade deal.

  2. Signed in the 1990’s, nafta was a trade deal.

Typographically, capital letters are designed to be used one or two at a time—not five in a row. “NAFTA” draws far too much attention to itself. I use regular expressions to detect at least three consecutive capital letters, excluding Roman numerals like XVI.
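A minimal sketch of that detection; the css class name and the exact patterns are illustrative, not the site's real code:

```python
import re

ALL_CAPS = re.compile(r"\b[A-Z]{3,}\b")  # three or more consecutive capitals
ROMAN = re.compile(r"[IVXLCDM]+")        # permissive Roman-numeral check

def to_smallcaps(text: str) -> str:
    """Wrap runs of capitals in a small-caps span, skipping Roman numerals."""
    def replace(match: re.Match) -> str:
        word = match.group(0)
        if ROMAN.fullmatch(word):
            return word  # leave e.g. "XVI" untouched
        return f'<span class="small-caps">{word.lower()}</span>'
    return ALL_CAPS.sub(replace, text)
```

Lowercasing the letters matters: smallcaps fonts render lowercase letters as small capitals, so the visual height drops to match the surrounding text.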

Furthermore, I apply smallcaps to letters which follow numbers (like “100gb”) so that the letters have the same height as the numerals. For similar reasons as smallcaps, most of the site’s numerals are oldstyle (“100”) rather than lining (“100”). I also uppercase the first letter of smallcaps if it begins a sentence or a paragraph element.

Nafta, Wikipedia

The North American Free Trade Agreement (nafta /ˈnæftə/ naf-tə; Spanish: Tratado de Libre Comercio de América del Norte, tlcan; French: Accord de libre-échange nord-américain, aléna) was an agreement signed by Canada, Mexico, and the United States that created a trilateral trade bloc in North America. The agreement came into force on January 1, 1994, and superseded the 1988 Canada–United States Free Trade Agreement between the United States and Canada. The nafta trade bloc formed one of the largest trade blocs in the world by gross domestic product.

Merriam-Webster ordains that—contrary to popular practice—hyphens (-) and em-dashes (—) be used in importantly different situations:

The em dash (—) can function like a comma, a colon, or parenthesis. Like commas and parentheses, em dashes set off extra information, such as examples, explanatory or descriptive phrases, or supplemental facts. Like a colon, an em dash introduces a clause that explains or expands upon something that precedes it.

Technically, en dashes should be used for ranges of dates and numbers. So “p. 202-203” turns into “p. 202–203”, and “Aug-Dec” turns into “Aug–Dec”!

Some hyphens should actually be minus signs. I find raw hyphens (-2) to be distasteful when used with plaintext numbers. I opt for “−2” instead.
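These dash substitutions can be sketched with a few regular expressions (a simplification of whatever the real pipeline does; the month-range pattern in particular is a loose heuristic):

```python
import re

def fix_dashes(text: str) -> str:
    """En dashes for ranges; minus signs for negative numbers."""
    # Number and page ranges: "202-203" -> "202–203".
    text = re.sub(r"(\d)-(\d)", "\\1\u2013\\2", text)
    # Month-style ranges: "Aug-Dec" -> "Aug–Dec".
    text = re.sub(r"\b([A-Z][a-z]{2})-([A-Z][a-z]{2})\b", "\\1\u2013\\2", text)
    # A hyphen directly before a digit, after a space or start: minus sign.
    text = re.sub(r"(^|\s)-(\d)", "\\1\u2212\\2", text)
    return text
```

Order matters: the range rule consumes hyphens between digits first, so the minus-sign rule only fires on genuinely negative numbers.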

Fractions
I chose slanted fractions in order to slightly increase the height of the numerals in the numerator and denominator. People are 2/3 water, but “01/01/2000” should not be rendered as a fraction.
Detecting multipliers
Multipliers like “2×” are 2× more pleasant than “2x.”
Full-width slashes
Used for separators like “cat”／“dog” in place of “cat”/“dog”—note how cramped the EB Garamond halfwidth “/” is!
Mathematical definitions
In the past, I used the symbol ≜ to denote definitions (as opposed to normal equations). I now convert these symbols to the self-explanatory :=.
Superscripting ordinal suffixes
By default, ordinal numbers look a bit strange: 1st. This html transformation allows me to write about what happened on e.g. August 8th.
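Two of these transformations (ordinal superscripting and multiplier detection) can be sketched as simple substitutions; the markup and patterns are illustrative, and e.g. hex literals like 0x1F would need carve-outs in real code:

```python
import re

def superscript_ordinals(html: str) -> str:
    """Turn ordinals like "August 8th" into "August 8<sup>th</sup>"."""
    return re.sub(r"\b(\d+)(st|nd|rd|th)\b", r"\1<sup>\2</sup>", html)

def fix_multipliers(text: str) -> str:
    """Turn "2x" into "2×" when a lone x trails a number."""
    return re.sub(r"\b(\d+)x\b", "\\1\u00d7", text)
```
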

While EB Garamond is a nice font, it has a few problems. As of April 2024, EB Garamond did not support slashed zeroes (the zero feature). The result: zero looked too similar to “o.” Here’s a number rendered in the original font: “100”; in my tweaked font it shows as “100.” Furthermore, the italicized font did not support the cv11 OpenType feature for oldstyle numerals. This meant that the italicized 1 looked like a slanted “1”—too similar to the smallcaps capital I (“I”).

Therefore, I paid Hisham Karim $121 to add these features. I have also notified the maintainer of the EB Garamond font.

This list is not exhaustive.

Tasteful emoji usage helps brighten and vivify an article. However, it seems like there are over 9,000 emoji stylings:

The same “Smiling Face With Hearts” emoji as rendered by Apple, Google, Microsoft, Facebook, Twitter, WhatsApp, Samsung, and LG.

I want the user experience to be consistent, so my build process bakes in the Twitter emoji style: 🥰⭐️✨💘🐟😊🤡😏😮‍💨☺️🥰🎉🤷‍♂️🌊😠🏰❤️😞🙂‍↕️😌🥹🏝️🪂
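Baking in one style amounts to swapping each emoji character for the corresponding Twemoji image, whose filenames are the hyphen-joined hex codepoints. A sketch, where the asset path is my assumption (the images would be self-hosted, per the cdn policy above):

```python
def emoji_codepoints(emoji: str) -> str:
    """Hex codepoints joined by hyphens, the scheme Twemoji filenames use."""
    return "-".join(f"{ord(ch):x}" for ch in emoji)

def twemoji_img(emoji: str) -> str:
    """Replace an emoji character with a fixed-style image element."""
    name = emoji_codepoints(emoji)
    # The asset path is illustrative.
    return f'<img class="emoji" src="/static/twemoji/{name}.svg" alt="{emoji}">'
```

Keeping the original character in the alt attribute preserves copy-paste and screen-reader behavior.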

Favicons are those little website icons you see in your tab bar. Inspired by gwern.net and Wikipedia, I decided to show favicons next to links. Including favicons has several benefits, from “the reader often knows what to expect” to “it just looks nice.”

I wrote a server-side html transformation implementing the following algorithm:

  1. Takes as input a semi-processed html syntax tree,
  2. Finds all of the link elements,
  3. Checks what favicon (if any) is available for each,
  4. Downloads the favicon if needed,
  5. Appends a favicon <img> element after the link.

There remains a wrinkle: How can I ensure the favicons look good? As gwern noted, inline favicons sometimes appear on the next line (detached from their link). This looks bad—just like it would look bad if your browser displayed the last letter of a word on the next line, all on its own.

To tackle this, the favicon transformation doesn’t just append an <img> element. Basically, I make a new <span> which acts as a “favicon sandwich”, packaging both the last few letters of the link text and then the favicon <img> element. The <span> is styled so that if the favicon element is wrapped, the last few letters will be wrapped as well. To ensure legibility in both light and dark mode, I also dynamically style certain favicons, including this site’s favicon: Favicon for turntrout.com.
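The "favicon sandwich" can be sketched as a small html-producing helper; the class names and the four-letter cutoff are illustrative assumptions:

```python
def favicon_sandwich(link_text: str, favicon_url: str, keep: int = 4) -> str:
    """Bundle the last few letters of a link with its favicon so the icon
    can never wrap onto a line by itself."""
    head, tail = link_text[:-keep], link_text[-keep:]
    img = f'<img class="favicon" src="{favicon_url}">'
    return f'{head}<span class="favicon-span">{tail}{img}</span>'
```

Because the span is an atomic inline unit, a line break before the icon drags the last letters along with it, mimicking how a browser keeps the end of a word intact.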

I love these “admonition” bubbles which contain information. When an admonition is collapsed by default, the reader can decide whether or not they want more detail on a topic, reducing ambient frustration.

Often, websites embed diagrams as images. However, I find this unsatisfying for several reasons:

  1. Inconsistent styling as several different diagram suites may be used to generate images—the diagrams often use different color palettes,
  2. Bloated page size from embedding sparse graphical information into dense image data, and
  3. Inability to adapt to shifts between light and dark mode.

Mermaid diagrams fix these problems. The main downside was the extra difficulty of generating diagrams, but modern multimodal llms can easily take an image of a diagram and output valid Mermaid code. The diagrams are rendered server-side, avoiding a bulky JavaScript download.

A diagram from my Eliciting Latent Knowledge proposal, linking the entire video, an action sequence, the human, the human query function, question(s), and answer(s).

Popovers
Quartz comes with interactive popover previews for internal links, such as footnotes or section references. Desktop users can view popovers by hovering over an internal link. The Favicon for turntrout.com favicon appears for links to other pages on the site, while the Counterclockwise loop icon is used for within-page links.
Search
Also packaged in vanilla Quartz, my site is searchable with live content previews—rendering the entire page on the desktop view. To accord with classic keybindings, I ensured that the search window can be toggled by pressing /.
Metadata
Every page has an html description and tags (if appropriate), along with a table of contents which (on desktop) highlights the current section. I track original publication date and display when each page was last modified by a git push to the main branch. I also support “sequences” of blog posts:
The sequence metadata for my post on shard theory.
Spoilers hide text until hovered
I made a Markdown plugin which lets me specify spoilers by starting the line with >!. The results are unobtrusive but pleasant:

Have you heard? Snape kills Dumbledore.

Server-side math rendering via KaTeX
I initially chose KaTeX over MathJax due to its faster client-side rendering speed. However, now I render the math server-side so all the client has to do is download katex.min.css (27kb). Easy.

I quickly learned the importance of comprehensive tests and documentation. The repository now has very strong code health. My test suite protects my site from so many errors. Before a new commit touches the live site, it must pass a gauntlet of challenges:

  1. The pre-commit git hook runs before every commit is finalized.
  2. The pre-push hook runs before commits are pushed to the main branch.
  3. GitHub Actions ensure that the site still works properly on the remote server.

Lastly, external static analysis alerts me to potential vulnerabilities and anti-patterns. If somehow a bad version slips through anyways, Cloudflare allows me to instantly revert the live site to a previous good version.

lint-staged improves the readability and consistency of my code. While I format some filetypes on save, there are a lot of files and a lot of types. Therefore, my package.json specifies what linting & formatting tools to run on what filetypes:

"lint-staged": {
 "*.{js, jsx, ts, tsx, css, scss, json}": "prettier --write",
 "*.fish": "fish_indent",
 "*.sh": "shfmt -i 2 -w",
 "*.py": [
     "autoflake --in-place",
     "isort", 
     "autopep8 --in-place",
     "black"
    ]
}

I also run docformatter to reformat my Python comments. For compatibility reasons, docformatter runs before lint-staged in my pre-commit hook.

Whenever I find a bug, I attempt to automatically detect it in the future. The result is this long pipeline of checks, designed to surface errors which would take a long time to notice manually. The push operation is aborted if any of the following checks7 fail.

I run eslint --fix to automatically fix up my TypeScript files. By using eslint, I maintain a high standard of code health, avoiding antipatterns such as declaring variables using the any type. I also run stylelint --fix to maintain scss quality, and I check that pylint rates my Python code health at 10/10.

I use mypy to statically type-check my Python code. Since my JavaScript files are actually TypeScript, the compiler already raises exceptions when there’s a type error.

I run a multi-purpose spellchecking tool. The tool maintains a whitelist dictionary which the user adds to over time. Potential mistakes are presented to the user, who indicates which ones are real. The false positives are ignored next time. The spellchecker also surfaces common hiccups like “the the.”

I then lint my Markdown links for probable errors. I found that I might mangle a Markdown link as [here's my post on shard theory](shard-theory). However, the link url should start with a slash: /shard-theory. My script catches these. I check the yaml metadata, ensuring that each article has required fields filled in (like title and description). I also check that no pages attempt to share a url.
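The link check can be sketched as a regex flagging internal targets that lack a leading slash, scheme, or anchor prefix (a simplification; relative paths like "./img.png" would need carve-outs in practice):

```python
import re

# An internal target with no scheme, leading slash, or anchor prefix is
# probably a mangled page reference like "](shard-theory)".
BAD_INTERNAL_LINK = re.compile(r"\]\((?!https?://|/|#|mailto:)[^)\s]+\)")

def find_bad_links(markdown: str) -> list[str]:
    """Return suspicious link targets for manual review."""
    return BAD_INTERNAL_LINK.findall(markdown)
```
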

I check that my LaTeX expressions avoid using \tag{...}, as that command wrecks the formatting in the rendered html.

I lastly check that my css:

  1. Defines font-faces using fonts which actually exist in the filesystem, and
  2. Does not reference font families which are never defined.

As of first posting, I have 843 JavaScript unit tests and 164 pytest Python tests. I am quite thorough—these tests are my pride and joy. 🙂 Writing tests is easy these days. I use Cursor—AI churns out dozens of high-coverage lines of test code in seconds, which I then skim for quality assurance.

Pure unit tests cannot test the end-to-end experience of my site, nor can they easily interact with a local server. playwright lets me test dynamic features like search, spoiler blocks, and light/dark mode. What’s more, I test these features across a range of browsers and viewport dimensions (mobile vs desktop).

Many errors cannot be caught by unit tests. For example, I want to ensure that my site keeps looking good—this cannot (yet) be automated. To do so, I perform visual regression testing. The testing also ensures that the overall site theme is retained over time and not nuked by unexpected css interactions.

I use playwright to interact with my website and argos-ci to take stable pictures of the website. playwright renders the site at pre-specified locations, at which point argos-ci takes pictures and compares those pictures to previously approved reference pictures. If the pictures differ by more than a small percentage of pixels, I’m given an alert and can view a report containing the pixel-level diffs. Using argos-ci helps reduce flakiness and track the evolution of the site.

An image of a mountain is changed to have snow on top. The pixel-level diff is highlighted to the user.
playwright and argos-ci can tell you “hey, did you mean for your picture of a mountain to now have snow on it?”.

However, it’s not practical to test every single page. So I have a test page which stably demonstrates site features. My tests screenshot that page from many angles. I also use visual regression testing to ensure the stability of features like search.

At this point, I also check that the server builds properly.

My goal is a zero-hassle process for adding assets to my website. In order to increase resilience, I use Cloudflare R2 to host assets which otherwise would bloat the size of my git repository.

I edit my Markdown articles in Obsidian. When I paste an asset into the document, the asset is saved in a special asset_staging/ directory. Later, when I move to push changes to my site, the following algorithm runs:

  1. Move any assets from asset_staging/ to a slightly more permanent static/ asset directory, updating any filepath references in the Markdown articles;
  2. Compress all relevant assets within static/, updating filepath references appropriately;
  3. Run exiftool to strip Exif metadata from images, preventing unintended information leakage;
  4. Upload the assets to assets.turntrout.com, again updating references in the Markdown files;8
  5. Copy the assets to my local mirror of my R2 asset bucket (in case something happens to Cloudflare).
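Steps 1 and 3 can be sketched as below. The paths and function names are assumptions based on the description above, and the reference rewriting, compression, and upload steps are omitted; "-all=" is exiftool's standard flag for deleting all writable metadata:

```python
import shutil
import subprocess
from pathlib import Path

IMAGE_SUFFIXES = {".jpg", ".jpeg", ".png", ".avif"}

def stage_assets(staging: Path, static: Path) -> list[Path]:
    """Move staged assets into static/ and strip Exif data from images."""
    moved = []
    for asset in sorted(staging.iterdir()):
        dest = static / asset.name
        shutil.move(str(asset), str(dest))
        if dest.suffix.lower() in IMAGE_SUFFIXES:
            # Delete all writable metadata in place.
            subprocess.run(
                ["exiftool", "-all=", "-overwrite_original", str(dest)],
                check=True,
            )
        moved.append(dest)
    return moved
```
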

While this pipeline took several weeks of part-time coding to iron out, I’m glad I took the time.

Over time, links decay and rot, eventually emitting 404 errors. Unlike gwern, I do not yet have a full solution to this problem. However, links I control should never 404:

I use linkchecker to validate these links.

At this point, I check the built pages for a smattering of possible errors:

  • Links to my local server (localhost:8080) which validate but will become invalid on the Web;
  • I might have disabled favicon rendering to increase build speed;
  • Common Markdown errors:
    • Footnotes may be unmatched (e.g. I deleted the reference to a footnote without deleting its content, leaving the content exposed in the text);
    • Incorrectly terminated blockquotes;
    • Unrendered emphasis markers (often indicated by a trailing * or _);
    • Failing to render spoiler boxes;
    • Failed attempts to specify a <figcaption> element;
    • Failed renders of html elements;
    • Assets present in the Markdown file but which are not present in the html dom;
    • Mentioning usernames (like TurnTrout) without setting them as inline code;
  • Certain kinds of dead links which linkchecker won’t catch:
    • Anchor links which don’t exist;
    • Duplicate anchor targets on a page;
    • git-hosted assets, stylesheets, or scripts which don’t exist;
  • Duplicate id attributes on a page’s html elements;
  • Metadata validity, including:
    • Ensure page descriptions exist and are not too long for social media previews;
  • Failures of my text prettification pipeline:
    • Non-smart quotation marks (e.g. ' or ");
    • Multiple dashes in a row;
  • KaTeX rendering errors;
  • Failure to inline the critical css;
  • Rss file generation failure.

Reordering elements in <head> to ensure social media previews
I want nice previews for my site. Unfortunately, the behavior was flaky—working on Facebook, not on Twitter, not on Slack, working on Discord… Why? I had filled out all of the OpenGraph fields.

Apparently, Slack only reads the metadata from the first portion of the <head>. However, my OpenGraph <meta> tags were further back, so they weren’t getting read in. Different sites read different lengths of the <head>, explaining the flakiness.

The solution: Include tags like <meta> and <title> as early as possible in the <head>. As a post-build check, I ensure that these tags are confined to the first 9kb of each file.
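The post-build check can be sketched as follows (the function name is mine; a real check would also handle tags straddling the 9kb boundary):

```python
def critical_tags_early(html: str, limit: int = 9 * 1024) -> bool:
    """Check that every <meta> and <title> tag begins within the first
    `limit` bytes, where lazy crawlers will actually see them."""
    data = html.encode("utf-8")
    head = data[:limit]  # what a crawler that truncates <head> might read
    # If any tag occurs in the full file but not the head, it's too late.
    return all(
        data.count(tag) == head.count(tag) for tag in (b"<meta", b"<title")
    )
```
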

Updating page metadata
For posts which are being pushed for the first time, I set their publication date. For posts which have been updated since the last push, I update their “last updated” date.
Cryptographic timestamping
I concatenate the sha-1 commit hashes of all commits being pushed to main and hash their concatenation with sha-256. Using a slight variant of gwern’s timestamping procedure, I use OriginStamp to commit the sha-256 hash to the blockchain by the next day.

By committing the hash to the blockchain, I provide cryptographic assurance that I have in fact published the claimed commits by the claimed date. This reduces (or perhaps eliminates) the possibility of undetectably “hiding my tracks” by silently editing away incorrect or embarrassing claims after the fact, or by editing my commit history.
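The hashing step described above is straightforward to sketch (the function name is mine):

```python
import hashlib

def push_fingerprint(commit_hashes: list[str]) -> str:
    """Concatenate the pushed commits' sha-1 hashes, then take the sha-256
    of the concatenation; this digest is what gets timestamped."""
    concatenated = "".join(commit_hashes).encode("utf-8")
    return hashlib.sha256(concatenated).hexdigest()
```

Anyone holding the git history can recompute the digest and compare it against the timestamped value, which is what makes silent history edits detectable.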

I use DeepSource to analyze and lint the repository. DeepSource serves multiple roles:

  1. A verbose linter which surfaces a huge range of antipatterns. For example, in Python it points out variables which are redeclared from an outer scope.
  2. An autofix tool which—for a subset of issues—can create a pull request fixing the issues.

I try to keep the repository clean of DeepSource issues, but it does point out a lot of unimportant issues (which I ignore). Sadly, their command-line tool cannot be configured to only highlight sufficiently important problems. So the DeepSource analysis is not yet part of my automated pre-push hook.

Thanking people who helped with this site

Emma Fickel decisively pushed me to create this site, which has been one of my great joys of 2024. The LessWrong moderators helped me export my post data. Chase Denecke provided initial encouragement and expertise. Garrett Baker filed several bug reports. Thomas Kwa trialed an integration of Plot.ly graphs.

Asset attributions

The Plus sign and Heart icon are sourced from the “Dazzle Line Icons” collection under the CC attribution license. The link callout icon A single link from a chain and the same-page “favicon” A counterclockwise arrow are sourced from Solar Icons on svg repo. The Twitter emoji styling is from the Twemoji repository.

LessWrong inspired the “previous/next” sequence navigation interface. gwern.net inspired inline link icons, dropcaps, linkchecker, and cryptographic timestamping.

Black and white trout

Find out when I post more content: newsletter & RSS

Thoughts? Email me at alex@turntrout.com

  1. I counted my commits by running git log --author="Alex Turner" --oneline | wc -l.

  2. Examples of content which is not hosted on my website: There are several <iframe> embeds (e.g. Google forms and such). I also use the privacy-friendlier umami.is analytics service—the script is loaded from their site.

  3. To avoid YouTube tracking cookies, I even self-host AI presidents discuss AI alignment agendas.

  4. I used a publicly accessible Colab to generate the avif vs. png compression graphs.

  5. Safari does support hevc-encoded mp4s, but only if they are tagged with hvc1 and not hev1. To “autoplay” these mp4s, I had to include the src= attribute in the video tag and then wait for the user to interact with the page. Apparently Firefox doesn’t support hevc, so I’ll need to add alternative Firefox-compatible <source/>s.

  6. 60 characters per line seemed awkwardly narrow to me, so I went for 75 per line.

  7. For clarity, I don’t present the pre-push hook operations in their true order.

  8. When I upload assets to Cloudflare R2, I have to be careful. By default, the upload will overwrite existing assets. If I have a namespace collision and accidentally overwrite an older asset which happened to have the same name, there’s no way for me to know without simply realizing that an older page no longer shows the older asset. For example, links to the older asset would still validate under linkchecker. Therefore, I disable overwrites by default and instead print a warning that an overwrite was attempted.