When I decided to design my own website, I had no experience with web development. After 202 days, 2,220+ commits,[^1] and 1,008 unit tests, I present turntrout.com—the result of my inexperience.
I’m proud of this website and its design. Indulge me and let me explain the choices I made along the way.
The site is a fork of the Quartz static site generator. While the build process is rather involved, here’s what you need to know for this article:
Almost all of my content is written in Markdown.
Each page has its metadata stored in plaintext yaml.
The Markdown pages are transformed in (essentially) two stages; a sequence of transformations is applied to the intermediate representations of each page.
The intermediate representations are emitted as webpages.
The webpages are pushed to Cloudflare and then walk their way into your browser!
More detail on the transformations
Text transformations operate on the raw text content of each page.
Html transformations operate on the next stage. Basically, after all the text gets transformed into other text, the Markdown document gets parsed into some proto-html. The proto-html is represented as an abstract syntax tree. The upshot: html transformations can be much more fine-grained. For example, I can easily avoid modifying links themselves.
```typescript
/**
 * Replaces hyphens with en dashes in number ranges
 * Number ranges should use en dashes, not hyphens.
 * Allows for page numbers in the form "p.206-207"
 *
 * @returns The text with en dashes in number ranges
 */
export function enDashNumberRange(text: string): string {
  // `chr` is defined elsewhere in the source file; it's presumably a
  // single-letter pattern, permitting ranges like "206a-207b"
  return text.replace(
    new RegExp(`\\b(?<!\\.)((?:p\\.?)?\\d+${chr}?)-(${chr}?\\d+)(?!\\.\\d)\\b`, "g"),
    "$1–$2",
  )
}
```
I wouldn’t want to apply this transform to raw text because it would probably break link addresses (which often contain hyphenated sequences of numbers). However, many html transforms aren’t text → text.
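To illustrate the fine-grained control, here's a minimal sketch (not the site's actual plugin code) of a transform that rewrites only text nodes, using the unist-util-visit tree walker; the helper name is mine:

```typescript
import type { Root, Text } from "hast"
import { visit } from "unist-util-visit"

// Sketch: apply a text → text function to visible text only. Link
// addresses live in element properties (href), so they are never touched.
export function mapTextNodes(tree: Root, fn: (s: string) => string): void {
  visit(tree, "text", (node: Text, _index, parent) => {
    // Skip code blocks, where "fixing" dashes or quotes would be wrong
    if (parent?.type === "element" && parent.tagName === "code") return
    node.value = fn(node.value)
  })
}
```

Running enDashNumberRange through a walker like this is what keeps it away from urls.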
With the help of the LessWrong moderation team, I migrated the content from my old blog via their GraphiQL tool. The tool outputs both Markdown and html versions of the posts. However, while attempting to import my posts, I found the included Markdown to be a mess. I was staring at 120 posts’ worth of invalid Markdown, and—I found this out the hard way—the mess was too complicated to RegEx my way out of.
So I decided to convert the html to Markdown on my own using turndown. That solved the formatting issues. I was then confronted with compatibility issues. For example, throughout my six years on my old blog, there were at least three footnote formats which I used. I needed to be able to parse a single format. Now imagine that issue, but sprouting up one hundred-fold.
This site is hosted by Cloudflare. The site is set up to have few external dependencies. In nearly all cases, I host scripts, stylesheets, and media assets on my cdn. If the rest of the Web went down (besides Cloudflare), turntrout.com would look nearly the same.[^2] Furthermore, minimizing embeds (e.g. <iframe>s) will minimize the number of invasive tracking cookies.[^3]
My cdn brings me comfort—about 3% of my older image links had already died on LessWrong (e.g. imgur links expired). I think LessWrong now hosts assets on their own cdn. However, I do not want my site’s content to be tied to their engineering and organizational decisions. I want my content to be timeless.
I wrote a script which uploads and backs up relevant media files. Before pushing new assets to my main branch, the script:
Uploads the assets to my cdn (assets.turntrout.com);
Copies the assets to my local mirror of the cdn content;
Removes the assets so they aren’t tracked by my git repo.
I like the pastel palettes provided by Catppuccin:
(Palette swatches: red, orange, yellow, green, blue, and purple, each in a light mode and a dark mode variant.)
The palettes for light and dark mode. In dark mode, I decrease the saturation of image assets.
I use the darkest text color sparingly. The margin text is medium-contrast, as are e.g. list numbers and bullets. I even used css to dynamically adjust the luminance of favicons which often appear in the margins, so that I don’t have e.g. a jet-black GitHub icon surrounded by lower-contrast text.
Color is important to this website, but I need to be tasteful and strict in my usage or the site turns into a mess. For example, inline favicons are rendered without color, even when the brand’s logo has one (YouTube’s logo is definitely red). To choose otherwise is to choose chaos and distraction.
When designing visual content, I consider where the reader’s eyes go. People visit my site to read my content, and so the content should catch their eyes first. The desktop pond gif (with the goose) is the only exception to this rule. I decided that on the desktop, I want a reader to load the page, marvel and smile at the scenic pond, and then bring their eyes to the main text (which has high contrast and is the obvious next visual attractor).
During the build process, I convert all naïve css assignments of color:red (imagine if I made you read this) to the site’s red. Lots of my old equations used raw red /green /blue colors because that’s all that my old blog allowed; these colors are converted to the site theme. I even override and standardize the colors used for syntax highlighting in the code blocks.
Since my site is a static webpage, my life is much simpler than the lives of most web developers. However, by default, users would have to wait a few seconds for each page to load, which I consider unacceptable. I want my site to be responsive even on mobile devices with slow connections.
Quartz offers basic optimizations, such as lazy loading of assets and minifying JavaScript and css files. I further marked the core css files for preloading. However, there is a range of more interesting optimizations which Quartz and I implement.
EB Garamond Regular 8pt takes 260kb as an otf file but compresses to 80kb under the newer woff2 format. In all, the font footprint shrinks from 1.5mb to about 609kb for most pages. I toyed around with font subsetting but it seemed too hard to predict which characters my site never uses. While I could subset each page with only the required glyphs, that would add overhead and complicate client-side caching, likely resulting in a net slowdown.
I use subfont to subset each font across my entire website, taking the font footprint from 609kb to 113kb—a reduction of over 5×! Eventually, the ultimate solution will be progressive font enrichment, which will load just those glyphs needed for a webpage, and then cache those glyphs so that they aren’t reloaded during future calls. Sadly, progressive font enrichment is not yet available.
Among lossy compression formats, there are two kings: avif and webp. Under my tests, they achieved similar (amazing) compression ratios of about 10× over png. For compatibility reasons, I chose avif. The upshot is that images are nearly costless in terms of responsiveness, which is liberating.
To demonstrate this liberty, I perform a statistical analysis of the 941 avif files hosted on my cdn as of November 9, 2024.[^4] I downloaded each avif file and used magick to convert it back to a png, measuring the size before and after.
At first blush, most of the compression ratios seem unimpressive. However, the vast majority of the “images” are favicons which show up next to urls. These images are already tiny as pngs (e.g. 2kb), so avif can only compress them so much. This friendly avif goose clocks in below 45kb, while its png equivalent weighs 450kb—a 10× increase! Now the huge savings of avif are clearer.
Unlike the image case, I’m not yet happy with my video compression. Among modern formats, there appear to be two serious contenders: h265 mp4 (“hevc”) and webm (via the VP9 codec). Reportedly, hevc has better compression than VP9 webm. In practice, I haven’t figured out how to make that happen, and my hevc mp4s remain several times larger than my webms at similar visual quality.
Under my current compression pipeline, webm videos are hilariously well-compressed (if I remember correctly, about 10× over gif and 4× over hevc). However, there is one small problem which is actually big: while Safari technically “supports” webm, Safari refuses to autoplay & loop webms.[^5]
The problem gets worse: although Safari will autoplay & loop hevc, Safari refuses to render transparency. Therefore, for the looping video of the pond (which requires transparency), the only compatible choice is a stupid gif which takes up 561kb instead of 58kb. That asset shows up on every page, so that stings a bit. Inline videos don’t have to be transparent, so I’m free to use hevc for most video assets.
However, after a bunch of tweaking, I still can’t get ffmpeg to sufficiently compress hevc. I’ll fix that later—possibly I need to try a different codec.
I tried using PurgeCSS to remove unused styles, reducing the css footprint from 84kb to 73kb. I couldn’t safely purge selectors from my main stylesheet (there were too many false positives), so the benefit was quite marginal. The purging caused trouble in my build process and had little benefit, so I removed it.
Even after minification and purging, it takes time for the client to load the main css stylesheet. During this time, the site looks like garbage. One solution is to manually include the most crucial styles in the html header, but that’s brittle.
Instead, I hooked the critical package into the end of the production build process. After emitting the webpages, the process computes which “critical” styles are necessary to display the first glimpse of the page. These critical styles are inlined into the header so that they load immediately, without waiting for the entire stylesheet to load. When the page loads, it quickly notes the status of light vs dark mode and immediately applies the relevant theme. Once the main stylesheet loads, I delete the inlined styles (as they are superfluous at best).
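Here's roughly what that hook boils down to (a sketch using the critical package's documented options; the paths and viewport are assumptions, not my actual build code):

```typescript
import { generate } from "critical"

// For each emitted page, compute the above-the-fold styles and inline
// them into <head>. The full stylesheet still loads asynchronously.
await generate({
  base: "public/",                // root of the emitted site
  src: "index.html",              // page to analyze
  target: { html: "index.html" }, // overwrite the page, critical css inlined
  inline: true,
  width: 1300,                    // viewport used to decide what is "critical"
  height: 900,
})
```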
When loading a new page, the micromorph package selectively loads the new elements in the page. The shared elements are not updated, cutting load times.
This website contains many design elements. To maintain a regular, assured style and to avoid patchwork chaos, I made two important design choices.
Exponential font sizing
I fixed a base font size which scales from 20px on mobile, to 22px on tablets, to 24px on full displays. I read up on how many characters should be on a single line in order to maximize readability—apparently between 50 and 60. On desktop, I set the center column to 750px (yielding about 75 characters per line).[^6] I decided not to indent paragraphs because that made the left margin boundary too ragged.
After consulting TypeScale, I scaled the font size by a factor of 1.2^n, with n = 0 for body text and n ≥ 1 for headers:
(Demonstration: “Header 1” through “Header 5” shrink exponentially, with correspondingly smaller body text beneath each.)
All spacing is a simple multiple of a base measurement
If—for example—paragraphs were separated by 3.14 lines of space but headings had 2.53 lines of margin beneath them, that would look chaotic. Instead, I fixed a “base margin” variable and then made all margin and padding calculations be simple fractional multiples (e.g. 1.5×, 2×) of that base margin.
I have long appreciated illuminated calligraphy. In particular, a dropcap lends gravity and elegance to a text. Furthermore, EB Garamond dropcaps are available.
However, implementation was tricky. As shown with the figure’s “A”, css assigns a single color to each text element. To get around this obstacle, I took advantage of the fact that EB Garamond dropcaps can be split into the foreground and background.
(Figure: a dropcap “A” rendered in a single color.)
However, text blocks other text; only one letter can be in a given spot—right? Wrong! By rendering the normal letter as the background dropcap font, I apply a different (lighter) color to the dropcap background. I then use the css ::before pseudo-element to render another glyph in the foreground. The result:
(Figure: the finished two-tone dropcap “A,” with a lighter background glyph behind a darker foreground glyph.)
Dropcap css
The core of the css: the dropcap letter itself is set in the background dropcap font with a lighter color, while a ::before pseudo-element renders the foreground glyph on top in a darker color.
Undirected quote marks ("test") look bad to me. Call me extra (I am extra), but I ventured to never have undirected quotes on my site. Instead, double and single quotation marks automatically convert to their opening or closing counterparts. This seems like a bog-standard formatting problem, so surely there’s a standard library. Right?
Sadly, no. GitHub-flavored Markdown includes a smartypants option, but honestly, it’s sloppy. smartypants would emit strings like Bill said “’ello!” (the single quote is oriented incorrectly). So I wrote a bit of code.
RegEx for smart quotes
```typescript
/**
 * Replaces quotes with smart quotes
 * @returns The text with smart quotes
 */
export function niceQuotes(text: string): string {
  // Single quotes
  //
  // Ending comes first so as to not mess with the open quote (which
  // happens in a broader range of situations, including e.g. 'sup)
  const endingSingle = `(?<=[^\\s“'])['](?!=')(?=s?(?:[\\s.!?;,\\)—\-]|$))`
  text = text.replace(new RegExp(endingSingle, "gm"), "’")

  // Contractions are sandwiched between two letters
  const contraction = `(?<=[A-Za-z])['](?=[a-zA-Z])`
  text = text.replace(new RegExp(contraction, "gm"), "’")

  // Beginning single quotes
  const beginningSingle = `(^|[\\s“"])['](?=\\S)`
  text = text.replace(new RegExp(beginningSingle, "gm"), "$1‘")

  // Double quotes
  //
  const beginningDouble = new RegExp(
    `(?<=^|\\s|[\\(\\/\\[\\{\-—])["](?=\\.{3}|[^\\s\\)\\—,!?;:/.\\}])`,
    "gm",
  )
  text = text.replace(beginningDouble, "“")

  // Open quote after brace (generally in math mode)
  text = text.replace(new RegExp(`(?<=\\{)( )?["]`, "g"), "$1“")

  const endingDouble = `([^\\s\\(])["](?=[\\s/\\).,;—:\-\\}!?]|$)`
  text = text.replace(new RegExp(endingDouble, "g"), "$1”")

  // If end of line, replace with right double quote
  text = text.replace(new RegExp(`["]$`, "g"), "”")

  // If single quote has a right double quote after it, replace with right single and then double
  text = text.replace(new RegExp(`'(?=”)`, "g"), "’")

  // Periods inside quotes
  const periodRegex = new RegExp(`(?<![!?])([’”])(?!\\.\\.\\.)\\.`, "g")
  text = text.replace(periodRegex, ".$1")

  // Commas outside of quotes
  const commaRegex = new RegExp(`(?<![!?]),([”’])`, "g")
  text = text.replace(commaRegex, "$1,")

  return text
}
```
This code has 45 unit tests all on its own.
This logic seems quite robust—I recommend it if you’re looking for smart quote detection. However, there’s a problem. niceQuotes is called on each text node in the html abstract syntax tree (ast). Sometimes, the dom gets in the way. Consider the end of a Markdown quote, _I hate dogs_". Its ast is:
<em> node: I hate dogs
Text node: "
niceQuotes is called on each substring, so we get two calls. The first only processes the contents of the <em> node, which isn’t changed. However, what should niceQuotes(") output? The intended output changes with the context—is it an end quote or a beginning quote?
Considering the broader problem:
Within a parent text container, there are n elements,
The quotes should be transformed appropriately, and
The overall operation should not create or delete elements.
The solution? Roughly (a code sketch follows the list):
Convert the parent container’s contents to a string s, delimiting separations with a private-use Unicode character (to avoid unintended matches),
Relax the niceQuotes RegEx to allow (and preserve) the private-use characters, treating them as boundaries of a “permeable membrane” through which contextual information flows,
Apply niceQuotes to s, receiving another string with the same number of elements implied,
For all k, set element k’s text content to the segment starting at private Unicode occurrence k.
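Here's a minimal sketch of steps 1, 3, and 4 (the node shape and function name are illustrative, not the site's actual implementation):

```typescript
// U+E000 is a private-use codepoint, so it cannot collide with real content
const SEP = "\uE000"

// Apply a string → string transform across sibling text nodes, giving the
// transform full context while preserving the element boundaries
function transformAcrossNodes(
  nodes: Array<{ value: string }>,
  transform: (s: string) => string,
): void {
  const joined = nodes.map((node) => node.value).join(SEP)
  const segments = transform(joined).split(SEP)
  if (segments.length !== nodes.length) {
    throw new Error("transform must neither create nor delete separators")
  }
  segments.forEach((segment, k) => {
    nodes[k].value = segment
  })
}
```

With this in place, niceQuotes sees the trailing `"` with its full left context and correctly emits a closing quote.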
I use this same strategy for other formatting improvements, including hyphen replacement.
Typographically, capital letters are designed to be used one or two at a time—not five in a row. “NAFTA” draws far too much attention to itself. I use regular expressions to detect runs of at least three consecutive capital letters (excluding Roman numerals like XVI) and render them in smallcaps.
Furthermore, I apply smallcaps to letters which follow numbers (like “100gb”) so that the letters have the same height as the numerals. For similar reasons as smallcaps, most of the site’s numerals are oldstyle (“100”) rather than lining (“100”). I also uppercase the first letter of smallcaps if it begins a sentence or a paragraph element.
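As a sketch, the detection might look like this (my approximation with a hypothetical wrapper, not the site's exact RegEx or markup):

```typescript
// Match runs of 3+ capitals, but skip pure Roman numerals like "XVI"
const capsRunRegex = /\b(?![IVXLCDM]+\b)[A-Z]{3,}\b/g

// Hypothetical wrapper: mark matches so css can render them in smallcaps
function markAcronyms(text: string): string {
  return text.replace(capsRunRegex, (run) => `<abbr class="small-caps">${run}</abbr>`)
}
```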
The em dash (—) can function like a comma, a colon, or parenthesis. Like commas and parentheses, em dashes set off extra information, such as examples, explanatory or descriptive phrases, or supplemental facts. Like a colon, an em dash introduces a clause that explains or expands upon something that precedes it.
Technically, en dashes should be used for ranges of dates and numbers. So “p. 202-203” turns into “p. 202–203”, and “Aug-Dec” turns into “Aug–Dec”!
Some hyphens should actually be minus signs. I find raw hyphens (-2) to be distasteful when used with plaintext numbers. I opt for “−2” instead.
I chose slanted fractions in order to slightly increase the height of the numerals in the numerator and denominator. People are 2/3 water, but “01/01/2000” should not be rendered as a fraction.
Detecting multipliers
Multipliers like “2×” are 2× more pleasant than “2x.”
Full-width slashes
Used for separators like “cat” /“dog” in place of “cat” / “dog”—note how cramped the EB Garamond halfwidth “/” is!
Mathematical definitions
In the past, I used the := symbol to denote definitions (as opposed to normal equations). I now convert these symbols to the self-explanatory =def.
Superscripting ordinal suffixes
By default, ordinal numbers look a bit strange: 1st. This html transformation allows me to write about what happened on e.g. August 8th.
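Hedged sketches of these small transforms—minus signs, multipliers, definitions, and ordinal suffixes (my approximations; the output markup is also just one possibility):

```typescript
// Hyphens → minus signs before plaintext numbers: "-2" → "−2"
const minusSign = (s: string) => s.replace(/(?<=^|\s)-(?=\d)/g, "−")

// Multipliers: "2x" → "2×"
const multiplier = (s: string) => s.replace(/(?<=\d)x\b/g, "×")

// Definitions: ":=" → "=def" (one possible markup)
const definition = (s: string) => s.replace(/:=/g, "=<sub>def</sub>")

// Ordinal suffixes: "8th" → "8<sup>th</sup>"
const ordinal = (s: string) => s.replace(/\b(\d+)(st|nd|rd|th)\b/g, "$1<sup>$2</sup>")
```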
While EB Garamond is a nice font, it has a few problems. As of April 2024, EB Garamond did not support slashed zeroes (the zero feature). The result: zero looked too similar to “o.” Here’s a number rendered in the original font: “100”; in my tweaked font it shows as “100.” Furthermore, the italicized font did not support the cv11 OpenType feature for oldstyle numerals. This meant that the italicized 1 looked like a slanted “1”—too similar to the smallcaps capital I (“I”).
Therefore, I paid Hisham Karim $121 to add these features. I have also notified the maintainer of the EB Garamond font.
Favicons are those little website icons you see in your tab bar. Inspired by gwern.net and Wikipedia, I decided to show favicons next to links. Including favicons has several benefits, from “the reader often knows what to expect” to “it just looks nice.”
I wrote a server-side html transformation implementing the following algorithm (sketched in code after the list):
Takes as input a semi-processed html syntax tree,
Finds all of the link elements,
Checks what favicon (if any) is available for each,
Downloads the favicon if needed,
Appends a favicon <img> element after the link.
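In sketch form, assuming a hast tree and a hypothetical faviconPath helper (the real transform also handles downloading and caching):

```typescript
import type { Element, Root } from "hast"
import { h } from "hastscript"
import { visit } from "unist-util-visit"

// Hypothetical lookup: return the cdn path for a hostname's favicon, or
// null if no favicon could be found or downloaded
declare function faviconPath(hostname: string): string | null

export function addFavicons(tree: Root): void {
  visit(tree, "element", (node: Element) => {
    const href = node.properties?.href
    if (node.tagName !== "a" || typeof href !== "string" || !href.startsWith("http")) return
    const src = faviconPath(new URL(href).hostname)
    if (src === null) return
    // Append the favicon image after the link's text
    node.children.push(h("img", { src, className: ["favicon"] }))
  })
}
```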
There remains a wrinkle: how can I ensure the favicons look good? As gwern noted, inline favicons sometimes appear on the next line (detached from their link). This looks bad—just like it would look bad if your browser displayed the last letter of a word on the next line, all on its own.
To tackle this, the favicon transformation doesn’t just append an <img> element. Basically, I make a new <span> which acts as a “favicon sandwich”, packaging both the last few letters of the link text and then the favicon <img> element. The <span> is styled so that if the favicon element is wrapped, the last few letters will be wrapped as well. To ensure legibility in both light and dark mode, I also dynamically style certain favicons, including this site’s own favicon.
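The sandwich itself might be built like this (a sketch of the idea, not my exact markup):

```typescript
import { h } from "hastscript"

// Wrap the link's final characters and the favicon in a no-wrap span, so
// a line break can never strand the favicon alone on the next line
function faviconSandwich(lastChars: string, faviconSrc: string) {
  return h("span", { style: "white-space: nowrap;" }, [
    lastChars,
    h("img", { src: faviconSrc, className: ["favicon"] }),
  ])
}
```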
gwern apparently initially tried using css rules. But for static websites (like turntrout.com and gwern.net), I think my approach is simpler. As my site incorporates more links, the css complexity doesn’t grow at all. Dom rendering is done server-side. I don’t have to decide whether a domain is sufficiently common to merit a new favicon—my site displays all available favicons. One downside: unfamiliar one-off favicons add a bit of page clutter, since they provide no useful information to readers who don’t recognize them.
I confess that I don’t fully understand gwern’s successor approach. It seems like more work, but perhaps it’s more appropriate for their use cases!
I love these “admonition” bubbles which contain information. When an admonition is collapsed by default, the reader can decide whether or not they want more detail on a topic, reducing ambient frustration.
All admonitions for my site
Abstract
Note
Info
Example
Math
Quote
A man may take to drink because he feels himself to be a failure, and then fail all the more completely because he drinks. It is rather the same thing that is happening to the English language. It becomes ugly and inaccurate because our thoughts are foolish, but the slovenliness of our language makes it easier for us to have foolish thoughts. The point is that the process is reversible.
Often, websites embed diagrams as images. However, I find this unsatisfying for several reasons:
Inconsistent styling as several different diagram suites may be used to generate images—the diagrams often use different color palettes,
Bloated page size from embedding sparse graphical information into dense image data, and
Inability to adapt to shifts between light and dark mode.
Mermaid diagrams fix these problems. The main downside was the extra difficulty of generating diagrams, but modern multimodal llms can easily take an image of a diagram and output valid Mermaid code. The diagrams are rendered server-side, avoiding a bulky JavaScript download.
Quartz comes with interactive popover previews for internal links, such as footnotes or section references. Desktop users can view popovers by hovering over an internal link. The site’s favicon appears for links to other pages on the site, while a separate icon marks within-page links.
Search
Search also comes packaged in vanilla Quartz: my site is searchable with live content previews, rendering the entire page in the desktop view. To accord with classic keybindings, I ensured that the search window can be toggled by pressing /.
Metadata
Every page has an html description and tags (if appropriate), along with a table of contents which (on desktop) highlights the current section. I track the original publication date and display when each page was last modified by a git push to the main branch. I also support “sequences” of blog posts:
I made a Markdown plugin which lets me specify spoilers by starting the line with >!. The results are unobtrusive but pleasant:
Have you heard? Snape kills Dumbledore.
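A sketch of how such a plugin can work (illustrative names; the real plugin differs in its details):

```typescript
import type { Blockquote, Paragraph, Root, Text } from "mdast"
import { visit } from "unist-util-visit"

// Treat blockquotes whose text begins with "!" (i.e. lines starting ">!")
// as spoilers: strip the marker and tag the node for css styling
export function spoilers() {
  return (tree: Root) => {
    visit(tree, "blockquote", (node: Blockquote) => {
      const paragraph = node.children[0] as Paragraph | undefined
      const first = paragraph?.children[0] as Text | undefined
      if (!first || first.type !== "text" || !first.value.startsWith("!")) return
      first.value = first.value.slice(1).trimStart()
      node.data = { ...node.data, hProperties: { className: ["spoiler"] } } as Blockquote["data"]
    })
  }
}
```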
Server-side math rendering via KaTeX
I initially chose KaTeX over MathJax due to its faster client-side rendering speed. However, I now render the KaTeX server-side so all the client has to do is download katex.min.css (27kb). Easy.
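Server-side rendering boils down to one call per math node; this sketch uses KaTeX's documented renderToString API (the surrounding ast walk is elided):

```typescript
import katex from "katex"

// Convert a TeX string to html at build time; the browser then only
// needs katex.min.css, not the KaTeX JavaScript bundle
export function renderMath(tex: string, displayMode: boolean): string {
  return katex.renderToString(tex, { displayMode, throwOnError: false })
}
```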
I quickly learned the importance of comprehensive tests and documentation. The repository now has very strong code health. My test suite protects my site from so many errors. Before a new commit touches the live site, it must pass a gauntlet of challenges:
The pre-commit git hook runs before every commit is finalized.
The pre-push hook runs before commits are pushed to the main branch.
GitHub Actions ensure that the site still works properly on the remote server.
Lastly, external static analysis alerts me to potential vulnerabilities and anti-patterns. If somehow a bad version slips through anyways, Cloudflare allows me to instantly revert the live site to a previous good version.
lint-staged improves the readability and consistency of my code. While I format some filetypes on save, there are a lot of files and a lot of types. Therefore, my package.json specifies what linting & formatting tools to run on what filetypes:
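An illustrative configuration (hypothetical globs and commands, not my exact package.json):

```json
{
  "lint-staged": {
    "*.{ts,tsx}": ["eslint --fix", "prettier --write"],
    "*.scss": ["stylelint --fix"],
    "*.py": ["black"]
  }
}
```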
Whenever I find a bug, I attempt to automatically detect it in the future. The result is this long pipeline of checks, designed to surface errors which would take a long time to notice manually. The push operation is aborted if any of the following checks[^7] fail.
I run eslint --fix to automatically fix up my TypeScript files. By using eslint, I maintain a high standard of code health, avoiding antipatterns such as declaring variables using the any type. I also run stylelint --fix to ensure scss quality, and I require that pylint rates my code health at 10/10.
I use mypy to statically type-check my Python code. Since my JavaScript files are actually TypeScript, the compiler already raises exceptions when there’s a type error.
I run a multi-purpose spellchecking tool. The tool maintains a whitelist dictionary which the user adds to over time. Potential mistakes are presented to the user, who indicates which ones are real. The false positives are ignored next time. The spellchecker also surfaces common hiccups like “the the.”
I then lint my Markdown links for probable errors. I found that I might mangle a Markdown link as [here's my post on shard theory](shard-theory). However, the link url should start with a slash: /shard-theory. My script catches these. I check the yaml metadata, ensuring that each article has required fields filled in (like title and description). I also check that no pages attempt to share a url.
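The link check is essentially a RegEx scan; here's a hedged sketch (not the actual script):

```typescript
// Flag Markdown links that are neither absolute urls, anchors, mailto
// links, nor slash-prefixed internal paths — e.g. [post](shard-theory)
const suspiciousLink = /\[[^\]]*\]\((?!https?:\/\/|\/|#|mailto:)[^)]+\)/g

function findBadLinks(markdown: string): string[] {
  return markdown.match(suspiciousLink) ?? []
}
```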
I check that my KaTeX expressions avoid using \tag{...}, as that command wrecks the formatting in the rendered html.
I lastly check that my css defines font-faces using fonts which actually exist in the filesystem.
As of first posting, I have 843 JavaScript unit tests and 164 pytest Python tests. I am quite thorough—these tests are my pride and joy. Writing tests is easy these days. I use Cursor—AI churns out dozens of high-coverage lines of test code in seconds, which I then skim for quality assurance.
Pure unit tests cannot test the end-to-end experience of my site, nor can they easily interact with a local server. playwright lets me test dynamic features like search, spoiler blocks, and light /dark mode. What’s more, I test these features across a range of browsers and viewport dimensions (mobile vs desktop).
Many errors cannot be caught by unit tests. For example, I want to ensure that my site keeps looking good—this cannot (yet) be automated. To do so, I perform visual regression testing. The testing also ensures that the overall site theme is retained over time and not nuked by unexpected css interactions.
I use playwright to interact with my website and argos-ci to take stable pictures of the website. playwright renders the site at pre-specified locations, at which point argos-ci takes pictures and compares those pictures to previously approved reference pictures. If the pictures differ by more than a small percentage of pixels, I’m given an alert and can view a report containing the pixel-level diffs. Using argos-ci helps reduce flakiness and track the evolution of the site.
playwright and argos-ci can tell you “hey, did you mean for your picture of a mountain to now have snow on it?”.
However, it’s not practical to test every single page. So I have a test page which stably demonstrates site features. My tests screenshot that page from many angles. I also use visual regression testing to ensure the stability of features like search.
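A representative test might look like this (a sketch; the url and test names are assumptions):

```typescript
import { test } from "@playwright/test"
import { argosScreenshot } from "@argos-ci/playwright"

// Screenshot the demo page; argos-ci diffs it against the approved baseline
test("visual regression: test page", async ({ page }) => {
  await page.goto("http://localhost:8080/test-page")
  await argosScreenshot(page, "test-page")
})
```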
At this point, I also check that the server builds properly.
My goal is a zero-hassle process for adding assets to my website. In order to increase resilience, I use Cloudflare R2 to host assets which otherwise would bloat the size of my git repository.
I edit my Markdown articles in Obsidian. When I paste an asset into the document, the asset is saved in a special asset_staging/ directory. Later, when I move to push changes to my site, the following algorithm runs:
Move any assets from asset_staging/ to a slightly more permanent static/ asset directory, updating any filepath references in the Markdown articles;
Compress all relevant assets within static/, updating filepath references appropriately;
Run exiftool to strip Exif metadata from images, preventing unintended information leakage (sketched in code after this list);
Upload the assets to assets.turntrout.com, again updating references in the Markdown files;[^8]
Copy the assets to my local mirror of my R2 asset bucket (in case something happens to Cloudflare).
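For instance, the Exif-stripping step reduces to a loop like this (a sketch; the glob pattern and paths are assumptions):

```typescript
import { execSync } from "node:child_process"
import { globSync } from "glob"

// Strip all metadata in place; "-overwrite_original" skips backup copies
for (const image of globSync("static/**/*.{avif,png,jpg,jpeg}")) {
  execSync(`exiftool -all= -overwrite_original "${image}"`)
}
```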
While this pipeline took several weeks of part-time coding to iron out, I’m glad I took the time.
Over time, links decay and rot, eventually emitting 404 errors. Unlike gwern, I do not yet have a full solution to this problem. However, links I control should never 404.
Reordering elements in <head> to ensure social media previews
I want nice previews for my site. Unfortunately, the behavior was flaky—working on Facebook, not on Twitter, not on Slack, working on Discord… Why? I had filled out all of the OpenGraph fields.
Apparently, Slack only reads the metadata from the first portion of the <head>. However, my OpenGraph <meta> tags were further back, so they weren’t getting read in. Different sites read different lengths of the <head>, explaining the flakiness.
The solution: Include tags like <meta> and <title> as early as possible in the <head>. As a post-build check, I ensure that these tags are confined to the first 9kb of each file.
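The check amounts to a substring scan over each page's first 9kb (a sketch; the exact tags I verify are my guess here):

```typescript
import fs from "node:fs"

export function checkSocialMetadata(filePath: string): void {
  // Slack-like scrapers may only read the beginning of the file, so the
  // social tags must appear within the first 9kb
  const head = fs.readFileSync(filePath, "utf8").slice(0, 9 * 1024)
  for (const needle of ["<title>", 'property="og:title"', 'property="og:image"']) {
    if (!head.includes(needle)) {
      throw new Error(`${filePath}: ${needle} missing from the first 9kb`)
    }
  }
}
```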
Updating page metadata
For posts which are being pushed for the first time, I set their publication date. For posts which have been updated since the last push, I update their “last updated” date.
Cryptographic timestamping
I concatenate the sha-1 commit hashes of all commits being pushed to main and hash their concatenation with sha-256. Using a slight variant of gwern’s timestamping procedure, I use OriginStamp to commit the sha-256 hash to the blockchain by the next day.
By committing the hash to the blockchain, I provide cryptographic assurance that I have in fact published the claimed commits by the claimed date. This reduces (or perhaps eliminates) the possibility of undetectably “hiding my tracks” by silently editing away incorrect or embarrassing claims after the fact, or by editing my commit history.
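The hashing step itself is simple (a sketch; the git range is an assumption, and the OriginStamp submission is elided):

```typescript
import { createHash } from "node:crypto"
import { execSync } from "node:child_process"

// Concatenate the sha-1 hashes of the commits being pushed, then take the
// sha-256 of that concatenation
const shas = execSync("git log --format=%H origin/main..HEAD", { encoding: "utf8" })
const digest = createHash("sha256").update(shas.trim().split("\n").join("")).digest("hex")
console.log(`hash to timestamp: ${digest}`)
```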
DeepSource provides two tools: a verbose linter which surfaces a huge range of antipatterns (for example, in Python it points out variables which are redeclared from an outer scope), and an autofix tool which—for a subset of issues—can create a pull request fixing them.
I try to keep the repository clean of DeepSource issues, but it does point out a lot of unimportant issues (which I ignore). Sadly, their command-line tool cannot be configured to only highlight sufficiently important problems. So the DeepSource analysis is not yet part of my automated pre-push hook.
Emma Fickel decisively pushed me to create this site, which has been one of my great joys of 2024. The LessWrong moderators helped me export my post data. Chase Denecke provided initial encouragement and expertise. Garrett Baker filed several bug reports. Thomas Kwa trialed an integration of Plot.ly graphs.
[^1]: I counted my commits by running `git log --author="Alex Turner" --oneline | wc -l`.
[^2]: Examples of content which is not hosted on my website: there are several <iframe> embeds (e.g. Google forms and such). I also use the privacy-friendlier umami.is analytics service—the script is loaded from their site.
[^5]: Safari does support hevc-encoded mp4s, but only if they are tagged with hvc1 and not hev1. To “autoplay” these mp4s, I had to include the src= attribute in the video tag and then wait for the user to interact with the page. Apparently Firefox doesn’t support hevc, so I’ll need to add alternative Firefox-compatible <source/>s.
[^6]: 60 characters per line seemed awkwardly narrow to me, so I went for 75 per line.
[^7]: For clarity, I don’t present the pre-push hook operations in their true order.
[^8]: When I upload assets to Cloudflare R2, I have to be careful. By default, the upload will overwrite existing assets. If I have a namespace collision and accidentally overwrite an older asset which happened to have the same name, there’s no way for me to know without simply realizing that an older page no longer shows the older asset. For example, links to the older asset would still validate under linkchecker. Therefore, I disable overwrites by default and instead print a warning that an overwrite was attempted.