Amalgamation

This page contains all posts amalgamated into a single page SQL-style. You can use this if you want to doomscroll all my posts, I guess. For me, it was a nice way to learn about how to improve load times using content-visibility.

A surprisingly simple way to package Deno applications for Nix

Introduction

Recently, I was working on a Deno project which I wanted to package for Nix. Usually, when packaging a piece of software for Nix, there exists a language-specific stdenv.mkDerivation derivative which bridges the gap between the language-specific package manager and Nix. These are functions like buildNpmPackage and buildPythonPackage but, alas, there is no buildDenoPackage.

Deno is particularly tricky (compared to, for example, TCL) because it uses “URL imports” to import directly from URLs at runtime. Doing so is obviously not deterministic, which means that bundling Deno applications becomes a bit of a challenge.

In this post, I will go over why none of the existing, community-driven solutions worked for me, what I did instead, and some of the potential drawbacks of my solution.

Existing solution

During my initial research, I found this thread discussing my exact issue: wrapping Deno applications in Nix. The thread settles on using deno2nix. deno2nix works by parsing the lockfiles that Deno generates1 and generating a matching Nix derivation.

There’s a lot of work involved in what deno2nix does; it has to parse Deno’s lockfile format, clean it up, then generate a matching Nix derivation. All of this code has potential for bugs. Nothing illustrates this better than this issue. It essentially boils down to Deno’s resolution algorithm setting a different User-Agent header than the Nix builder did; esm.sh was using the User-Agent to send different content to Deno than to the browser.

The underlying issue here is that deno2nix is trying to replicate the exact behavior of Deno, which is a hard task.

deno2nix also does not support NPM modules (i.e. imports using an npm: specifier) at the time of writing. Adding support would likely double the amount of code in the repo, since NPM packages are handled entirely differently in both the lockfile format and Deno’s resolution algorithm.

My solution

After fighting with deno2nix for a while, I decided to take a different approach.

Deno supports a pretty niche subcommand: deno vendor. This command downloads all dependencies of the given file into a folder. This is called vendoring, hence the name of the command. It also generates an import map2 which can be used to make Deno use these local dependencies rather than fetching them over the network.

This command is very convenient for us because we can use it to download and pin our dependencies ahead of time. To keep evaluation pure, we can fix the hash of the output (i.e. make it a fixed-output derivation).

In case this sounds too abstract, here’s an example. Suppose we have a simple program which just prints a random string. main.ts just contains:

import { bgMagenta } from "https://deno.land/[email protected]/fmt/colors.ts";
import { generate } from "https://esm.sh/randomstring";

const s = generate();
console.log("Here is your random string: " + bgMagenta(s));

First, we’ll build the vendor directory. We pull out the src attribute into a separate variable, as it is shared between both derivations. The fact that we specify the outputHash attribute means that this is going to be a fixed-output derivation. As such, the builder will be allowed network access in return for guaranteeing that the output has a specific hash.

# This could of course be anywhere, like a GitHub repository.
src = ./.;

# Here we build the vendor directory as a separate derivation.
random-string-vendor = stdenv.mkDerivation {
  name = "random-string-vendor";

  nativeBuildInputs = [ deno ];

  inherit src;
  buildCommand = ''
    # Deno wants to create cache directories.
    # By default $HOME points to /homeless-shelter, which isn't writable.
    HOME="$(mktemp -d)"

    # Build vendor directory
    deno vendor --output=$out $src/main.ts
  '';

  # Here we specify the hash, which makes this a fixed-output derivation.
  # When inputs have changed, outputHash should be set to empty, to recalculate the new hash.
  outputHashAlgo = "sha256";
  outputHashMode = "recursive";
  outputHash = "sha256-a4jEqwyp5LoORLYvfYQmymzu9448BoBV5luHnt4BbMg=";
};

Let’s try building this and taking a peek inside. In the transcript below, you will see that the output contains a directory hierarchy corresponding to our dependencies. It also contains import_map.json at the top level.

$ nix-build vendor.nix
/nix/store/…-random-string-vendor
$ tree /nix/store/…-random-string-vendor
/nix/store/…-random-string-vendor
├── deno.land
│   └── [email protected]
│       └── fmt
│           └── colors.ts
├── esm.sh
│   ├── v135
│   │   ├── @types
│   │   │   └── [email protected]
│   │   │       └── index.d.ts
│   │   ├── [email protected]
│   │   │   └── denonext
│   │   │       └── randombytes.mjs
│   │   └── [email protected]
│   │       └── denonext
│   │           └── randomstring.mjs
│   ├── [email protected]
│   └── [email protected]
└── import_map.json

Now we can build the actual application. We are going to create a little wrapper script which will invoke Deno with the right arguments. We use --import-map to have Deno use our local dependencies and --no-remote to force Deno not to fetch dependencies at run-time, in case random-string-vendor is outdated (i.e. doesn’t include all dependencies imported by the script).

random-string = writeShellScript "random-string" ''
  ${deno}/bin/deno run \
    --import-map=${random-string-vendor}/import_map.json \
    --no-remote \
    ${src}/main.ts -- "$@"
'';

That’s basically all there is to it! The great thing about this approach is that it (by definition) uses Deno’s exact resolution algorithm. We don’t run into trouble with esm.sh because Deno sets the correct UA. That’s an entire class of bugs eliminated!
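If you end up packaging more than one Deno program this way, the two derivations can be folded into a small helper. The sketch below shows roughly how I’d factor it; the name buildDenoScript is made up (nixpkgs has no such function), and it assumes the same deno, stdenv and writeShellScript as the snippets above.

buildDenoScript = { name, src, entrypoint, vendorHash }:
  let
    # Same fixed-output vendoring derivation as above, parameterised.
    vendor = stdenv.mkDerivation {
      name = "${name}-vendor";
      nativeBuildInputs = [ deno ];
      inherit src;
      buildCommand = ''
        # Deno wants a writable $HOME for its caches.
        HOME="$(mktemp -d)"
        deno vendor --output=$out $src/${entrypoint}
      '';
      outputHashAlgo = "sha256";
      outputHashMode = "recursive";
      outputHash = vendorHash;
    };
  in
  # Same wrapper script as above, pointed at the vendored dependencies.
  writeShellScript name ''
    ${deno}/bin/deno run \
      --import-map=${vendor}/import_map.json \
      --no-remote \
      ${src}/${entrypoint} -- "$@"
  '';

The example from before would then just be buildDenoScript { name = "random-string"; inherit src; entrypoint = "main.ts"; vendorHash = "sha256-a4jEqwyp5LoORLYvfYQmymzu9448BoBV5luHnt4BbMg="; }.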

Shortcomings

It’s not all sunshine and rainbows, though. There are some significant drawbacks to this approach which I will go over in this section.

First of all, the vendor subcommand is woefully undercooked. npm: specifiers are just silently ignored. This is outlined in this issue, which has been open for quite some time. In general, it doesn’t seem like this command has been getting a whole lot of love since its introduction, probably on account of being so niche.

Nevertheless, when Deno does finally get support for vendoring NPM modules, this approach will automatically support them as well. This is in stark contrast with deno2nix, which would require a lot of work to support npm: specifiers.

The second major issue is that this approach doesn’t make good use of caching. The random-string-vendor derivation we constructed above is essentially a huge blob; if we change a single dependency, the entire derivation is invalidated. If I understand deno2nix correctly, it actually makes a derivation for each dependency and then uses something akin to symlinkJoin to combine them. Such an approach allows individual dependencies to be cached and shared in the Nix store.
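To make that concrete, here is a rough sketch of the idea (not deno2nix’s actual code; the URL, version and hash are made up for illustration): each remote file becomes its own fixed-output fetch, and the results are linked together into one vendor-like directory.

let
  # One fixed-output fetch per dependency. Changing one dependency
  # only invalidates this one small derivation.
  colors-ts = pkgs.fetchurl {
    url = "https://deno.land/std@0.200.0/fmt/colors.ts";
    hash = pkgs.lib.fakeHash; # placeholder; replace with the real hash
  };
in
pkgs.linkFarm "vendored-deps" [
  { name = "deno.land/std@0.200.0/fmt/colors.ts"; path = colors-ts; }
  # ...one entry per dependency
]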

The issue of caching is tangentially related to some of the issues outlined by @volth’s Status of lang2nix approaches. A lot of their criticism also applies here.

Conclusion

In this post I have described a simple approach to packaging Deno applications for Nix. I much prefer it to deno2nix simply because I understand exactly how it works. Even then, there are some major drawbacks to using this method. Before implementing this approach in your project, you should consider if those trade-offs make sense for you.

  1. For the uninitiated, I suggest reading the official introduction to Deno’s lockfiles. In essence, lockfiles are just a mapping from URLs to their expected hashes. Their purpose is to lock dependencies to specific versions. Since this sounds a lot like what Nix is trying to do (though admittedly at a much smaller scale), they are usually the input for the various mkDerivation derivatives.

  2. Import maps allow you to tell Deno “when you see an import statement for A, you should actually import B.” The actual file is just a JSON object where the keys are A and the values are B. If this is new to you, you might want to check out the official documentation.

I made a thing and it sucks ass

It is not a big thing. It’s just a little utility that shows a hover preview when the mouse hovers over a link. I think it’s kind of useful for checking where a link goes when I see text like “You can read more about that here”.

A screenshot of a blog post. The mouse hovers a link. A popup above the link shows the title and description of the page

The thing is, it didn’t take me very long to make this. An hour or two including setting up the user script manager and whatnot. However, the issue is: I just keep finding edge cases. Here’s a couple of examples:

Each of these issues on their own isn’t the end of the world, but the cumulative time spent fixing bugs just isn’t worth it for a small, semi-useful utility.

This whole ordeal reminded me a lot of my uwuifier extension; a silly idea which ended up taking multiple iterations over the span of two years, APIs spanning the entire history of the DOM, and a way too intimate understanding of the DOM tree’s structure. And it’s still not finished! I stuck with that project, because I thought it was funny and some of my friends were using it, but I just don’t have that motivation this time.

The tl;dr is that I made a kind of useful utility, but to implement it fully would require a disproportionate amount of work. So much work that I don’t think I’ll finish the thing. And that’s why I hate making stuff for the web.

Link rot and the inevitable heat death of the internet

Introduction

Yesterday I was reading the slides for Maciej Ceglowski’s talk on The Website Obesity Crisis. It’s a really good talk. I highly recommend giving it a read (or a watch). You might be a bit skeptical since it was written in 2015, but I think it is still highly relevant, even to HTML kiddies1 such as myself. Much like all good dystopian works, it identified the beginnings of a trend which has now become such a huge issue that the original work seems almost prophetic.

Anyway, in one of the slides he talks about an experiment by some Adam Drake guy.

Adam Drake wrote an engaging blog post about analyzing 2 million chess games. Rather than using a Hadoop cluster, he just piped together some Unix utilities on a laptop, and got a 235-fold performance improvement over the ‘Big Data’ approach.

It seemed pretty interesting, so I clicked the link and… nothing? The link just took me to his homepage2. After a bit of detective work I figured out that the server was just redirecting me to the homepage instead of showing a 404. Specifically, the curl output below shows that any request made to his old domain (aadrake.com) is met with a 301 “Moved Permanently” pointing to the index page of his new domain (adamdrake.com), completely disregarding the actual resource being requested.

Curl output

Emphasis mine.

$ curl -v http://aadrake.com/command-line-tools-can-be-235x-faster-than-your-hadoop-cluster.html
*   Trying 192.64.119.137:80...
* Connected to aadrake.com (192.64.119.137) port 80 (#0)
> GET /command-line-tools-can-be-235x-faster-than-your-hadoop-cluster.html HTTP/1.1
> Host: aadrake.com
> User-Agent: curl/8.1.1
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Date: Sun, 22 Oct 2023 21:12:20 GMT
< Content-Type: text/html; charset=utf-8
< Content-Length: 56
< Connection: keep-alive
< Location: https://adamdrake.com
< X-Served-By: Namecheap URL Forward
< Server: namecheap-nginx
<
<a href='https://adamdrake.com'>Moved Permanently</a>.
* Connection #0 to host aadrake.com left intact

He is not the only one doing this. I’ve encountered multiple websites with this 404-is-index strategy and I hate it. It’s the HTTP equivalent of going “what? nah bro i never said that”. Why is your server gaslighting me?

Link rot

Even then, it is not the end of the world. It is just mildly annoying. I do, however, think that it is part of the much larger issue of link rot. The term “link rot” refers to the tendency for hyperlinks to ‘go bad’ over time, as the resources are relocated or taken offline. It is a kind of digital entropy that is slowly engulfing much of the older/indie web.

The 404-is-index strategy is a particularly bad case of link rot. Unlike a usual 404, which presents the reader with some kind of error page, a 301 happens quietly. For example, many bookmark managers will quietly update references when they encounter a 301. This means that your bookmarks just disappear because they’ve been ‘moved’ to the website’s index page.

Maciej Ceglowski’s talk is yet another example. His link to aadrake.com was quietly broken, and with that another edge in the huge directed graph that is the open web was lost.

What can I do about it?

Okay, so link rot is bad. How can I avoid further contributing to link entropy? Just to be clear, the “I” in the section title does not refer to you, the reader, and the question above wasn’t rhetorical. I’m not going to pretend to have all the answers, and this most certainly isn’t a how-to guide. Think of it more like a public diary of my feeble attempts to fight my own inevitable breaking of this site.

I have tried to be very careful about the layout of the site, URI-wise. For example, all posts are located at /posts/<glob>.html and I never change the glob. I also try to avoid leaking implementation details in the URL. This is rather easy since this is a static site, but in the future I might add dynamic elements. In that case I’ll try not to introduce extensions like .cgi or .php that couple the URL to the underlying tech stack.

Still, I fear it might not be enough. After all, perfection is the enemy of progress. As I learn and become a better web developer I will surely notice flaws in my original layout. Already, I am growing a tad annoyed at the fact that images and CSS are grouped under /assets/. It’s rather arbitrary to decide that images and CSS are “assets” but HTML isn’t.

Maybe then I can use 301s for good; as resources are relocated, I can maintain a list of rewrite rules. Perhaps I could model it on database migrations. In the database world, when one modifies a schema (e.g. adding or removing a column), that change is associated with a bit of code specifying how to go from one schema to another. That way, all the data (existing hyperlinks) remains valid as the schema (site layout) changes.
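As a sketch of that idea (the paths are made up, and this assumes the site is served by nginx on NixOS, which mine happens to be), such rewrite rules could live right next to the rest of the server configuration:

services.nginx.virtualHosts."example.org".locations = {
  # A relocated stylesheet gets an honest 301 pointing at its new home,
  # instead of the server quietly serving the index page.
  "= /assets/styles/site.css".return = "301 /styles/site.css";

  # A whole relocated directory can be migrated with a regex location.
  "~ ^/assets/images/(.*)$".return = "301 /images/$1";
};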

There are also more radical approaches like IPFS. In this protocol, resources are addressed by a content ID computed from their content rather than their location. That way, multiple peers can host it, reducing the likelihood of the website ever disappearing or going down. It’s all very smart, but I doubt we can convince everyone to switch protocols just like that.

Conclusion

The tl;dr is this: There’s an issue on the web where links tend to ‘go bad’. That is, the location of the underlying resources change and hyperlinks pointing to the old location are broken. This problem is especially prevalent in the IndieWeb community because we don’t have teams of engineers managing our sites and checking for backwards compatibility.

So far my only solution is “be very careful” which isn’t a very inspiring conclusion. In the coming weeks I’ll look into making some automated testing, so I can be notified when I accidentally change the site layout in such a way that it breaks old links.

  1. For lack of a better terminology I’m just going to reuse that of script kiddies.

  2. A home that included the phrase “While I specialize in executive advising on leadership and process, I can also dive into deep technical problems with Data Science”, a sentence so douchebag-techbro-y it made me reconsider whether tech was really the industry for me.

How to treat libgit2 blobs as file handles

I figured I’d document the solution to my hyper-specific problem in case anyone in the future has the same issue.

So here’s the setup: I am using libgit2 to operate on the contents of some files stored in a repository and I would like to pass the contents of a blob (i.e. a piece of data in the Git store) to foo, but libgit2 only lets me access the content of the blob through git_blob_rawcontent which returns a char * and foo only operates on FILE *s.

The POSIX standard comes to the rescue! It defines the special function fmemopen which allows one to construct a file handle from a piece of memory. Here’s an example from the docs:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char buffer[] = "foobar";

int main (void)
{
    int ch;
    FILE *stream;

    stream = fmemopen(buffer, strlen (buffer), "r");
    if (stream == NULL) {
        perror("failed to fmemopen buffer");
        exit(EXIT_FAILURE);
    }

    while ((ch = fgetc(stream)) != EOF) {
        printf("Got %c\n", ch);
    }

    fclose(stream);
    return EXIT_SUCCESS;
}

It produces the following output.

Got f
Got o
Got o
Got b
Got a
Got r

Just what I was looking for! With this I can wrap the result of git_blob_rawcontent in a FILE * and pass it to foo.

#include <git2.h>
#include <stdio.h>

git_blob *blob = /* ... */;
/* Wrap the blob's raw content in a read-only FILE handle. */
FILE *fp = fmemopen((void *)git_blob_rawcontent(blob), git_blob_rawsize(blob), "rb");
foo(fp);
fclose(fp);

Hopefully this is the solution to your hyper-specific problem as well :^)

Common font fallbacks

I quite like @yesiamrocks’s CSS fallback fonts repository. It contains a lot of common CSS fallback chains. My only gripe is that I can’t see the fonts in use. Here, I’ve taken the liberty of converting the Markdown to some HTML with examples of each CSS chunk.

Keep in mind that if you don’t have the fonts installed on your system, you will see the first fallback that is installed. Any browser worth its salt will let you see this using its development tools. For example, here’s how to do it in Firefox.

Regrettably, it is not possible to highlight failing fonts, as doing so would allow for easy fingerprinting. This exact issue has been discussed by the CSS Working Group.

This document is split into 3 sections:

  • Sans-serif fonts
  • Serif fonts
  • Monospace fonts

Sans-serif fonts

Arial

To use Arial on your webpage, copy the following CSS rule.

body {
	font-family: Arial, "Helvetica Neue", Helvetica, sans-serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Arial Black

To use Arial Black on your webpage, copy the following CSS rule.

body {
	font-family: "Arial Black", "Arial Bold", Gadget, sans-serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Arial Narrow

To use Arial Narrow on your webpage, copy the following CSS rule.

body {
	font-family: "Arial Narrow", Arial, sans-serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Arial Rounded MT Bold

To use Arial Rounded MT Bold on your webpage, copy the following CSS rule.

body {
	font-family: "Arial Rounded MT Bold", "Helvetica Rounded", Arial, sans-serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Century Gothic

To use Century Gothic on your webpage, copy the following CSS rule.

body {
	font-family: "Century Gothic", CenturyGothic, AppleGothic, sans-serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Calibri

To use Calibri on your webpage, copy the following CSS rule.

body {
	font-family: Calibri, Candara, Segoe, "Segoe UI", Optima, Arial, sans-serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Candara

To use Candara on your webpage, copy the following CSS rule.

body {
	font-family: Candara, Calibri, Segoe, "Segoe UI", Optima, Arial, sans-serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Avant Garde

To use Avant Garde on your webpage, copy the following CSS rule.

body {
	font-family: "Avant Garde", Avantgarde, "Century Gothic", CenturyGothic, AppleGothic, sans-serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Helvetica

To use Helvetica on your webpage, copy the following CSS rule.

body {
	font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Franklin Gothic Medium

To use Franklin Gothic Medium on your webpage, copy the following CSS rule.

body {
	font-family: "Franklin Gothic Medium", "Franklin Gothic", "ITC Franklin Gothic", Arial, sans-serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Futura

To use Futura on your webpage, copy the following CSS rule.

body {
	font-family: Futura, "Trebuchet MS", Arial, sans-serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Impact

To use Impact on your webpage, copy the following CSS rule.

body {
	font-family: Impact, Haettenschweiler, "Franklin Gothic Bold", Charcoal, "Helvetica Inserat", "Bitstream Vera Sans Bold", "Arial Black", sans-serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Tahoma

To use Tahoma on your webpage, copy the following CSS rule.

body {
	font-family: Tahoma, Verdana, Segoe, sans-serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Segoe UI

To use Segoe UI on your webpage, copy the following CSS rule.

body {
	font-family: "Segoe UI", Frutiger, "Frutiger Linotype", "Dejavu Sans", "Helvetica Neue", Arial, sans-serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Geneva

To use Geneva on your webpage, copy the following CSS rule.

body {
	font-family: Geneva, Tahoma, Verdana, sans-serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Optima

To use Optima on your webpage, copy the following CSS rule.

body {
	font-family: Optima, Segoe, "Segoe UI", Candara, Calibri, Arial, sans-serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Gill Sans

To use Gill Sans on your webpage, copy the following CSS rule.

body {
	font-family: "Gill Sans", "Gill Sans MT", Calibri, sans-serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Trebuchet MS

To use Trebuchet MS on your webpage, copy the following CSS rule.

body {
	font-family: "Trebuchet MS", "Lucida Grande", "Lucida Sans Unicode", "Lucida Sans", Tahoma, sans-serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Lucida Grande

To use Lucida Grande on your webpage, copy the following CSS rule.

body {
	font-family: "Lucida Grande", "Lucida Sans Unicode", "Lucida Sans", Geneva, Verdana, sans-serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Verdana

To use Verdana on your webpage, copy the following CSS rule.

body {
	font-family: Verdana, Geneva, sans-serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Serif fonts

Big Caslon

To use Big Caslon on your webpage, copy the following CSS rule.

body {
	font-family: "Big Caslon", "Book Antiqua", "Palatino Linotype", Georgia, serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Didot

To use Didot on your webpage, copy the following CSS rule.

body {
	font-family: Didot, "Didot LT STD", "Hoefler Text", Garamond, "Times New Roman", serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Lucida Bright

To use Lucida Bright on your webpage, copy the following CSS rule.

body {
	font-family: "Lucida Bright", Georgia, serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Baskerville

To use Baskerville on your webpage, copy the following CSS rule.

body {
	font-family: Baskerville, "Baskerville Old Face", "Hoefler Text", Garamond, "Times New Roman", serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Hoefler Text

To use Hoefler Text on your webpage, copy the following CSS rule.

body {
	font-family: "Hoefler Text", "Baskerville Old Face", Garamond, "Times New Roman", serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Goudy Old Style

To use Goudy Old Style on your webpage, copy the following CSS rule.

body {
	font-family: "Goudy Old Style", Garamond, "Big Caslon", "Times New Roman", serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Cambria

To use Cambria on your webpage, copy the following CSS rule.

body {
	font-family: Cambria, Georgia, serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Rockwell

To use Rockwell on your webpage, copy the following CSS rule.

body {
	font-family: Rockwell, "Courier Bold", Courier, Georgia, Times, "Times New Roman", serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Times New Roman

To use Times New Roman on your webpage, copy the following CSS rule.

body {
	font-family: TimesNewRoman, "Times New Roman", Times, Baskerville, Georgia, serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Perpetua

To use Perpetua on your webpage, copy the following CSS rule.

body {
	font-family: Perpetua, Baskerville, "Big Caslon", "Palatino Linotype", Palatino, "URW Palladio L", "Nimbus Roman No9 L", serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Bodoni MT

To use Bodoni MT on your webpage, copy the following CSS rule.

body {
	font-family: "Bodoni MT", Didot, "Didot LT STD", "Hoefler Text", Garamond, "Times New Roman", serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Georgia

To use Georgia on your webpage, copy the following CSS rule.

body {
	font-family: Georgia, Times, "Times New Roman", serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Palatino

To use Palatino on your webpage, copy the following CSS rule.

body {
	font-family: Palatino, "Palatino Linotype", "Palatino LT STD", "Book Antiqua", Georgia, serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Rockwell Extra Bold

To use Rockwell Extra Bold on your webpage, copy the following CSS rule.

body {
	font-family: "Rockwell Extra Bold", "Rockwell Bold", monospace;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Garamond

To use Garamond on your webpage, copy the following CSS rule.

body {
	font-family: Garamond, Baskerville, "Baskerville Old Face", "Hoefler Text", "Times New Roman", serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Book Antiqua

To use Book Antiqua on your webpage, copy the following CSS rule.

body {
	font-family: "Book Antiqua", Palatino, "Palatino Linotype", "Palatino LT STD", Georgia, serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Calisto MT

To use Calisto MT on your webpage, copy the following CSS rule.

body {
	font-family: "Calisto MT", "Bookman Old Style", Bookman, "Goudy Old Style", Garamond, "Hoefler Text", "Bitstream Charter", Georgia, serif;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Monospace fonts

Lucida Console

To use Lucida Console on your webpage, copy the following CSS rule.

body {
	font-family: "Lucida Console", "Lucida Sans Typewriter", monaco, "Bitstream Vera Sans Mono", monospace;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Andale Mono

To use Andale Mono on your webpage, copy the following CSS rule.

body {
	font-family: "Andale Mono", AndaleMono, monospace;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Courier New

To use Courier New on your webpage, copy the following CSS rule.

body {
	font-family: "Courier New", Courier, "Lucida Sans Typewriter", "Lucida Typewriter", monospace;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Monaco

To use Monaco on your webpage, copy the following CSS rule.

body {
	font-family: monaco, Consolas, "Lucida Console", monospace;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Consolas

To use Consolas on your webpage, copy the following CSS rule.

body {
	font-family: Consolas, monaco, monospace;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Lucida Sans Typewriter

To use Lucida Sans Typewriter on your webpage, copy the following CSS rule.

body {
	font-family: "Lucida Sans Typewriter", "Lucida Console", monaco, "Bitstream Vera Sans Mono", monospace;
}

The following is an example of the font in use.

The quick brown fox jumped over the lazy dog.

Conclusion

Those are all the fonts @yesiamrocks included! I’ll leave you to your decision paralysis now…

How to give Simple Voice Chat microphone permissions on MacOS

tl;dr: Click here to view a step-by-step guide

If your voice chat doesn’t work and you’re getting an error complaining about MacOS permissions, you can execute the following code in Terminal.app to give the Minecraft launcher the correct permissions. After executing the code, restart your computer.

sqlite3 "/Users/$USER/Library/Application Support/com.apple.TCC/TCC.db" <<EOF
INSERT INTO access VALUES(
	'kTCCServiceMicrophone',        -- service
	'com.mojang.minecraftlauncher', -- client
	0, -- client_type (0 = bundle id)
	2, -- auth_value (2 = allowed)
	3, -- auth_reason (3 = user set)
	1, -- auth_version (always 1)
	-- csreq:
	X'fade0c00000000a80000000100000006000000060000000600000006000000020000001c636f6d2e6d6f6a616e672e6d696e6563726166746c61756e636865720000000f0000000e000000010000000a2a864886f763640602060000000000000000000e000000000000000a2a864886f7636406010d0000000000000000000b000000000000000a7375626a6563742e4f550000000000010000000a48523939325a454145360000',
	NULL,       -- policy_id
	NULL,       -- indirect_object_identifier_type
	'UNUSED',   -- indirect_object_identifier
	NULL,       -- indirect_object_code_identity
	0,          -- flags
	1612407199, -- last_modified
	NULL,     -- pid (no idea what this does)
	NULL,     -- pid_version (no idea what this does)
	'UNUSED', -- boot_uuid (no idea what this does)
	0         -- last_reminded
);
EOF

This is confirmed to be working on the following software versions.

  • MacOS 14.5 (23F79)
  • Minecraft 1.21.1
  • Fabric 0.16.4
  • Simple Voice Chat 2.5.21

It is probably going to break slightly in future updates to MacOS. In that case see the rest of this post.

Yesterday I wanted to play Minecraft on a server that was using the Simple Voice Chat plugin. However, when I joined the server, I got a warning message about Minecraft not having microphone permissions. This makes sense: MacOS applications have to explicitly request permission to do stuff like listening to the microphone, and the Minecraft launcher doesn’t have any reason to request that permission, so it doesn’t have it!

The recommended solution on Simple Voice Chat’s wiki is to use Prism, a custom launcher. I didn’t quite feel like installing and learning some random launcher just to fix this one issue, so I started looking around for other solutions. MacOS has to be storing the permissions somewhere, and if I could just manually add Minecraft to it, I wouldn’t have to go through Prism.

After a bit of searching I found this article which explains that TCC is the mechanism by which MacOS manages permissions and it stores all its per-user data in a file located1 at /Users/$USER/Library/Application Support/com.apple.TCC/TCC.db. This file is actually just an SQLite database which we can modify using a generic SQLite tool like sqlite3.

$ sqlite3 "/Users/$USER/Library/Application Support/com.apple.TCC/TCC.db"

Executing the above will open an interactive SQL REPL. We can see all the tables contained in the database with a special command.

sqlite> .table
access            active_policy     expired
access_overrides  admin             policies

The article also explained that the table we’re mainly interested in is called access. We can see its schema using .schema. Of note are the fields service and client which specify the permission and application respectively. See the article for the meaning of the rest of the columns.

sqlite> .schema access
CREATE TABLE access (
	service        TEXT        NOT NULL,
	client         TEXT        NOT NULL,
	client_type    INTEGER     NOT NULL,
	auth_value     INTEGER     NOT NULL,
	auth_reason    INTEGER     NOT NULL,
	auth_version   INTEGER     NOT NULL,
	csreq          BLOB,
	policy_id      INTEGER,
	indirect_object_identifier_type    INTEGER,
	indirect_object_identifier         TEXT NOT NULL DEFAULT 'UNUSED',
	indirect_object_code_identity      BLOB,
	flags          INTEGER,
	last_modified  INTEGER     NOT NULL DEFAULT (CAST(strftime('%s', 'now') AS INTEGER)),
	pid INTEGER,
	pid_version INTEGER,
	boot_uuid TEXT NOT NULL DEFAULT 'UNUSED',
	last_reminded INTEGER NOT NULL DEFAULT 0,
	PRIMARY KEY (service, client, client_type, indirect_object_identifier),
	FOREIGN KEY (policy_id) REFERENCES policies(id) ON DELETE CASCADE ON UPDATE CASCADE
);

Now all we have to do is give kTCCServiceMicrophone permissions to com.mojang.minecraftlauncher by inserting it in the access table as if the permission had been requested and granted by the user. We can do that with the following query:

INSERT INTO access VALUES(
	'kTCCServiceMicrophone',        -- service
	'com.mojang.minecraftlauncher', -- client
	0, -- client_type (0 = bundle id)
	2, -- auth_value (2 = allowed)
	3, -- auth_reason (3 = user set)
	1, -- auth_version (always 1)
	-- csreq:
	X'fade0c00000000a80000000100000006000000060000000600000006000000020000001c636f6d2e6d6f6a616e672e6d696e6563726166746c61756e636865720000000f0000000e000000010000000a2a864886f763640602060000000000000000000e000000000000000a2a864886f7636406010d0000000000000000000b000000000000000a7375626a6563742e4f550000000000010000000a48523939325a454145360000',
	NULL,       -- policy_id
	NULL,       -- indirect_object_identifier_type
	'UNUSED',   -- indirect_object_identifier
	NULL,       -- indirect_object_code_identity
	0,          -- flags
	1612407199, -- last_modified
	NULL,     -- pid (no idea what this does)
	NULL,     -- pid_version (no idea what this does)
	'UNUSED', -- boot_uuid (no idea what this does)
	0         -- last_reminded
);

Generating a value for the csreq column was a little tricky. Luckily this Stackoverflow post has the answer. It basically boils down to this2:

REQ_STR=$(codesign -d -r- /Applications/Minecraft.app/ 2>&1 | awk -F ' => ' '/designated/{print $2}')
echo "$REQ_STR" | csreq -r- -b /tmp/csreq.bin
REQ_HEX=$(xxd -p /tmp/csreq.bin  | tr -d '\n')
echo "X'$REQ_HEX'"

If you looked closely, you probably also noticed that the schema I found has a few more columns than the one in the article. I assume these have been added in later MacOS updates. When constructing my INSERT query, I just left the values as NULL and 'UNUSED' because that’s what similar rows in the table seemed to be doing.

And that’s pretty much it! I didn’t know if/how I needed to restart TCC, so I just rebooted my computer. Afterwards I confirmed that Minecraft was now showing up under the microphone permission in settings.

  1. Here I’m using $USER as a placeholder for the current user’s username. It will be expanded automatically when you use it in Terminal. When doing so, be careful about the space in the path!

  2. Look at the Stackoverflow answer for an explanation of how the commands work.

The obligatory meta post

The current meta seems to be making personal websites. Everybody’s doing it and, if you are reading this, I am too. I hope this is the start of a lasting, healthy online presence.

Another trend I’m noticing with these online spaces is the tendency for the first (and often only) post to be about the site’s setup and such. Since I love talking about myself, here’s a little write up about this site’s current inner workings!

The server

First up, the hardware! This is probably the most boring part of the setup. My site is currently running on a shitty laptop sitting in my basement. The power cable is broken so if anyone even slightly nudges it, the computer shuts off instantly. Not exactly Production Quality 99.99% Uptime…

Picture of the man behind the camera giving a computer on a desk the middle finger

It would probably have been cheaper and easier to just rent one of those near-free VPSs somewhere but setting up this laptop was a pretty fun learning experience. Until then, I had never tried replacing the operating system on a computer. It was honestly pretty refreshing feeling like I was the master of the computer and not the other way around for a change.

The server is running behind a Cloudflare proxy to provide a basic level of security. I’ll refrain from further explanations of my networking stack due to some pretty glaring security issues which I’d rather not elaborate on…

NixOS

An old laptop running as a server isn’t that unusual. Much more unusual is the choice of operating system. Rather than opting for something like Ubuntu or Arch, I went with NixOS.

Both my ““server”” and my Macbook Pro have their configurations stored in a single monorepo. That approach definitely has its pros and cons: it’s nice being able to share overlays and packages between the two configurations but trying to reconcile NixOS and nix-darwin has proven to be quite a hassle. I definitely spent waaay more time than is reasonable figuring out how to manage such a monorepo, an issue that was not helped by Nix’s absolutely bonkers module system. Maybe I’ll talk more about my ambivalent thoughts on NixOS and the Nix ecosystem in some other post.
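For a rough idea of the shape, here is a simplified sketch of a flake tying the two together (this assumes flakes, and the host names and file paths are made up; it is not my actual config):

{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    nix-darwin.url = "github:LnL7/nix-darwin";
    nix-darwin.inputs.nixpkgs.follows = "nixpkgs";
  };

  outputs = { self, nixpkgs, nix-darwin, ... }: {
    # The laptop in the basement.
    nixosConfigurations.server = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [ ./hosts/server.nix ./modules/shared.nix ];
    };

    # The Macbook, via nix-darwin.
    darwinConfigurations.macbook = nix-darwin.lib.darwinSystem {
      system = "aarch64-darwin";
      modules = [ ./hosts/macbook.nix ./modules/shared.nix ];
    };
  };
}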

Once I had actually gotten NixOS configured and working, setting up the actual server was probably something like 7 LOC. Pretty simple, since running NGINX as a reverse proxy is a pretty common use case on NixOS1.
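For the curious, the whole thing really is about that short. The snippet below is a rough approximation of those lines (the upstream port is made up), not my literal config:

services.nginx = {
  enable = true;
  virtualHosts."linus.onl" = {
    # Proxy everything to the locally running site.
    locations."/".proxyPass = "http://127.0.0.1:8080";
  };
};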

Furthermore, if I ever decide to actually switch to a proper VPS like I should’ve done from the start, I can just rebuild my NixOS config on that machine! Magical!

linus.onl

Finishing off my mega-scuffed config, I obviously couldn’t go with a well established SSG like Hugo or Jekyll. Instead, I decided to take some inspiration from Karl B. and write my own bespoke build script.

I decided to try using TCL for implementing this script, figuring the language’s “everything is a string” philosophy2 would make it an excellent shell-script replacement. While that definitely was the case, the script actually ended up not relying that much on external tools as it grew.

While exploring the language, I learned that where TCL really shines is in its metaprogramming capabilities. I used those to add a pretty cool preprocessing phase to my post rendering pipeline: everything between a <? and a ?> is evaluated as TCL and embedded directly within the post. The preprocessor works in three steps. First it takes the raw markup, which looks like this:

# My post

Here's some *markdown* with __formatting__.

The current time is <?
    set secs [clock seconds]
    set fmt [clock format $secs -format %H:%M]
    emit $fmt
?>.

The parse procedure then turns that markup into a TCL program, which will generate the final markdown.

emit {# My post

Here's some *markdown* with __formatting__.

The current time is }

    set secs [clock seconds]
    set fmt [clock format $secs -format %H:%M]
    emit $fmt

emit .

That code is then evaluated in a child interpreter, created with interp create. All invocations of the emit procedure are then collected by collect_emissions into the following result:

# My post

Here's some *markdown* with __formatting__.

The current time is 02:46.

This is the final markup which is passed through a markdown renderer3 to produce the final html. This whole procedure is encapsulated in render_markdown.

Embedded TCL is immensely powerful. For example, the index and archive pages don’t receive any special treatment from the build system, despite containing a list of posts. How do they include the dynamic lists, then? The lists of posts that are displayed are generated by inline TCL:

# Archive

Here's a list of all my posts. All <? emit [llength $param(index)] ?> of them!

<?
	proc format_timestamp ts {
		return [string map {- /} [regsub T.* $ts {}]]
	}

	# NOTE: Should mostly match pages/index.md
	emitln <ul>
	foreach post $param(index) {
		lassign $post path title id created updated
		set link [string map {.md .html} $path]
		emitln "<li>[format_timestamp $created]: <a href=\"[escape_html $link]\">[escape_html $title]</a></li>"
	}
	emitln </ul>
?>

And that code sample was generated inline too!! The code above is guaranteed to always be 100% accurate, because it just reads the post source straight from the file system.4 How cool is that!?

I quite like this approach of writing a thinly veiled program to generate the final HTML. In the future I’d like to see if I can entirely get rid of the markdown renderer.

P.S. Here’s a listing of the site’s source directory. Not for any particular reason other than that I spent 20 minutes figuring out how to get the <details> element to work.

Directory listing
linus.onl/
├── assets
│   ├── images
│   │   └── ahmed.jpg
│   └── styles
│       ├── normalize.css
│       └── site.css
├── pages
│   ├── about.md
│   ├── archive.md
│   └── index.md
├── posts
│   ├── first-post.md
│   ├── my-lovehate-relationship-with-nix.md
│   ├── second-post.md
│   ├── the-obigatory-metapost.md
│   └── third-post.md
├── Makefile
├── README.md
├── build.tcl
├── local.vim
└── shell.nix

Conclusion

All in all, this isn’t the most exotic setup, nor the most minimal, but it’s mine and I love it. Particularly the last bit about the build system. I love stuff that eats its own tail like that.

I hope this post was informative, or that you at least found my scuffed setup entertaining :)

  1. Actually, my setup is a little longer because I use a systemd service to fetch and rebuild the site every five minutes as a sort of poor-man’s replacement for an on-push deployment. Not my finest moment…

  2. Salvatore Sanfilippo (antirez) has written a great post about the philosophy of TCL. I highly recommend it, both as an introduction to TCL and as an interesting perspective on simplicity.

  3. Initially I was shelling out to smu but I switched to tcl-cmark because smu kept messing up multi-line embedded HTML tags.

  4. That does also mean that if the above sample is totally nonsensical, it’s because I changed the implementation of the archive page and forgot to update this post.

Use NeoVim anywhere on OSX

I often want to use NeoVim outside the terminal, when writing emails or notes. Luckily, I found Jamie Schembri’s post NeoVim everywhere on MacOS. I have made a few improvements with regards to stability and ““stability””.

‘Edit in NeoVim’ service

This workflow takes as input the current selection and outputs the text to replace it. It uses iTerm2 and NeoVim to edit the text.

If you haven’t used Automator before, I recommend following the official guide on how to create a Quick Action workflow. You’ll want to set Workflow receives current to “text” and check the box Output replaces selected text. Then add a Run AppleScript action to the workflow with the below code.

on readFile(unixPath)
	set fileDescriptor to (open for access (POSIX file unixPath))
	set theText to (read fileDescriptor for (get eof fileDescriptor) as «class utf8»)
	close access fileDescriptor
	return theText
end readFile

on writeTextToFile(theText, filePath, overwriteExistingContent)
	try
		-- Convert the file to a string
		set filePath to filePath as string

		-- Open the file for writing
		set fileDescriptor to (open for access filePath with write permission)

		-- Clear the file if content should be overwritten
		if overwriteExistingContent is true then set eof of fileDescriptor to 0

		-- Write the new content to the file
		set theText to theText as string
		write theText to fileDescriptor starting at eof as «class utf8»

		-- Close the file
		close access fileDescriptor

		-- Return a boolean indicating that writing was successful
		return true

		-- Handle a write error
	on error errMessage
		-- Close the file
		try
			close access file theFile
		end try

		display alert "Failed to write to file" message "Failed to write to file " & theFile & ": " & errMessage

		-- Return a boolean indicating that writing failed
		return false
	end try
end writeTextToFile

on run {input, parameters}
	-- Save the frontmost application for later.
	tell application "System Events"
		set activeProc to first application process whose frontmost is true
	end tell

	-- Write the selected text (input) to a temporary file.
	set tempfile to do shell script "mktemp -t edit-in-vim"
	if writeTextToFile(input, tempfile, true) is false then
		-- Failed to write the input to the file. The function has already
		-- displayed an error message, so let us just return the input unaltered.
		return input
	end if

	-- Edit that temporary file with Neovim under iTerm2.
	tell application "iTerm2"
		-- If General>Closing>'Quit when all windows are closed' is enabled,
		-- this will create two windows if iTerm2 was previously closed.
		--
		-- We use a custom profile (with a descriptive name) to reduce the
		-- risk of idiot Linus accidentally breaking something by changing
		-- the default profile.
		create window with profile "Rediger i Neovim (brugt af workflow)"

		tell the current window
			tell the current session
				-- Edit the file using Neovim. We set 'nofixeol' to avoid inserting
				-- extraneous linebreaks in the final output. We also set 'wrap' since
				-- we usually don't want to manually break lines in MacOS input fields.
				write text "nvim -c 'set nofixeol wrap' \"" & tempfile & "\""

				-- Wait for the editing process to finish.
				-- This requires shell-integration to be enabled.
				delay 0.5
				repeat while not is at shell prompt
					delay 0.2
				end repeat
			end tell

			-- Close the window we just created so it doesn't clutter up the desktop.
			close
		end tell
	end tell

	-- Switch back to the previously active application.
	tell application "System Events"
		set the frontmost of activeProc to true
	end tell

	-- The new text is stored in tempfile.
	return readFile(tempfile)
end run

The functions readFile and writeTextToFile come from the Mac Automation Scripting Guide. Do beware that the original writeTextToFile linked before doesn’t handle Unicode text properly.

The code for restoring the focused application was taken from a patch on the pass mail archives.

Neovim as a standalone application

This next snippet wraps NeoVim in a proper Application™. Doing this means it will be recognized by MacOS in various places. For example, when opening files in Finder it is now possible to choose NeoVim. I have set it as the default for all markdown files on my computer.

A more robust – and probably all around better solution – would be to use VimR. I have yet to test it, but I might replace this script with VimR in the future.

on run {input, parameters}
	if input is not {} then
		set filePath to POSIX path of input
		set cmd to "nvim \"" & filePath & "\""
	else
		set cmd to "nvim"
	end if

	tell application "iTerm2"
		create window with default profile
		tell the current window
			tell the current session
				write text cmd

				-- Wait for command to finish.
				repeat while not is at shell prompt
					delay 0.2
				end repeat
			end tell

			close
		end tell
	end tell
end run

Longing for (digital) community

There’s this tweet that has been stuck in my head lately.

What everyone wants to belong to is a community but they keep winding up in audiences instead and I think this is the cause of a tremendous amount of suffering right now.
— @girlziplocked September 2, 2020

I used to be part of a tight-knit online community and it was such a gratifying experience. For a few cozy years it felt like I was living the dream we were promised in the 90’s. The dream of a web where you could connect with like-minded persons from all over the world unrestricted by age, gender, or physical location.

We built stuff for each other in that community. Some of it was silly, some of it was useful, all of it was situated software. The code had soul. Sometimes we purposefully didn’t refactor a clumsy/slow piece of code because it was associated with a particularly fun in-joke or because someone was very proud of an algorithm they’d made. Using a video call and Live Share to collaboratively edit source was some of the most fun I’ve ever had on a computer.

As I write this, I keep wanting to use that same word to describe everything: ‘gratifying’. I think that word so perfectly captures the vibe of knowing the in-jokes, having relationships to other members, and feeling like your voice mattered.

That community eventually died. Not from some devastating event, just slowly over time as people’s attentions shifted. I’ve been trying to recreate that same feeling of belonging over on Cohost1 by posting frequently and interacting with other people’s posts. But I worry Cohost might be too loosely coupled a community. Social media just isn’t the same as the semi-public space of a chat room2.

  1. I’m @linuwus. Feel free to send an ask and say hello!

  2. Or maybe people just don’t want to talk to me, idk.

Documenting open source code

Today a friend of mine mentioned an issue they’d had with Radarr, the movie organizer for torrent users. Since it’s an open source project, I figured I could just fix the issue myself – easy peasy. But man, the code for Radarr is obtuse… Everything is encapsulated in a subclass of an abstract base class implementing an interface for a service (sic) and there’s not a single comment to be found.

Documentation of open source projects is yet another instance of the 80/20 rule: relatively little documentation goes a long way in helping new contributors find their footing in a project. It can be as simple as writing a couple of lines at the top of every file describing what the module does, or throwing a couple of readme.txts in the folder for each major component.

Wren is of course the ideal. Its source code reads more like a book than a program, though such a comparison is hardly fair; Wren is as much a tutorial as it is an actual live programming language1. Obviously, not every project can dedicate that much time to documentation.

The Neovim project strikes a much more reasonable balance. Let’s take a look at src/nvim/undo.c; it contains some rather tricky code for managing the editor’s multi-level undo tree. The reason I know this isn’t because I’m some kind of Neovim expert, I just read the comment at the top of the file!

// undo.c: multi level undo facility

That comment is followed by a longer comment explaining how the main data structure of the module works. I think that’s a much better use of the space at the top of the file than the ever-present license comment.

If the Radarr maintainers had been better about documenting their project, I would’ve already finished making my change. Instead, I’m stuck doing detective work trying to figure out what part of the code I’m even looking for. I do of course understand that writing documentation takes time but I think it is well worth it because it supercharges one-time contributors.

  1. As of c2a75f1e there are 3811 semicolons and 2665 comment lines. That’s a ratio of approximately 0.70 comments for every statement!

That’s it! There are no more posts…