Image handling and compression in static websites

What are the different techniques used for handling images in static websites that are published from a local or remote repository of source files?

This is a collection of notes on the subject.

Problem space

Publishing a folder of markdown files as an HTML website is fairly straightforward and requires, at the very least, some form of parsing and transformation.

The markdown source files have a small footprint and usually increase in size when transformed to HTML.

Often, I want to include images in my files, and I would like to keep the high-resolution image files adjacent to the appropriate markdown files.

These files are often large, uncompressed, and altogether unsuitable for publishing on a website. I would like to serve up resized and compressed image files to the end user instead.

Solution space

There are many possible approaches to this. I will attempt to map some of these out in the hopes of finding, piecing together, or coming up with a new way of publishing a static website with compressed images.

What are the steps that need to be taken?

Locally, in development, I don’t mind serving up the full high-resolution images (this might change).

I also want to be able to include direct links to the high-resolution images.

In the published output, the original links will have to be replaced with the URI(s) of the compressed version(s).
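
As a sketch, the rewrite could be as simple as a regex pass over the markdown source before rendering. The `toCompressedUri` mapping below is hypothetical; what it returns depends on which of the options below is chosen, and a real implementation would more likely hook into the markdown/rehype pipeline.

```ts
// Minimal sketch: rewrite markdown image links to their compressed URIs.
// The regex only handles the basic ![alt](path) syntax.
function rewriteImageLinks(
  markdown: string,
  toCompressedUri: (originalPath: string) => string
): string {
  return markdown.replace(
    /!\[([^\]]*)\]\(([^)\s]+)\)/g,
    (_match, alt: string, path: string) =>
      `![${alt}](${toCompressedUri(path)})`
  );
}

// Hypothetical mapping: local high-res paths onto an /img/ prefix as WebP.
const rewritten = rewriteImageLinks(
  '![A photo](./photos/sunset.jpg)',
  (p) => p.replace('./photos/', '/img/').replace(/\.jpe?g$/, '.webp')
);
```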

  1. Using a command line tool to resize and compress images when publishing the site.
    • These could be stored in a .gitignored folder
  2. Manually compressing the images and storing them next to the original.
  3. Uploading the original images to a specific cloud provider.
  4. Uploading the images to a static bucket and using an image transform CDN (is this even an option?)

Option 1

Compressing images is quite resource-heavy, and doing this every time the site is published is a bad and expensive idea.

The build script could check whether a compressed file already exists remotely and skip the compression, only linking to the existing compressed image.
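
A rough sketch of that skip logic, using the sharp library and a local .gitignored output folder. The folder name, 1200px width, and WebP quality 80 are all placeholder choices, and the existence check here is local for simplicity; a genuinely remote check would issue a HEAD request against the published URL instead.

```ts
import { existsSync, mkdirSync } from 'node:fs';
import path from 'node:path';
import sharp from 'sharp';

// Compress an image only if no compressed version exists yet.
// Output lands in a .gitignored folder so it never pollutes the repo.
async function compressIfMissing(original: string): Promise<string> {
  const outDir = 'dist-images';
  const outFile = path.join(
    outDir,
    path.basename(original).replace(/\.\w+$/, '.webp')
  );

  if (existsSync(outFile)) return outFile; // already done: skip the expensive work

  mkdirSync(outDir, { recursive: true });
  await sharp(original)
    .resize({ width: 1200, withoutEnlargement: true }) // never upscale
    .webp({ quality: 80 })
    .toFile(outFile);
  return outFile;
}
```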

Option 2

Manually compressing the images is pretty much out of the question. I’ve listed it mostly to rule it out.

Option 3

Using a cloud image provider might be the easiest solution. Originals could be uploaded once, stored under a unique key (filename or hash), and the provider’s API could be queried at build time to figure out whether an image still needs uploading.
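
With Cloudinary (listed below) this could look roughly like the following, using a content hash as the key so the existence check doubles as change detection. The hashing scheme is my assumption, not an established convention; the SDK reads credentials from the CLOUDINARY_URL environment variable.

```ts
import { createHash } from 'node:crypto';
import { readFileSync } from 'node:fs';
import { v2 as cloudinary } from 'cloudinary';

// Upload an original once, keyed by its content hash.
// Returns the delivery URL either way.
async function ensureUploaded(localPath: string): Promise<string> {
  const hash = createHash('sha1')
    .update(readFileSync(localPath))
    .digest('hex');

  try {
    // The Admin API throws a 404 error if nothing exists under this ID.
    const existing = await cloudinary.api.resource(hash);
    return existing.secure_url;
  } catch {
    const uploaded = await cloudinary.uploader.upload(localPath, {
      public_id: hash,
    });
    return uploaded.secure_url;
  }
}
```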

This approach is very flexible, and we can request a new size at any point in the future.
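
With Cloudinary, for example, the size is expressed in the delivery URL, so requesting a new variant needs no re-upload (cloud name and public ID below are placeholders):

```
https://res.cloudinary.com/<cloud-name>/image/upload/w_800,q_auto,f_auto/<public-id>
```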

Some downsides are minimum running costs, less control over content delivery and proprietary APIs.

The originals are stored and backed up locally (with the rest of the files), so having the images on ‘someone else’s computer’ is not a problem.

We’re not locked into a particular provider, but we will have to build some customisation to make it work.

Potential providers are:

  • Cloudinary
  • AWS media transform(?)
  • Imgix

See also https://github.com/google/eleventy-high-performance-blog

Resources

https://github.com/jlengstorf/rehype-local-image-to-cloudinary

#definition