Static Site Optimization: Tenfold Acceleration

Original author: JonLuca De Caro
JonLuca De Caro, the author of this article (we publish a translation of it today), once found himself abroad and wanted to show a friend his personal web page. It was an ordinary static site, but during the demonstration it became clear that everything was working far more slowly than one would expect.


The site used no dynamic mechanisms: there was a bit of animation and a responsive design, but its content almost never changed. The author says that a quick analysis of the situation literally horrified him. The DOMContentLoaded event took about 4 seconds to fire, and a full page load took 6.8 seconds. Loading involved 20 requests and transferred about a megabyte of data in total. And this was a static site. JonLuca then realized he had previously considered his site incredibly fast only because he was used to a low-latency gigabit connection, working from Los Angeles against a server located in San Francisco. Now he was in Italy on an 8 Mbps connection, and that completely changed the picture.

In this article, JonLuca De Caro explains how he managed to speed up his static site roughly tenfold.

Overview


This is how the site looked in terms of the number of requests needed to render it, its load time, and the volume of downloaded data at the point when it became clear that it needed optimization.


Data from an unoptimized site
When I saw all this, I got to work optimizing my site. Until then, whenever I needed to add a library or anything else to the site, I would simply pull it in with a src="…" attribute and move on. I paid no attention to performance and gave no thought to anything that affects it: caching, inlining, lazy loading.

Then I went looking for people who had run into the same thing. Unfortunately, publications on static site optimization go out of date very quickly. Recommendations from 2010-2011, for example, either discuss specific libraries or assume that no libraries are used at all, and many of them simply repeat the same sets of rules over and over again.

However, I did find two excellent sources of information: High Performance Browser Networking and a publication by Dan Luu. Although I did not go as far as Dan in dissecting the site's content and formatting, I managed to make my page load about ten times faster. DOMContentLoaded now fires in roughly a fifth of a second, and a full page load takes 388 ms (these figures are not entirely precise, because they are affected by the lazy loading discussed below).


Optimization Results
Now let's talk about how these results were achieved.

Site analysis


The first step in the optimization process was profiling the site. I wanted to understand what exactly took the most time and how best to parallelize the work. I used various tools to profile the site and checked how it loads from different places around the world. Here is the list of resources I used:


Some of these tools offered recommendations for improving the site. But when a static site takes several dozen requests to render, there is plenty to work on, from 90s-era GIF tricks to removing unused resources (for example, I was loading 6 fonts while using only one of them).


A timeline for a site very similar to mine. I did not take a screenshot of my own site in time, but what is shown here looks almost identical to what I saw a few months ago while analyzing it.
I wanted to optimize everything I could influence: from the content of the site and the speed of its scripts to the web server settings (Nginx in my case) and DNS.

Optimization


▍Minification and bundling of resources


The first thing I noticed was that the page made a lot of requests for CSS and JS files (without persistent HTTP connections), pointing at a variety of resources, some of them served over HTTPS. That meant the total download time also included the round trips for numerous requests to CDNs and ordinary servers and for their responses. On top of that, some JS files requested further files, producing the cascade of blocking requests shown in the figure above.

To combine everything I needed into a single JS file, I used webpack. Every time I changed the page content, webpack automatically minified all the dependencies and collected them into one file. Here is my webpack.config.js:

const UglifyJsPlugin = require('uglifyjs-webpack-plugin');
const ZopfliPlugin = require("zopfli-webpack-plugin");

module.exports = {
  entry: './js/app.js',        // single entry point that pulls in all JS, CSS and font dependencies
  mode: 'production',
  output: {
    path: __dirname + '/dist',
    filename: 'bundle.js'      // everything ends up in one file
  },
  module: {
    rules: [{
      test: /\.css$/,
      loaders: ['style-loader', 'css-loader']  // bundle CSS and inject it at runtime
    }, {
      test: /(fonts|images)/,
      loaders: ['url-loader']                  // inline fonts and images into the bundle
    }]
  },
  plugins: [new UglifyJsPlugin({
    test: /\.js($|\?)/i                        // minify the JS
  }), new ZopfliPlugin({                       // emit precompressed .gz copies next to the originals
    asset: "[path].gz[query]",
    algorithm: "zopfli",
    test: /\.(js|html)$/,
    threshold: 10240,          // only compress files larger than ~10 KB
    minRatio: 0.8
  })]
};

I experimented with various options and ended up with a single bundle.js file, loaded as a blocking resource in the <head> of my site. Its size was 829 KB, and that included everything except images: fonts, CSS, all the libraries and dependencies, and my own JS code. The vast majority of that, 724 of 829 KB, was the Font Awesome fonts.

Next, I went through the Font Awesome library and removed everything except the three icons actually in use: fa-github, fa-envelope, and fa-code. To extract only the icons I needed, I used the Fontello service. As a result, the downloaded file shrank to just 94 KB.

The site cannot render correctly with only its HTML and CSS, so I did not try to fight the blocking download of bundle.js. Its load time came to roughly 118 ms, more than an order of magnitude better than before.
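For reference, here is a sketch of what that looks like in the markup; the file name comes from the webpack output path above, and the rest of the head contents are omitted:

<head>
    <script src="dist/bundle.js"></script>
</head>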

This approach brought several additional benefits. Since I no longer reached out to third-party servers or CDNs, loading my page into the browser no longer required:

  • Making DNS queries to third-party hosts.
  • Establishing HTTP connections to those hosts.
  • Waiting for the full download of the required data from them.

While CDNs and distributed caching can make sense for large-scale web projects, my small static site gains nothing from a CDN. Giving it up, at the cost of a little effort on my part, saved roughly another hundred milliseconds, which, in my opinion, justifies that effort.

▍ Image Compression


Continuing the analysis, I found that the page was loading an 8-megabyte portrait of me that was displayed at 10% of its original width and height. This is not merely a lack of optimization; it shows an almost complete indifference on the developer's part to the user's bandwidth.


I compressed all the images used on the site with https://webspeedtest.cloudinary.com/. The service recommended switching to the WebP format, but I wanted the page to be compatible with as many browsers as possible, so I settled on JPEG. It is possible to serve WebP images only to browsers that support the format, but I was aiming for maximum simplicity and decided that the benefits of that extra layer of abstraction were not worth the effort required to get them.
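For what it's worth, the browser-dependent approach described above can be done purely in markup with a <picture> element; the file names here are illustrative, not the ones used on the site:

<picture>
    <source srcset="images/headshot.webp" type="image/webp">
    <img src="images/headshot.jpg" alt="Portrait">
</picture>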

▍ Improving the web server: HTTP/2, TLS, and more


When I started optimizing the server, the first thing I did was move to HTTPS. Initially I was running plain Nginx on port 80, simply serving files from /var/www/html. Here is my original nginx.conf:

server{
    listen 80;
    server_name jonlu.ca www.jonlu.ca;

    root /var/www/html;
    index index.html index.htm;
    location ~ /.git/ {
          deny all;
    }
    location ~ / {
        allow all;
    }
}

So I started by setting up HTTPS and redirecting all HTTP requests to HTTPS. I obtained a TLS certificate from Let's Encrypt (a wonderful organization that, incidentally, recently started signing wildcard certificates). Here is the updated nginx.conf:

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name jonlu.ca www.jonlu.ca;

    root /var/www/html;
    index index.html index.htm;

    location ~ /.git {
        deny all;
    }
    
    location / {
        allow all;
    }

    ssl_certificate /etc/letsencrypt/live/jonlu.ca/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/jonlu.ca/privkey.pem; # managed by Certbot
}
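The redirect itself is not shown above; a minimal sketch of the accompanying port-80 server block (assuming a standard Certbot-style redirect, with the names taken from the config above) might look like this:

server {
    listen 80;
    listen [::]:80;
    server_name jonlu.ca www.jonlu.ca;

    # send all plain-HTTP traffic to the HTTPS server block above
    return 301 https://$host$request_uri;
}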

With the http2 parameter added, Nginx can take advantage of the most useful features of HTTP/2. Note that to use HTTP/2 (which grew out of SPDY), you must serve over HTTPS; details can be found here. You can also use HTTP/2 server push with directives like http2_push images/Headshot.jpg;.
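For illustration, such a push directive lives inside a location block; a sketch (with assumed paths, not the site's actual layout) could look like this:

location = /index.html {
    http2_push /dist/bundle.js;       # push the bundle along with the HTML
    http2_push /images/Headshot.jpg;  # the image mentioned in the text
}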

Note that using gzip together with TLS raises the risk of a BREACH attack on a web resource. In my case this is a static site, so the risk is very low, and I am comfortable leaving compression on.

▍Use of caching and compression directives


What else can be done with Nginx alone? The first things that come to mind are caching and compression directives.

Previously, my server delivered plain, uncompressed HTML to clients. A single gzip on; line cut the transferred size from 16,000 bytes to 8,000, a 50% reduction.

This can be improved further with the gzip_static on; directive, which tells Nginx to look for precompressed versions of files before compressing anything on the fly. This ties in with the webpack configuration above: the ZopfliPlugin precompresses all the files at build time, which saves computing resources on the server and allows maximum compression without a speed penalty.

In addition, my site changes fairly rarely, so I wanted its resources cached for as long as possible: a returning visitor then does not need to re-download all the materials (especially bundle.js).

Here is the server configuration (nginx.conf) I ended up with:

worker_processes auto;
pid /run/nginx.pid;
worker_rlimit_nofile 30000;

events {
    worker_connections 65535;
    multi_accept on;
    use epoll;
}

http {

    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    # Turn off server tokens that reveal the nginx version
    server_tokens off;

    open_file_cache max=200000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    add_header Referrer-Policy "no-referrer";

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /location/to/dhparam.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';

    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_stapling on;
    ssl_stapling_verify on;
    add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains; preload';

    ssl_certificate /location/to/fullchain.pem;
    ssl_certificate_key /location/to/privkey.pem;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;
    gzip_disable "msie6";

    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;
    gzip_min_length 256;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

Note that I won't walk through every one of the settings above, which include TCP tuning, gzip directives, and file caching; if you want to learn more about them, take a look at this material on Nginx configuration.

And here is the server block from my nginx.conf:

server {
    listen 443 ssl http2;

    server_name jonlu.ca www.jonlu.ca;

    root /var/www/html;
    index index.html index.htm;

    location ~ /.git/ {
        deny all;
    }

    location ~* /(images|js|css|fonts|assets|dist) {
        gzip_static on; # Tells nginx to look for compressed versions of all requested files first
        expires 15d; # 15 day expiration for all static assets
    }

}

▍ Lazy loading


Finally, I made one small change to the site that significantly improved things. There are 5 images on the page that are only visible after clicking the corresponding thumbnails, yet they were being downloaded along with the rest of the site's content, because their paths sat directly in <img align="center" src="…"> tags.

I wrote a small script to modify the corresponding attribute of every element with the lazyload class. Now those images are loaded only when their thumbnails are clicked. Here is what lazyload.js looks like:

$(document).ready(function() {
    $("#about").click(function() {
        $('#about > .lazyload').each(function() {
            // set the img element's src from its data-src attribute
            $(this).attr('src', $(this).attr('data-src'));
        });
    });

    $("#articles").click(function() {
        $('#articles > .lazyload').each(function() {
            // set the img element's src from its data-src attribute
            $(this).attr('src', $(this).attr('data-src'));
        });
    });

});

The script looks up the <img> elements and sets their src from data-src, so the images are downloaded only when they are needed rather than along with the site's main content.
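A sketch of the markup this script expects (with hypothetical paths, not the ones on the site):

<div id="about">
    <!-- no src yet: nothing is downloaded until #about is clicked -->
    <img class="lazyload" data-src="images/project.jpg" alt="Project screenshot">
</div>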

Further improvements


I have several more improvements in mind that could speed up page loading further. The most interesting are probably using a service worker to intercept network requests, which would let the site work even without an Internet connection, and caching at a CDN, which would save users located far from my server in San Francisco the round trip to it. These are worthwhile ideas, but they are not particularly important in my case, since this is a small static site that serves as an online resume.
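For the curious, a minimal sketch of the service-worker idea (not the author's code; asset paths are assumed for illustration) might look like this:

// sw.js: cache core assets at install time and serve them cache-first afterwards
const CACHE = 'static-site-v1';
const ASSETS = ['/', '/index.html', '/dist/bundle.js']; // assumed paths

self.addEventListener('install', event => {
    event.waitUntil(caches.open(CACHE).then(cache => cache.addAll(ASSETS)));
});

self.addEventListener('fetch', event => {
    event.respondWith(
        caches.match(event.request).then(cached => cached || fetch(event.request))
    );
});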

Summary


The optimizations above reduced my page load time from 8 seconds to about 350 milliseconds on first visit, and to an almost unbelievable 200 milliseconds on subsequent visits. As you can see, achieving these improvements did not take all that much effort or time. And for anyone interested in website optimization, I recommend reading this.

Dear readers! How do you optimize your sites?