Th3 Core

Why We Are Here => Web Development => Topic started by: Mackin USA on September 01, 2017, 11:37:37 AM

Title: Doc Sheldon POST > Making your Website Lightning-fast Isn't that Difficult
Post by: Mackin USA on September 01, 2017, 11:37:37 AM
There are many things you can do to speed up your pages' load time. Some are common knowledge; others not so much, for those who don't work with websites every day. Here are a few hints on things you can do to speed up your pages - a worthwhile effort, since Google has said they intend to eventually start using their mobile-first index to rank pages. That means page speed will be even more critical.

Title: Q
Post by: ergophobe on September 02, 2017, 06:00:51 PM
This may be the worst article I have read in 2017. It's not just that it is incomplete, some of the advice goes in the exact opposite direction of the desired effect.

Okay, let's start with this, which always makes me want to scream. There are hundreds of articles on the web pleading with people to stop spreading this disinformation, and yet it persists.

For most Internet purposes, it used to be the case that you'd gain nothing by saving images in 300dpi resolution.

And you would LOSE NOTHING. Images on the web show at the resolution you set for your screen, not the DPI. This is one of my pet peeves when people talk about optimization.

Question: How much water do you need?
Answer: 1kg per liter

Question: How far is it to New York?
Answer: 70mph

Question: How far does this car travel on a tank of gas?
Answer: 32 mpg

Question: How big does this image need to be?
Answer: 300DPI

Arrgghhh. You can set the resolution to anything you want and for the most part it will change nothing. The display resolution will always be determined by the resolution of the monitor, not the resolution set in the metadata of the image, which generally only comes into play when you print something on a printer that has a wide range of available resolutions. That may change if pixel sizes continue to drop and we see 300DPI monitors, but generally it still doesn't matter, because of the difference between CSS pixels and physical pixels.
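You can see that "DPI is just metadata" with a minimal hand-rolled PNG (a sketch, not a production encoder - stdlib only, grayscale, single IDAT). The pixel dimensions live in the IHDR chunk; setting 300DPI merely adds a 21-byte pHYs chunk and changes nothing else:

```python
import struct
import zlib

def png_chunk(ctype, data):
    # Every PNG chunk: 4-byte length, 4-byte type, data, CRC over type+data
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xffffffff))

def make_png(width, height, dpi=None):
    # Minimal 8-bit grayscale PNG: IHDR, optional pHYs, IDAT, IEND
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 0, 0, 0, 0)
    raw = b"".join(b"\x00" + b"\x80" * width for _ in range(height))
    chunks = [png_chunk(b"IHDR", ihdr)]
    if dpi is not None:
        ppm = round(dpi / 0.0254)  # pHYs stores pixels per metre
        chunks.append(png_chunk(b"pHYs", struct.pack(">IIB", ppm, ppm, 1)))
    chunks.append(png_chunk(b"IDAT", zlib.compress(raw)))
    chunks.append(png_chunk(b"IEND", b""))
    return b"\x89PNG\r\n\x1a\n" + b"".join(chunks)

plain = make_png(100, 100)
tagged = make_png(100, 100, dpi=300)
print(len(tagged) - len(plain))  # 21 bytes: the pHYs chunk, metadata only
```

Both files decode to the identical 100x100 pixel grid; a browser ignores pHYs entirely and paints pixels at the screen's resolution.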

If you don't need transparency, there's no reason to use a png file. They're nearly always larger files than jpg files

Again, total BS. PNG and JPEG use completely different compression algorithms. The "messier" the image (photo of a field vs clean vector-based logo) the better jpg does and the worse png does, but the opposite is true as well.

PNG is basically like a zip file - it looks for repeated bits of code that are exact matches and aggregates them. If you have a lot of exact matches, you'll get fantastic compression without any artifacts around letters. JPG works by looking for similar pixels and aggregating them, and as long as they are similar enough, you don't see the artifacts. But if you try this with anti-aliased text and you want high quality, JPG will be bigger unless the text is overlaid on a photo background.

Once you have your images set to the proper dimensions and file sizes, you can compress them to make them download even faster. For some files, gzip is the right choice, but gzip tends to make image files larger, rather than smaller. A good alternative is the online image compression tool, ShortPixel.

Maybe. If you have people uploading images without properly optimizing them manually in Photoshop, then this might help. But at the end of the day, JPG is JPG, and ShortPixel and similar services will not compress them further in my experience. You'll get a sense of this in your Pagespeed or Lighthouse reports, but quite often when I go in and try to optimize to the level suggested, the quality falls below a clear visual threshold. And often it's just a few bytes, and there are other optimizations that matter more.

Now, you can get benefits by converting to WebP and getting the same quality in a smaller package, but if you are already serious about optimizing, ShortPixel is probably going to be no help. In any case, every one of these I have tested has minimal effect unless people are uploading massively oversized images.
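The same reasoning explains why a second compression pass over an already-optimized JPG buys nothing: the payload is already entropy-coded and looks statistically random. A sketch, with gzipped random bytes standing in for an image file:

```python
import gzip
import os

# Stand-in for a JPEG/PNG payload: already-compressed data
image_like = gzip.compress(os.urandom(50000))

# A second compression pass only adds framing overhead
double = gzip.compress(image_like)

print(len(image_like), len(double))  # the second pass comes out LARGER
```

This is also why servers are configured not to gzip image responses: you burn CPU to grow the file.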

The beauty of vector graphics such as svg files is that they are very small files and they can be scaled up in size with no loss of quality from pixelation

Again, this needs to be tested. The more complex the vector, the less efficient SVG is. I have seen SVG images that weigh in at 4MB and which render perfectly fine at the size needed as a 30KB PNG. The designer and developer, buying into the "svg files... are very small files" line, didn't even check until I ran a Pagespeed report.

As with PNG vs JPG, there is no clear winner: sometimes SVG makes sense and sometimes it doesn't, but you can't say "If it's a vector-based line drawing, use SVG." Sometimes SVG will be *massively* larger than the corresponding PNG (and BTW, this is a case where PNG will almost always beat JPG).

JS, CSS and HTML can all be minified.... If you're running on WordPress ... go with an all-in-one plugin, like Autoptimize.

Yes. This will speed things up a lot, especially on WP. The equivalent is built into Drupal and put on steroids with AdvAgg, and I believe you can do the same via Cloudflare on any site.

Finally, something that is not BS

Combine CSS and JS. You can eliminate several server calls by combining your CSS and JS files

Usually way more important than the previous, but this can break your site. All JS-driven functionality needs to get tested.

As mentioned, gzip isn't for use on image files, but on HTML and scripts, it works a treat! It removes all the superfluous comments, line-breaks and indents

Seriously????? gzip does no such thing. gzip compresses text by finding repeated strings and replacing them with short back-references that get expanded back to the full text on decompression. It is lossless compression, so when decompressed, you have the same file you started with, including line breaks and comments.
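That it's lossless takes three lines to demonstrate - the comments and line breaks come back byte-for-byte:

```python
import gzip

css = b"/* header styles */\nbody {\n    color: red;\n}\n"
packed = gzip.compress(css)

# Lossless round trip: the decompressed bytes are identical, comments and all
restored = gzip.decompress(packed)
print(restored == css)  # True
```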

Minification removes line breaks and comments...
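A deliberately naive sketch of that minification step (real minifiers, like the ones Autoptimize bundles, handle strings, calc() and many more edge cases this toy ignores):

```python
import re

def minify_css(text):
    """Toy CSS minifier: strip comments, collapse whitespace.
    Illustration only - not safe for production stylesheets."""
    text = re.sub(r"/\*.*?\*/", "", text, flags=re.S)    # remove comments
    text = re.sub(r"\s+", " ", text)                     # collapse whitespace
    return re.sub(r"\s*([{}:;,])\s*", r"\1", text).strip()

print(minify_css("/* nav */\nul li {\n    margin: 0;\n}"))  # ul li{margin:0;}
```

Do this at build time (or via a plugin), then let gzip compress the already-minified output - the two steps are complementary, not interchangeable.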

Lazyloading is simply delaying the download of images until needed.

Yes. If possible. You can lazy load images, and you can delay loading other resources too. On one site where the client simply would not consider changing the front page, I delayed loading the navigation, which took the site from a full 10 seconds of white screen while their stupid menu, with its render-blocking scripts, loaded.

Another method of reducing server calls is to avoid the use of inline CSS wherever practical.

What is he a doctor OF exactly? He has this exactly backwards. This should read "Another method of reducing server calls is to promote the use of inline CSS."

Inline CSS reduces the number of server calls because it is sent out in the initial HTML document. We don't do this all the time because it increases the payload of that initial document. But if you have a one-page site or you really want your site to be fast on first page load and don't care about optimizations for subsequent loads by the same user, then inline CSS will *reduce* the number of server calls.

Google officially recommends that you *DO* inline your critical path CSS so that CSS that renders above the fold content gets sent down the pipeline with the first response.
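What "inline your critical path CSS" means mechanically, as a build-step sketch (the link tag and filename here are made up for illustration; real tools work out the critical rules by actually rendering the page):

```python
def inline_critical_css(html, critical_css):
    """Swap a render-blocking stylesheet link (hypothetical markup) for an
    inline <style> block, so above-the-fold CSS ships in the first response."""
    link_tag = '<link rel="stylesheet" href="critical.css">'
    return html.replace(link_tag, "<style>" + critical_css + "</style>")

page = '<head><link rel="stylesheet" href="critical.css"></head>'
print(inline_critical_css(page, "body{margin:0}"))
# <head><style>body{margin:0}</style></head>
```

One fewer render-blocking request on first paint; the full stylesheet can still load asynchronously for everything below the fold.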

Buckworks and I had a discussion here recently about critical path CSS. See here
Title: Re: Doc Sheldon POST > Making your Website Lightning-fast Isn't that Difficult
Post by: ergophobe on September 02, 2017, 06:17:14 PM
Oh, and what's missing?

Big things that are missing

1. Every bit of Javascript you add is slowing down your site, including all those damn tracking tags and analytics you think you need. Do it server-side if you can for traditional web pages (not counting Angular or React-driven front ends, which will just make JSON calls and need the JS to drive the page anyway).

2. Render-blocking scripts and CSS. This doesn't change the time to fully load the page, but it can have huge impacts on perceived loading and time to first interaction.

3. Server-side issues. These days, this is usually not the problem, but look for slow queries. Measure time to first byte. If it's really slow, you have an issue with a very slow script, very slow DNS lookups, a very slow server, or some combination.

4. Critical path components get loaded early, ideally inline.

5. Inline images in CSS. Small images that a few years ago we would have sprited can simply be BASE-64 encoded and included in your CSS. Like anything else, there are tradeoffs. You get rid of a request, but you bloat your CSS file. If you are serving your CSS from a static URL using gzip compression, the cost isn't that high - the BASE-64 encoded image served as a gzipped file may be similar in size to the underlying GIF*** or PNG, and if you are serving this CSS file on every page, it *will* be cached. So not bad in that situation.

6. HTTP/2 will help tremendously with HTTPS overhead and the overhead caused by many simultaneous requests. So if you *do* have a lot of requests for your page, HTTP/2 will be your friend.

7. This is PHP-specific, but he seems focused on WP, so that's fair. If time to first byte is high because you have a zillion plugins and you just can't part with them all, switch to PHP 7 - in general it is much faster, similar to PHP 5 with an opcode cache, but some plugins may break if they haven't been updated in a while.

If it's a rarely-used image and that CSS rule only fires on a couple of pages, then it's not worth it. But for a nav element or similar that appears on every page, inlining can be a good idea.

***GIF. We haven't talked about GIF, but at the smallest file sizes, GIFs can be the smallest. They are missing many features of PNG and lose their edge quickly because their compression is not as good, but they also have less "overhead" in the meta info, so at the smallest files, it can still be the optimal format.
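The trade-off in point 5 is easy to quantify: base64 inflates the bytes by about a third, but base64 text carries at most 6 bits of entropy per character, so gzip recovers most of that overhead. A sketch with random bytes standing in for a small icon:

```python
import base64
import gzip
import os

img = os.urandom(3000)  # stand-in for a small icon's file bytes
uri = "url(data:image/png;base64," + base64.b64encode(img).decode() + ")"

print(len(img))                           # 3000 raw bytes
print(len(uri))                           # ~4030: base64 costs ~33%
print(len(gzip.compress(uri.encode())))   # back near the original size
```

So inside an always-cached, gzipped stylesheet, the effective cost of inlining is close to the raw image size - the ~33% penalty only bites if the CSS is served uncompressed.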
Title: Re: Doc Sheldon POST > Making your Website Lightning-fast Isn't that Difficult
Post by: Torben on September 02, 2017, 06:24:04 PM

What ergophobe said, plus combining css and js is not necessary when using https
Title: Re: Doc Sheldon POST > Making your Website Lightning-fast Isn't that Difficult
Post by: ergophobe on September 02, 2017, 06:27:36 PM

What ergophobe said, plus combining css and js is not necessary when using https

When using HTTPS or when using HTTP/2?  I would have said the latter. I don't think HTTPS would have an impact there would it?
Title: Re: Doc Sheldon POST > Making your Website Lightning-fast Isn't that Difficult
Post by: ergophobe on October 03, 2017, 06:24:10 PM
Meanwhile, I told a provider that I didn't want their widget on my home page because of page load impact.

They told me not to worry, because it only added 1.5 to 2 seconds to page load time. I bit my tongue, but wanted to say that for some sites my target *total* is 2 seconds on GTMetrix or

It's like when we were building a house and people would tell us it was "only" an extra $30,000 to add X.
Title: Re: Doc Sheldon POST > Making your Website Lightning-fast Isn't that Difficult
Post by: BoL on October 03, 2017, 07:29:31 PM
There's an epic blog post in the contents of this thread.

To add a couple of points to what's already covered:

> PNG and JPEG

Worth mentioning 'lossy vs lossless' for Googling purposes

> gzip is the right choice

IIRC with NGINX you also have the option of saving an already gzipped version of a file, potentially speeding up a CPU-bound (or even disk bound) setup
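That NGINX option is gzip_static: drop a pre-made style.css.gz next to style.css and the server hands it out as-is, with zero per-request compression CPU. A build-step sketch for producing those siblings:

```python
import gzip
import tempfile
from pathlib import Path

def precompress(path):
    """Write a sibling .gz at max compression so the web server
    (e.g. nginx with gzip_static on) can serve it without compressing
    on every request."""
    p = Path(path)
    Path(str(p) + ".gz").write_bytes(
        gzip.compress(p.read_bytes(), compresslevel=9))

# Demo on a throwaway file
css = Path(tempfile.mkdtemp()) / "style.css"
css.write_bytes(b"body { color: red; }\n" * 200)
precompress(css)
gz = Path(str(css) + ".gz")
print(css.stat().st_size, "->", gz.stat().st_size)  # big win on repetitive text
```

Because it runs at build/deploy time, you can afford level 9 (or a separate brotli pass) instead of the cheaper levels servers use for on-the-fly compression.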

Also I think servers tend to not use gzip for filetypes like JPG so it's a non-issue in respect to certain file extensions, which was kind of mentioned but not explicitly.

I'd mentioned optimising images to someone recently, who had 250KB "header" images on their WP site. I proxy from a front-end server to a WP server so there's a slight time penalty involved... but I'd pointed out that their images could be halved in size with little difference in image quality regardless.


Definitely HTTP2 would help. wrt HTTPS, certainly issues come into play with how many hostnames are involved. Keep-Alive and the Nagle algo are maybe worth mentioning too in this area.

Also loading asynchronously...

Other stuff...

A CDN is going to help a lot for static stuff. 'loads quick' is pretty relative to geographic location, ISP etc... which I'm sure Google is more than aware of.

All the other stuff about caching server side stuff, between the server and application, between application and DB, memory allocated to DB etc etc.

There's a ton to cover. Great points raised. Seems like the topic deserves a good 10,000 words going through the nuts and bolts.
Title: Re: Doc Sheldon POST > Making your Website Lightning-fast Isn't that Difficult
Post by: BoL on October 03, 2017, 08:13:27 PM
One thing to easily overlook is the actual web server itself too. I'd say the easy way out is to use a CDN which is going to be highly performant at spitting out large volumes of files.

I was benchmarking a standard NGINX setup vs Caddy and a lesser known one called haywire. NGINX can throw out around 160K requests a second on my laptop, and haywire closer to 400K requests a second.

haywire doesn't have fancy bells and whistles, though, whereas with NGINX (and Apache) you can bolt on pretty much anything with ease.

Kind of ties in with what I was reading about NGINX and microservices:

There's lots of general advice wrt optimisation, but there's a bit to be said about delegating tasks to one server/app that does the job very well.

Title: Re: Doc Sheldon POST > Making your Website Lightning-fast Isn't that Difficult
Post by: ergophobe on October 04, 2017, 01:50:41 AM
delegating tasks to one server/app that does the job very well

Which makes me think we have yet to touch on reverse proxies and nginx and general server vs reverse proxies.

10,000 words

No doubt... and it shouldn't be too hard to make it much higher quality than the advice Doc Sheldon was peddling.