Don't inline ALL SVGs #65

Closed
XhmikosR opened this issue Aug 6, 2017 · 19 comments


XhmikosR commented Aug 6, 2017

https://gtmetrix.com/reports/gohugo.io/1afmLJr4

As you can see, this is bad: the initial HTML size is quite large. Netlify supports HTTP/2, so inlining everything the way you do is not performant.

@bep bep added the bug label Aug 6, 2017

budparr commented Aug 6, 2017

@rdwatters and I have had several conversations on this topic. I think he felt it was early, browser-wise, to go all in on http2 (and we did some tests where the page weight was faster than the request).

Also, it looks like the assets aren't gzipped, which is odd, because I think Netlify does that automatically (I'm getting ready to switch them to hashes, so I'll look at it then). The other big fail is a piece of content, a PNG.

Maybe the title of the issue should be performance review?


budparr commented Aug 6, 2017

confirming there is no content-encoding:gzip
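For anyone who wants to verify this from a script rather than eyeballing dev tools, here's a minimal sketch using only the Python standard library (the sample headers at the bottom are hypothetical, not captured from gohugo.io):

```python
import urllib.request

def fetch_headers(url):
    """Request a URL while advertising gzip support and return its response headers."""
    req = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
    with urllib.request.urlopen(req) as resp:
        return dict(resp.headers)

def is_gzip_enabled(headers):
    """True if the response headers advertise gzip (or brotli) compression."""
    return headers.get("Content-Encoding", "").lower() in ("gzip", "br")

# Hypothetical responses, for illustration only:
print(is_gzip_enabled({"Content-Type": "text/html", "Content-Encoding": "gzip"}))  # True
print(is_gzip_enabled({"Content-Type": "text/html"}))                              # False
```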


XhmikosR commented Aug 6, 2017

My main concern was the huge HTML size due to you inlining SVGs, the logo especially.

The rest are still issues of course but I thought you were aware of those.

Anyway, means of generating the CSS and JS files are missing from the repo too.


budparr commented Aug 6, 2017

I wasn't aware of the gzip issue. I was aware of the issue you raised, but it's something we discussed, and we decided to go with this approach for the moment.

Anyway, means of generating the CSS and JS files are missing from the repo too.

I might be missing your meaning, but the CSS/JS files are in the SRC folder and they're generated via Webpack (or, specifically, an NPM script that generates them via Webpack). Is that what you're looking for?

At any rate, thanks for bringing up the issue.


XhmikosR commented Aug 6, 2017

Ah, you are right. I was looking at the root of the repo.

You should also look into hooking uncss or at least purifycss. And after that critical-css and make sure you async load all JS.
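For context on what uncss/purifycss-style tools do, here's a deliberately naive Python sketch (real tools parse the CSS and the rendered DOM properly; this regex version only illustrates the idea of dropping rules whose class selectors never appear in the markup):

```python
import re

def strip_unused_classes(css, html):
    """Naive illustration of unused-CSS removal: keep only rules whose
    class selectors appear in the HTML. Not production-grade parsing."""
    used = set()
    for attr in re.findall(r'class="([^"]*)"', html):
        used.update(attr.split())
    kept = []
    for rule in re.findall(r'[^{}]+\{[^{}]*\}', css):
        selector = rule.split("{")[0]
        classes = re.findall(r'\.([\w-]+)', selector)
        # Keep class-free rules (element selectors) and rules with used classes.
        if not classes or any(c in used for c in classes):
            kept.append(rule.strip())
    return "\n".join(kept)

css = ".pa2{padding:.5rem}.dn{display:none}body{margin:0}"
html = '<div class="pa2">hi</div>'
print(strip_unused_classes(css, html))
```

One caveat that comes up later in this thread: with a utility framework like Tachyons, class names are composed in templates, so static analysis like this can easily strip classes that are actually in use.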


XhmikosR commented Aug 6, 2017

BTW this is why I had an argument with rdwatters on gohugoio/hugo#3700.

This is just overdoing things. When you make the HTML size so big (remember, what you see is with gzip), there is extra delay before anything else starts downloading. For small files it's OK, but this is too much.


budparr commented Aug 6, 2017

I'm quite happy with Tachyons, thank you. It gzips to around 15kb. If you want to submit a PR with critical CSS or the other things, that'd be great, but I don't think it moves the needle on a 15kb CSS file.


rdwatters commented Aug 7, 2017

@XhmikosR I don't believe we had an argument, but FWIW, it seems like a lot of people on the forums and in GH threads are very passionate about matters of performance. To be honest, I usually just defer to @jhabdas on these matters.

But that's not why I'm responding to this thread, which has quickly grown from a minor suggestion into an unsolicited, discursive performance lecture that could have been handled more effectively via a simple PR. (For example, you'd add async here.) EDIT: Bud just made a good point that this actually isn't needed.

As @budparr mentioned, we had multiple conversations about performance. The Logo SVG was optimized using SVGOMG and we inlined selected svgs for flexibility and reduced HTTP requests. @budparr has been very mindful about keeping our asset sizes to a minimum, so I assure you perf has been a hot topic since Bud was nice enough to start with this redesign. For context, the previous design included an animated SVG on the homepage, which was only possible inline since you can't hit it with CSS via <img> tag.

That said, I don't think this thread is about inline SVGs. I think it's an opportunity for you to engage in a spirited performance debate about http2 and multiplexing, so I'll do my best to keep you happy as a user of Hugo, but I have to tell you that I think people who consider "speed" as the only metric of a good website are generally just nincompoops. It seems you've fallen into the camp that thinks of http2 as panacea. From everything I've read by many people much smarter than me, it's not. Multiplexing is a wonderful advancement, but it's a consideration and not a singular solution to every use case.

You seem to contradict yourself by saying that we should reduce the HTML page size because of the advantages of http2 but then later say we should focus on CRP, which would make HTML pages larger, no?

It might be worth your time looking into how gzipping works in more detail; basically it's a lossless compression algorithm that replaces repeated data with jump and length values. I'm not here to expand on gzipping--I'm a content strategist and not an engineer--but it's important to realize that such compression algorithms tend to be more effective on larger files since, obviously, larger files have more repeated values. This is also a consideration for stylesheets in the critical render path--another one of your suggestions unrelated to SVGs--that you may not have heretofore considered.
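The point about repetition is easy to demonstrate with Python's gzip module (a standalone illustration, not measured from the site itself):

```python
import gzip
import os

# DEFLATE (what gzip uses) replaces repeated byte sequences with short
# back-references, so repetitive markup compresses dramatically better
# than incompressible (random) bytes of the same length.
repetitive = b"<div class='pa2 mb3'>item</div>" * 1000
random_data = os.urandom(len(repetitive))

rep_size = len(gzip.compress(repetitive))
rand_size = len(gzip.compress(random_data))

print(len(repetitive), rep_size, rand_size)
```

The 31,000 bytes of repeated markup shrink to a few hundred bytes, while the random input barely changes size at all; this is also part of why utility-class CSS like Tachyons gzips so well.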

So, when do you minify/concatenate/bundle vs compress multiple individual files vs inline vs make external calls to static assets?

The true, considerate, professional answer is as follows:

It depends.

As Bud already mentioned, Tachyons is pretty damn tiny as it is, and most of the pages on gohugo.io are very small. We're also distributing the entire site from a global CDN, which helps for reasons I hope you already know. Not that it's part of performance, but my favorite part is that now content is more manageable because we treat content as discrete entries rather than standalone pages and keep it all in the world's most amazing version control.

In addition to your suggestions, do you know what else we could do?

  • Optimize that PNG on the homepage, which I agree with Bud is probably the real killer.
  • Set cache controls so that end users have the entire homepage for at least a month. I mean, it's the one page that's going to change the least frequently, right? But I'm sure you've factored content strategy and user behavior into your unequivocal performance advice.
  • We could create AMPs, since those would technically be faster than any of the recommendations you've made above.
  • We could refactor all the CSS and use different pre- and post-processing so that not a single CSS rule goes unused
  • Blah blah blah...

OR we could do the following:

Stop talking about ideal states and make decisions in the light of constraints, including, but not limited to, staffing, support, maintainability, manageability, architecture, analytics/usage, etc.

This is what I meant before when I said "considerations":

  • This is a content site, so where should we prioritize our time and energy? With limited staffing, maybe we should make sure the documentation is accurate, easy, and meaningful to our readers? Maybe we make small concessions on what a machine wants (e.g., your speed score) vs what a human needs for a meaningful experience (e.g., a larger JS bundle that makes it easier to get people what they need after the page has loaded). Compared to the previous designs, trust me, Bud is killing it in this department 😄
  • I don't have an opinion about Tachyons, but a lot of people do. Pay attention to that second clause: "a lot of people do." So maybe we should weight the efficiencies of a shared language or framework against total number of HTTP requests in the event that Bud has other priorities in the future and we want to make this site easy to improve and not just access.
  • Mobile users might suffer from the extra call to an external stylesheet, but we can also cache our styles for longer when not embedding as part of CRP. Our analytics tell us visitors are roughly split down the middle in terms of return vs new visitors, and the average session includes 4+ pages, so maybe the best way to make HTML pages smaller is cache the hell out of that stylesheet and look to other facets like DNS prefetch/prerender based on the user's behavior flow. The analytics also tell us that the overwhelming majority of users (86%) are not accessing the site on mobile, btw.
  • We could obsess about every performance tweak in the gohugo.io theme, but maybe the official Hugo theme should be about more than just performance for our end user. Maybe it should demonstrate the project's rich templating capabilities (e.g., the ability to use dict with partials of any text file, including svg, as Bud did in the theme, which was my original response to our "argument").
  • As far as performance, every developer installs the docs as part of the binary. To my knowledge, there are few distances shorter than zero, and there's nothing faster than "completely available offline."
  • Bud and I are both pretty passionate about accessibility, and as I'm sure you know, inline SVGs are more predictable than <img> because their source is directly in the DOM and therefore more readily available to the accessibility APIs used by assistive technologies (AT). That said, I appreciate the serendipitous finding you created by filing this issue, since it compelled me to create my own: ref Inline SVGs should be improved for accessibility considerations... #66.
  • There's more, but I think you get the picture.

With respect to your other comments, please open up individual issues for each (e.g., switching to CRP, async JS, changing the SVG partials) if you think they're worth the effort.

Thank you for your continued support of Hugo, and I'll make sure to reach out to Netlify to find out about the gzipping.
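If the cache-control idea above were ever actioned, Netlify reads custom headers from a `_headers` file in the publish directory. A sketch with an illustrative one-month value (2592000 seconds ≈ 30 days; not a tested recommendation for this site):

```
/
  Cache-Control: public, max-age=2592000
```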


ghost commented Aug 7, 2017

@XhmikosR thanks for contributing. @rdwatters thanks for the ping.

Lots of chat here. Anything immediately actionable on this awesome creation from @budparr? Always room for improvement of course. Personally I try to rely on the following guiding principles before deciding to take action on my websites:

  1. Pareto principle. Is the improvement worth it now given broader context?
  2. DMAIC. How to approach the improvement/change in a methodical manner. I've written a little about DMAIC here if you're interested: https://habd.as/talks/screaming-fast-wordpress-redis-vultr/

Re: HTTP/2 here's a recent article you may be interested in with some good comments forming from the dev community: https://jakearchibald.com/2017/h2-push-tougher-than-i-thought/

I'm not sure how @budparr wants to handle this issue. But I personally tend to try and keep my issues pruned in my repos unless there's an immediate and pressing need of the majority.

Cheers!


ghost commented Aug 7, 2017

@XhmikosR almost forgot to mention, you may find the following of interest for your own sites and prototypes using http/2 and server push: http://caddyserver.com/


XhmikosR commented Aug 7, 2017

Just to clarify: I'm not here to educate, nor did I claim I know everything about performance.

That being said, there was a reason I didn't CC @rdwatters, and it's exactly this one. I don't have the time to engage in such discussions; I just wanted to bring the issues with your current implementation to your attention.

Now, back to the topic:

  1. I never claimed HTTP/2 is a panacea. BUT multiplexing is what it does by its nature; read about HTTP/2 multiplexing.
  2. That being said, what you have now is wrong; try to understand why I say the HTML size is big. Read about TTFB, first meaningful paint, etc.

(Lighthouse screenshot)

This is my last post on the subject. No hard feelings; it's your issue. I just wanted to bring it to your attention without having to go through all this.


ghost commented Aug 7, 2017

It's all good @XhmikosR. Thanks for the info.

@rdwatters @budparr I strongly advocate a DMAIC approach to performance and UX. In order to do so it becomes necessary to start measuring performance. Snapshots of performance are almost never useful—it'd be like looking at a stock price on any given day and trying to determine how well the stock is doing.

If you'd like to measure ongoing performance of the site I wrote an article on how to use Speed Tracker (FOSS tool using Web Page Test API and Lighthouse) to measure perf: https://habd.as/monitor-pwa-website-performance/

I used it to monitor the theme site and, if you look here, you'll see what happened after I added image lazyloading: http://speed.habd.as/themes_gohugo_io-cable/?period=year


budparr commented Aug 7, 2017

Thanks for the tips, everyone. And thanks @XhmikosR for reporting the issue. The issue around gzipping is of far more concern to me, tbh.

When we initially had the discussion I compared the performance of requesting the image versus inlining it, and the inline version outperformed (this was done considering that not all users are going to have browsers that support HTTP/2, a concession we factored in). So I made a decision based on that test. @jhabdas has subsequently brought up a technique of lazy-loading images even when they're in the initial viewport, so that's something I may look at, because no matter how it's loaded, the first thing on that page is a >100kb image.

As far as these other things...

You should also look into hooking uncss or at least purifycss. And after that critical-css and make sure you async load all JS.

Can you show me how they would improve performance in concrete terms? I tested critical CSS on the site. For a ~15kb gzipped CSS file it did nothing to improve performance and created a slightly worse user experience. Perhaps you made the suggestion thinking that our CSS file was much larger, I don't know, but critical CSS, like all performance rules, needs to be used in context and not as an absolute, in my view.

My JS is loaded at the bottom of the page. Can you point me to a test or something to show that it needs to be asynced there to improve performance? I truly don't understand.

I think these other things you suggested strip out unused CSS(?). What you may not know is that we don't build our CSS/JS with every site build; it's not necessary. The nature of Tachyons is that a developer can come in and make changes without necessarily writing any CSS. Believe me, I agonized over the size of the uncompressed file in terms of its efficiency before I decided to use it, but I think it's a good trade-off, and the way it's written is such that it compresses quite nicely. I don't think uncss is appropriate here; I'd be happy to be proven wrong. Webpack has some features that might help in this regard, but I haven't been down that road and, again, I'm not sure it's worth it.

At any rate thanks again everyone for your concern and taking the time to write here.

rdwatters commented

Hey guys. Thanks for the intel to both @XhmikosR and @jhabdas. (And @budparr.) As an FYI, I reached out to Netlify, and Jessica was nice enough to check on the gzipping issue for me and wrote a few follow-up emails. Man, they have great customer service.

Also, I learned of this site, which is probably old hat at this point for everyone on this thread and great if you're too lazy to check dev tools 😆:

https://checkgzipcompression.com/?url=https%3A%2F%2Fgohugo.io

Anyway, our compressed size is at ≈43k. That's still a single packet, no? Wouldn't making this an external call to an <img> tag add an additional 1400-byte header? (This is a serious question for @jhabdas.)
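On the single-packet question: at a typical ~1460-byte TCP segment size it isn't one packet, though with a common initial congestion window of 10 segments (per RFC 6928) the ≈43k transfer mentioned above still only needs a few round trips. A back-of-the-envelope sketch, using typical constants rather than anything measured from gohugo.io:

```python
import math

# Assumed typical values, not measurements:
MSS = 1460          # common TCP maximum segment size, in bytes
INIT_CWND = 10      # common initial congestion window (RFC 6928), in segments

html_bytes = 43 * 1024  # the ~43k compressed homepage discussed in this thread

segments = math.ceil(html_bytes / MSS)

# Rough round-trip estimate assuming the window doubles each RTT (slow start):
rtts, window, sent = 0, INIT_CWND, 0
while sent < segments:
    sent += window
    window *= 2
    rtts += 1

print(segments, rtts)  # 31 segments, delivered in about 3 round trips
```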

Have we looked at other pages? Most are coming in between 24k and 32k for the HTML. Seems like perf geeks (not an insult) love to check the homepage, which stands to reason since it's the single most frequently accessed page, but it only accounts for ≈18% of traffic for 2017.


ghost commented Aug 9, 2017

@rdwatters I know that I know nothing, except that I know one thing—we can use DMAIC. To introduce the methodology here is very simple and does not require deep knowledge of any particular browser, the users... none of that. All we gotta do is:

  • Add a PR here to introduce the new site: https://github.com/jhabdas/speedtracker modeling from one of the existing commits (should be clear).
  • In the PR add the front page and a landing page or pages so we're testing a typical flow and then simply start changing stuff to see what works and what doesn't based on the users, their technologies and things which may not be obvious without measurement over time.

If this seems worthwhile I can take a PR or work up a commit to add the site, and investigate Netlify's h2 (http2) options to run a RUM test.

rdwatters commented

Awesome, I'm a supporter of DMAIC (although hearing "Six Sigma" gives me the howling fantods but only because of personal corporate reasons). I'm deferring to @budparr since he is the guru of the theme itself. Cheers and thanks again @jhabdas!


ghost commented Aug 9, 2017

howling fantods

I'm stealing that. ><

rdwatters commented

I'm stealing that. ><

I did too:

https://www.brainpickings.org/2012/09/04/words-david-foster-wallace-mom-invented/

You're a fellow Land-o-Lincoln-er, yeah? RIP, DFW 😢


budparr commented Aug 9, 2017

@rdwatters Interesting about the gzip. I think there's a gap in my knowledge about what the headers look like with HTTP/2 push. WebPageTest gives me a warning about gzip not working, yet now I see in dev tools that the initiator is push/other, so I guess it's not the issue I thought it was. Thanks for checking with them.

I'm going to close this issue so folks who don't want the notifications don't get them. I'm happy to chat about performance on gitter or slack.

budparr closed this as completed Aug 9, 2017