Cherian Thomas

Friend | Co-founder of Cucumbertown | Cook | I make beautiful things that keep me up all day & night
27 December 2012

Page speed really does matter

At Cucumbertown we use a number of strategies to keep page loads in the 2-second range, and at most under 3 seconds. We are fanatical about this.

Naturally, we have quite a few alerts that fire if load times cross the 3-second threshold.

A couple of days ago, Chris Zacharias wrote about how page weight matters and how YouTube dealt with it. So when a mail pops up in your inbox with an alert from Google Analytics reporting over 20 seconds in page load, you stop everything to find out what’s happening.

Page speed 20 sec alert

A delay in page load is usually picked up immediately after a production push, either by random testing or by our highly engaged users. But that didn’t happen this time, and the alert came a day later.

That freaks you out. An unknown problem whose root cause you haven’t yet figured out is, in my opinion, a greater danger than a big bug where you know the issue.

We started probing and that’s when we saw results like this.

Site Speed Page Timings  Google Analytics

And of course Google Analytics was averaging load times, and the average was skewing the results.

Correlating this with other data, the jigsaw fell into place. Cucumbertown was picked up by a food channel in Nigeria and by a prominent blogger in Thailand, and this was bringing in the crowds. But the page load time in these countries, as you can see, is ridiculously high.

Cucumbertown is an asset-heavy website, and there is a significant cost associated with loading even the basic scripts, even though we delay everything through RequireJS and dynamically load JavaScript based on need. Even the basic DOM load is taking time.
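To make that concrete, here is a minimal browser-side sketch of the RequireJS pattern described above. The module names, paths, and the editor example are illustrative assumptions, not our actual code:

```javascript
// Boot with a tiny core, then pull heavy modules in only on demand.
// "recipe/editor" and the element id below are illustrative.
require.config({
  baseUrl: "/static/js",
  paths: {
    jquery: "lib/jquery.min" // assumes a local minified jQuery copy
  }
});

// Core boot path: load only what the first render needs.
require(["jquery"], function ($) {
  $(function () {
    // Defer the heavy editor module until the user actually asks for it.
    $("#edit-recipe").on("click", function () {
      require(["recipe/editor"], function (editor) {
        editor.open();
      });
    });
  });
});
```

Even so, the bootstrap itself still has to cross the wire, which is where geography bites.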

Corroborate this with the load time in the US at 2.5 seconds, the speed of light, and 43 ms latency for DSL devices across the globe, and it’s time to start thinking about putting assets on a CDN.
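Here is a rough back-of-the-envelope for why distance alone dooms a single-origin setup; the ~12,000 km figure for a US-to-Lagos path is my assumption:

```javascript
// Physics puts a hard floor on round-trip latency to a distant origin,
// independent of how well the page itself is optimized.
const C_KM_PER_MS = 300;    // speed of light in vacuum: ~300 km per ms
const FIBER_FACTOR = 0.67;  // light in fiber travels at roughly 2/3 c

function minRttMs(distanceKm) {
  // Round trip: there and back, at fiber speed, zero routing overhead.
  return (2 * distanceKm) / (C_KM_PER_MS * FIBER_FACTOR);
}

// Assumed ~12,000 km path from a US origin to Lagos:
console.log(Math.round(minRttMs(12000))); // ~119 ms floor per round trip
```

That floor applies to every uncached round trip, which is exactly what a CDN edge node removes.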

At Zynga we had initially relied on Akamai but later switched to Limelight, and they looked like a prime candidate to rely on.

But the recent activity by CloudFlare on Hacker News and the umbrella of features they offer seemed tempting to explore. So I took a dip in the water to test CloudFlare.

Right now this blog is served via CloudFlare on a free plan that includes CDNing the assets. The experience is good when there is a rush of requests, but if the site is not accessed consistently, it seems the cache gets evicted. The page load is worse on the first hit after a period of inactivity, but follow-up requests load pages in 1.5–2 seconds.


I have always thought of “CDN” as an enterprisey, expensive word. But here we are: a startup serving the globe, and this solution looks imperative.

Do you have any advice? How’s your experience been?

  • sageofdata

    You have two problems to solve. The easiest is distribution of static resources. You can take advantage of one or more CDNs and distribute images/CSS/JS and other static files through them. AWS has CloudFront, which makes this easy. However, even with a CDN, you may still have countries that are slow due to the distribution of data centers and fiber links.

    Your other problem is dynamic, server-generated content. This problem is much harder to solve, as it would require distributing your application and data layers. You may be able to hack something together using Varnish, which would cache most of the page but request dynamic parts from the application layer (something that could benefit you anyway by driving US page load times down).

    Now, in deciding how much effort you want to put into solving this problem, you need to look at what the traffic from places like Indonesia is worth to your site. Indonesia has limited fiber links to the rest of the world (mostly going to Singapore, and considering the growing size of the internet audience there, the links probably do not have enough capacity). If a country like Indonesia is important to you, then it would be worth investing in some computing resources hosted within Indonesia.

  • sureshkumarpeters

    I don’t recommend CloudFlare at all, since we need to change the DNS of the website, and if the CloudFlare DNS goes down then our site will be down. Everything else is perfect.

    • What are the odds of CloudFlare going down versus you going down? Keep in mind that they have thousands of customers who specifically use CloudFlare to keep their sites up and speedy. Since uptime is the core of their business, I’m guessing theirs will be higher than your own.

  • Two to three seconds is *slow*. DNS can’t really be improved (besides using split horizon, which I assume you’re doing), so that’s pretty much out of scope. Not counting DNS lookup, my blog loads in 250ms for me. I don’t have enough traffic that a CDN is worth setting up, so it’ll probably load rather slowly outside western Europe, but a one-second loading time is probably a good goal.

    • JS

      I totally agree with this – 2 to 3 seconds is way too slow for a web app. Our target load time is 500ms average, and we usually hit it. It takes careful optimization, caching and DB management, but it’s definitely possible for nearly any kind of app….

  • JohnE

    Why not cloudfront?

  • Ryan Chazen

    The discussion actually misses two critical aspects – network topology and device usage in 3rd world countries.

    Where is the speed issue coming into play? There are 3 areas of concern:

    1) App server and backbone speed
    – This is what the discussion is centering on. A CDN is good here.
    2) Connection speed between client and ISP
    – This is a critical issue when dealing with performance in 3rd world countries. A CDN will NOT help here!
    – As an example, there is likely a 200-300ms roundtrip from your servers to the customer’s ISP. By using a CDN, you remove this 200-300ms. But if the customer is using GPRS with 5000ms latency, your CDN has done almost nothing!
    3) Customer’s processing speed
    – If your customer is using an old BlackBerry phone, and your page is using complicated JavaScript, it could take 30+ seconds for the page to render for the user, even if the content were local.

    A way to get a grasp of the situation is not just to filter your page load speed by country, but to drill down further and also use the user agent. Filtering by country + user agent should let you know if the slowdown is caused by old Nokia phones, or if it’s across Firefox and Chrome as well.

    In many cases, the solution is to have a mobile WAP-like site that has no fancy resources, just text and HTML fields. You can use JavaScript to detect if the page is loading slowly (15+ secs) and pop up a link to the special WAP site.
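    The detection trick above can be sketched with the Navigation Timing API; the 15-second threshold and the /lite URL are illustrative assumptions:

    ```javascript
    // Decide whether a load was slow enough to offer a stripped-down site.
    // Threshold and "/lite" URL are illustrative, not a real endpoint.
    var SLOW_THRESHOLD_MS = 15000;

    function isSlowLoad(timing, thresholdMs) {
      // Navigation Timing: navigation start to end of the load event.
      var total = timing.loadEventEnd - timing.navigationStart;
      return total > thresholdMs;
    }

    // Browser-only wiring; guarded so the helper works outside a browser too.
    if (typeof window !== "undefined" && window.performance) {
      window.addEventListener("load", function () {
        // loadEventEnd is only populated after the load event finishes.
        setTimeout(function () {
          if (isSlowLoad(window.performance.timing, SLOW_THRESHOLD_MS)) {
            var banner = document.createElement("div");
            banner.innerHTML = '<a href="/lite">Try our faster, text-only site</a>';
            document.body.appendChild(banner);
          }
        }, 0);
      });
    }
    ```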

  • Don’t use cloudflare. It has horrendous issues with latency.

    You are much better off using Google PageSpeed Service. I’ve been testing it the past week or so and it kills Cloudflare.

  • There are a lot of issues with using GA for collecting performance data. Like you mentioned, their use of averages leads to a lot of problems. They also heavily sample and only collect timing data from browsers that support Nav Timing. Have you thought about using something like Torbit Insight to get more accurate performance data?

    • Thanks for this Josh. Will definitely take a look at it. GA was good for us until now.

      • GA will give you histograms; there are some limitations compared to Torbit, but it’s where your data is now.

        Before worrying too much, how big is the sample size for the slow regions?

        Few questions/comments…

        Why are you sharding across three domains? I’d serve the CSS from the main domain, as the DNS has already been resolved and some browsers will have pre-emptively opened a second TCP connection.

        Have you tried flushing the response early, after the head, so that the browser can start requesting the resources in the head sooner?

        You have a vary:cookie directive on the HTML page, which you may as well remove, and mark the page private via cache-control.

        Suspect you’ve got too much CSS/JS; although it compresses over the wire, the browser still needs to decompress it before parsing / executing.

        Use WebPageTest and PageSpeed Insights’ Critical Path Explorer to understand what’s actually happening on the page during load.

        Look at what could be deferred until after onload, e.g. the UserVoice widget and other JS.

        Try to find a CDN that allows you to serve the dynamic page through them, as it will remove some of the TCP latency issues on first request.
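        The flush-early suggestion can be sketched with a bare Node server; the handler and markup are illustrative, not a real setup:

        ```javascript
        // Send the <head> as soon as it's ready so the browser can start
        // fetching CSS/JS while the server is still building the body.
        const http = require("http");

        function renderHead() {
          return '<html><head><link rel="stylesheet" href="/static/main.css"></head>';
        }

        function renderBody() {
          // Imagine slow template rendering or database calls here.
          return "<body><h1>Recipes</h1></body></html>";
        }

        const server = http.createServer((req, res) => {
          res.writeHead(200, { "Content-Type": "text/html" });
          res.write(renderHead()); // first flush: head goes out immediately
          // ...expensive work happens after the head is already on the wire...
          res.end(renderBody());
        });
        // server.listen(8080); // left commented in this sketch
        ```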

        • This is a beautiful analysis Andy.

          I’ll look into the domain sharding.

          The sample size is not significant right now, but I fear that our lack of preparedness will affect potential growth.

          I’ll check the flushing.

          JS is fairly optimized, though there is a lot of CSS refactoring that needs to happen. This is one of the side effects of the growth we are experiencing.

          All third-party widgets and most JS are executed post DOM load.

          I am looking for a good-value CDN with low price points that can serve dynamic pages.

          Would love to hear more from you. Can you connect via my Facebook icon at the bottom left?

  • Give New Relic a try … it was night and day for us!

  • After a quick peek at your website with Chrome DevTools, I noticed you did a great job with minifying the resources, enabling compression, etc. Now I think you should focus on the REST calls; it seems these are the ones significantly increasing your page load time. At least, I got 1.2 s response times for most of them. 😉

  • HTML5 offline? Instant page loads bro.

    Sure, it’s a beast to get a good working setup with HTML5 offline, and you may need to adjust tooling to fit your needs exactly, but the benefits are well worth it, especially for someone as performance-minded as yourself.