Pantheon Community

Tips for making site traffic stats more accurate

Thank you so much for trying to find a solution to this, @sparklingrobots!

I feel like there’s a fundamental problem with Pantheon charging based on traffic but not giving us any way to see the traffic you measured. It can’t just be a black box where we’re supposed to trust you to decide how much to charge us, right? That just doesn’t seem like a good way to do business.

Without access to this data, we have no way of knowing whether someone is hitting our site with malicious scans or something else that might not only be inflating our metrics but also endangering us and the rest of your customers.

7 Likes

Your point about “no way of knowing” is excellent. One concern is that perhaps there’s something in the code that’s being exploited that’s hard to discover without access to complete logs.

4 Likes

In my previous go-around with this, one suggestion was that I temporarily turn off caching for the entire site so I could see more in my nginx-access.log. Take it with a grain of salt.

1 Like

Just wanted to provide a short update: I’ve included all of the folks commenting here on an internal feature request to allow access to the Global CDN logs.

While I can’t say when or if this will come to pass, thanks go to each of you for your honest feedback. I hear your frustration. <3

1 Like

I’d love to find a solution to this. Not for accessing the logs; I can do that via FTP. What I need is a solution for Pantheon’s site traffic metrics being radically higher than Google Analytics. Pantheon has already increased our monthly charge, and the new plan is not tenable. We will have to move to another hosting platform if we can’t figure out what is going on. Any advice would be appreciated, because I’d hate to leave Pantheon, but we cannot afford what we are currently being charged. Help!

3 Likes

In lieu of some way to see what Pantheon’s metrics are based on (I agree with @johndubo that log files ain’t it, even if we could get them more easily), it seems like there should at least be a way to appeal when those stats are having a financial impact on us. And in the case of an appeal, I think it should be Pantheon’s responsibility to investigate for potential errors in their data and signs of malicious activity such as crawlers that we can’t do anything about.

I just don’t see how we can be charged for bandwidth that we aren’t actually using in any way that we are aware of or in control of.

4 Likes

Totally agreed that we customers need more information from the traffic metrics, especially since that’s what the pricing is based on. We should be able to get a detailed report of the traffic breakdown for any time period, including all URLs and traffic counts for those URL requests.

4 Likes

I concur 100% with this request.

3 Likes

Is there any movement on this? I installed Cloudflare in front of our site and I don’t think it is going to make much difference. Pantheon’s CDN metrics are about 10x what New Relic and Google Analytics are giving.

There has to be a way for me to figure out why that is. Has to. I have already called Acquia to move over to them because I cannot afford to have to go to a Performance XL plan for my little site that, according to Google Analytics, had 60,000 pageviews in the month of November. Pantheon logged 1.49 million. What the hell? What. The. Hell?!?!? I am so frustrated.

5 Likes

I’m also very interested in getting better visibility into the CDN metrics. The Pantheon metrics report multiple times what Google Analytics reports.

2 Likes

Hey Ruby - we’re definitely open to appeals, and where we can identify bots, crawlers, and status checkers we do exclude those from the metrics. We do this proactively, but also can’t catch everything, so having customers surface issues is definitely helpful.

However, there’s also just a lot of traffic that won’t show up in Google (see the docs for examples) which we do need to account for in the measurements, so it’s never going to be an exact match. Depending on the site and its usage pattern it can unfortunately differ by quite a lot.

I know that’s difficult when it impacts a budget. Over time we’ll be able to provide more and more visibility in the metrics UI, but we do have to be consistent and fair with how our pricing works.

1 Like

CloudFlare isn’t necessarily a good tool for reducing your usage of Pantheon. I’ve seen it actually increase the number of pages served from Pantheon due to pre-fetching.

I know it’s no fun to have a shocker traffic bill. It does sound unusual to have 60k “pageviews” tracked in GA but 1.5M “pages served” from Pantheon; there could be a high volume of API calls, clients that don’t report back to GA, or a number of other causes.

Unfortunately from our end we have to be consistent and fair with what we measure and how we charge. In the future we’ll be able to provide increasing detail and insight in the metrics area, but it will never be perfect, and it also cannot possibly match Google since we’re fundamentally measuring different things.

1 Like

Thanks Josh. This is a Drupal platform we are using. Could things like having, in the sidebar on every page, a mini calendar and a quicktab (module) with the most popular pages and most recent Disqus comments be driving up our number of “pages served”? I’ve wondered if that has more to do with this issue than any sort of malicious traffic or bots.

1 Like

If your site makes AJAX or other types of requests, those will run up the traffic count.

Disqus probably isn’t a factor specifically, since those requests go to them (not Pantheon), but I’ve definitely seen cases where something “programmatic” like this results in off-the-charts kinds of numbers.

1 Like

Both of those blocks I mention use AJAX. Would this run up the numbers on Cloudflare, too? Because the numbers there are sky high as well.

Thanks for the help Josh. I really appreciate it.

1 Like

Yesterday we were notified that Pantheon is moving us from Performance Small to Large, increasing our monthly cost by 260% based on your secret metrics that we have no evidence to believe are accurate.

We will appeal and request an audit, but that kind of individual solution is not an effective way to address a problem that is clearly impacting many customers. Charging us based on mysterious black box statistics is an irresponsible business practice, and failing to offer transparency into what is behind these statistics is potentially a large security hole.

I’ve always been such a fan of Pantheon and have been referring people to you for years and years. I just can’t express how disappointed I am about this.

5 Likes

@ruby - the metrics are definitely accurate. Our pricing is based on the amount of traffic served by the platform. Every request to the platform is logged and this is the source of the numbers. This does not (and won’t ever) align with other data sources that attempt to measure views. It’s apples and oranges.

@johndubo - Yes, the AJAX requests are likely a big part of your numbers. As per the above, AJAX requests (as well as RSS, XML, JSON, etc.) are all going to show up in our traffic stats because they are requests we serve. However, they’re not going to be in any “pageview” metrics like Google Analytics.
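
To illustrate with made-up numbers (purely hypothetical, not anyone’s actual traffic): if every pageview on a site also fires five AJAX calls, then 60k GA pageviews already implies 360k requests served, before you add feed readers, API clients, and bots that never execute the GA tracking script at all.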

Thanks for everyone’s feedback on this thread. I understand that this is difficult for some customers, and I especially empathize with folks who are operating under budget constraints, particularly long-time customers for whom this is coming as a surprise. It’ll frankly be a lot easier for new customers who find out right away that they’re on the wrong plan.

Even though it’s difficult, our pricing structure has to be enforced or else it’s not really meaningful. Our goal is to do this consistently. Happy to take additional feedback on what you think would make the process more fair or equitable.

1 Like

Is there an ETA on when we’ll be able to see these metrics on a request by request basis? Does Fastly provide raw access logs?

@jfoust - more detailed logging is going to take some time. The data is based on the raw logs, but those are platform-wide, which is billions of requests a day. Aggregating and storing those on a site-by-site basis for all sites for all days would be quite costly in terms of both compute and storage, so we need to build a data pipeline that can produce reports that are per-site on demand. It’s a little complicated.

Hey everyone, the Sales Engineering Team at Pantheon has been working with several of our contract customers to help them understand what’s going on with their traffic patterns and we wanted to share some of our findings with you.

First, some clarity
Before we dig into types of requests that count towards your metrics, let’s first explain how we define a request, how requests are calculated, and what types of requests we count.

  • There are two types of measurements: visits and pages served.
    • A visit is a unique combination of IP address and user agent within a 24-hour period. For example, if you are at home and visit your website from your laptop, and then again from your phone, those count as two unique visits. In an alternative scenario, 10 users in a campus computer lab using the same browser could register as one unique visit.
    • A page served is any single request against your site, which could be a standard HTML response, an API endpoint, or an RSS feed.
  • We don’t count the common bots (or any that identify as crawlers) that regularly hit your site, such as GoogleBot, Yahoo, Bing, SEMRush, etc. We also don’t count bots that identify as uptime monitors, like Pingdom or New Relic.
  • We don’t count static assets that are not generated by PHP: for example, icons, documents, or images stored in a theme folder or an uploads directory (wp-content/uploads, sites/default/files, etc.). If the asset is dynamically generated, such as when Responsive Images in Drupal renders various image styles (the first time), then we do count that call.
  • We don’t count redirects or errors (301/302, 4xx, 5xx).

Outside of these basic rules, the rest of the traffic is just part of your standard internet traffic; we’ll dive into some of the differences below.
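
To make these rules concrete, here’s a rough sketch of how you could approximate the same counting against your own nginx-access.log. This is purely illustrative, not our actual implementation: the log format, bot list, and static-extension list are assumptions you’d adapt to your site, and your appserver log only sees requests that miss the cache, so it will undercount what the Global CDN serves.

```python
import re

# Assumed nginx "combined" log format; adjust the regex to your actual format.
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

# Illustrative exclusion lists; the platform's real lists are longer.
BOT_MARKERS = ("googlebot", "bingbot", "yahoo", "semrush",
               "pingdom", "newrelic", "crawler", "spider")
STATIC_EXTENSIONS = (".css", ".js", ".png", ".jpg", ".gif",
                     ".svg", ".ico", ".pdf", ".woff2")

def summarize(log_path):
    """Approximate pages served and visits for a log covering one 24-hour period."""
    pages_served = 0
    visits = set()  # unique (IP, user agent) pairs
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            m = LOG_RE.match(line)
            if not m:
                continue
            status = int(m["status"])
            if status in (301, 302) or status >= 400:
                continue  # redirects and errors don't count
            path = m["path"].split("?", 1)[0].lower()
            if path.endswith(STATIC_EXTENSIONS):
                continue  # static assets not generated by PHP don't count
            ua = m["ua"].lower()
            if any(marker in ua for marker in BOT_MARKERS):
                continue  # self-identified crawlers and uptime monitors don't count
            pages_served += 1
            visits.add((m["ip"], ua))
    return pages_served, len(visits)

pages, unique_visits = summarize("nginx-access.log")
print(f"pages served (approx): {pages:,}  visits (approx): {unique_visits:,}")
```

Even a rough pass like this can show how much of your traffic the exclusions remove, and how far “requests served” sits from anything a JavaScript-based analytics tool would report.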

Non-human Traffic
This topic has been discussed previously but warrants a mention here.

If you’ve set up an API endpoint or custom scripts on your site that other pages are calling, that’s going to be counted in your Pages Served. A few examples include:

  • RSS feeds
  • JSON feeds
  • API endpoints
  • PHP scripts
  • Modules or plugins with embedded endpoints / direct scripts
  • AJAX calls

A great example of a module or plugin that counts against your pages served is the Statistics module in Drupal core. Every node visit additionally calls /core/modules/statistics/statistics.php, which doubles the number of requests per view.

These pages will never show up in your Google Analytics reports unless you have done something special with the GA API and Virtual Pageviews.
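
If you want to see which of these endpoints dominate your own traffic, a quick tally of request paths from your logs can surface them. A minimal sketch, again assuming the nginx combined log format:

```python
from collections import Counter

# Tally request paths to surface the feeds, AJAX callbacks, and scripts
# (e.g. statistics.php) that count as "pages served" but never appear in
# Google Analytics pageviews. In the combined log format, the request
# path is the 7th whitespace-separated field.
counts = Counter()
for line in open("nginx-access.log", encoding="utf-8", errors="replace"):
    fields = line.split()
    if len(fields) > 6:
        counts[fields[6].split("?", 1)[0]] += 1

for path, n in counts.most_common(25):
    print(f"{n:8d}  {path}")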

Bots that falsify their user agents will also count heavily toward your Pages Served, but not necessarily toward your visits, since they commonly come from a single IP and user agent. If you look into your logs, you may see some old versions of Chrome (as old as 31.x) requesting a single page from one or two IPs.
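
One way to spot these is to flag user agents claiming implausibly old browser builds. A sketch along those lines (the version cutoff of 60 is an arbitrary illustrative threshold, not an official rule):

```python
import re
from collections import Counter

# Flag requests whose user agent claims an old Chrome build, a common
# sign of a falsified user agent, and count them per (IP, version).
CHROME_RE = re.compile(r"Chrome/(\d+)\.")
suspects = Counter()

for line in open("nginx-access.log", encoding="utf-8", errors="replace"):
    fields = line.split()
    m = CHROME_RE.search(line)
    if fields and m and int(m.group(1)) < 60:  # illustrative cutoff
        suspects[(fields[0], m.group(1))] += 1

for (ip, version), n in suspects.most_common(20):
    print(f"{n:6d} requests  {ip}  Chrome/{version}")
```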

Common WordPress Patterns
WordPress sites have two common vectors that bad actors will always try out first.

xmlrpc.php
The first is the XML-RPC endpoint, which WordPress introduced a while back for offline content creation that would sync back to your site when you came online. Most of you aren’t using this, and shutting it down would be a very good thing for the safety of your site. There are plugins out there that will do that for you, but we also offer protected paths on the platform so you can lock it down.
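
As a sketch of the protected-paths approach, this is roughly what it could look like in pantheon.yml (check the current platform docs for the exact key name and behavior before relying on it):

```yaml
# pantheon.yml (sketch): deny public web access to xmlrpc.php.
# Verify the key and its semantics against the current Pantheon docs.
api_version: 1

protected_web_paths:
  - /xmlrpc.php
```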

wp-login.php
The second is the login page. As a platform, we don’t have anything built in to stop your content owners and developers from logging in to your site, but the login page does garner a lot of attention that can be less desirable. For our contract customers, we can help out with a customized CDN configuration that will whitelist (or blacklist) IPs or regions, or even implement a full WAF. Other techniques include changing the URL of the login page, whitelisting the pages in PHP, and limiting the number of login attempts. You’ll also want to consider enforcing strong passwords and multi-factor authentication.

Common Drupal Patterns
Drupal sites don’t have the same obvious pointy bits that WordPress sites do, but we commonly see huge spikes in traffic against search results. This was a common technique for crippling the database in the days before SOLR and Redis, but it is rarely a concern for site stability these days. It is, however, probably still malicious and should be addressed. Mitigation is a lot harder in these instances and would probably rely on blacklisting suspicious IPs.
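
To build a candidate blacklist, you can count per-IP hits against your search paths. A sketch, assuming Drupal’s default /search path (adjust to your site’s actual routes):

```python
from collections import Counter

# Count per-IP requests to Drupal search paths to spot abusive clients.
# "/search" is Drupal's default search route; adjust for your site.
hits = Counter()
for line in open("nginx-access.log", encoding="utf-8", errors="replace"):
    fields = line.split()
    if len(fields) > 6 and fields[6].startswith("/search"):
        hits[fields[0]] += 1

for ip, n in hits.most_common(20):
    print(f"{n:6d} search requests from {ip}")
```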

To a lesser degree, we do see some probing of the Drupal login pages, so it’s worth mentioning; mitigation techniques would be the same as for WordPress.


Overall, there are many ways to identify, reduce, and deflect these requests to limit their impact on your site, but they are still requests coming into your site, whether or not they’re being served a cached response from the Global CDN. The only way to reliably offset and reduce the number of malicious requests is through a site-specific layer of protection, such as an additional CDN layer or WAF.