How cloud-only synthetic monitoring leads to blind spots

Many of the world’s top DevOps teams, at leading companies like Microsoft, SpaceX, and CoreLogic, choose Uptrends Synthetic Monitoring for its reliability and for the ease with which it accurately and transparently simulates end-user experiences.

Employing a network of 229 check locations across the globe, Uptrends Synthetic Monitoring provides visibility into application performance no matter where you are in relation to your users, allowing you to render webpages the same way your users do, wherever they are. DevOps teams can be alerted quickly, then pinpoint and address performance issues before they affect the end-user experience.

Lately, a problematic trend has been gaining momentum: some companies are migrating to synthetic monitoring solutions whose checkpoints are located inside cloud providers like AWS and Azure. This mirrors the broader move to the cloud: as of 2020, 61% of businesses had migrated their workloads to AWS, accounting for 76% of enterprise cloud usage.

Often, these organizations believe the move will bring noticeable performance and stability improvements, or they expect cost savings to be passed on to them, savings that may or may not materialize.

But there are a number of reasons why migrating to a cloud-based synthetic monitoring solution may not produce the best results, either in long-term cost savings or in providing an accurate view of user experience. Here’s why.

Cloud-based synthetic monitoring creates blind spots

Synthetic monitoring solutions that employ a cloud-only approach often create problems for DevOps teams, hindering them from monitoring digital end-user experiences accurately and from identifying performance issues before actual users notice them.

The reason is that cloud-based synthetic monitoring produces more noise and weaker signals than monitoring run on backbone nodes in traditional datacenters connected to leading internet backbone providers. Simply put, checks running on AWS nodes, for example, tend to report faster performance metrics than the same checks run on backbone nodes, and those numbers are not an accurate representation of end-user experience because:

Monitoring an application hosted on AWS from within AWS rarely provides realistic results, because traffic often travels over dedicated network connections between data centers rather than over the public internet.
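
You can see this skew for yourself by timing the same endpoint from different vantage points. The minimal Python sketch below (the target URL and sample count are placeholders) records response times from whichever machine runs it; run it once from a cloud node and once from a backbone or office connection, and compare the distributions.

    # Minimal sketch: sample response times for one endpoint from this machine.
    # Run from different vantage points (cloud node vs. backbone) and compare.
    import statistics
    import time
    import urllib.request

    TARGET = "https://www.example.com/"  # placeholder for the monitored app
    SAMPLES = 20

    def sample_response_times(url: str, n: int) -> list[float]:
        """Fetch the URL n times and record wall-clock response times in ms."""
        times = []
        for _ in range(n):
            start = time.perf_counter()
            with urllib.request.urlopen(url, timeout=10) as resp:
                resp.read()
            times.append((time.perf_counter() - start) * 1000)
        return times

    times = sample_response_times(TARGET, SAMPLES)
    print(f"median: {statistics.median(times):.1f} ms")
    print(f"p95:    {statistics.quantiles(times, n=20)[18]:.1f} ms")  # 95th percentile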

Network operators can tweak routing policy to send and receive traffic through adjacent autonomous systems, a practice known as peering. BGP (Border Gateway Protocol) is the protocol that facilitates the global routing system of the internet.

BGP provides network stability by ensuring that routers can adapt to route failures. For example, if one path goes down, BGP makes routing decisions to find a new path based on rules or network policies set by network administrators.
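
As a rough illustration of that failover behavior, here is a toy Python sketch of BGP-style best-path selection. It is not a real BGP implementation; real routers compare many attributes, and this sketch keeps only two of them (local preference and AS-path length), with made-up AS numbers.

    # Toy sketch of BGP-style best-path selection and failover (illustrative only).
    from dataclasses import dataclass

    @dataclass
    class Route:
        prefix: str
        as_path: list[int]     # autonomous systems the traffic would traverse
        local_pref: int = 100  # operator-set policy knob; higher wins

    def best_path(routes: list[Route]) -> Route:
        """Prefer the highest local preference, then the shortest AS path."""
        return max(routes, key=lambda r: (r.local_pref, -len(r.as_path)))

    routes = [
        Route("203.0.113.0/24", as_path=[64500, 64501], local_pref=200),
        Route("203.0.113.0/24", as_path=[64502]),
    ]

    chosen = best_path(routes)
    print("chosen path:", chosen.as_path)  # policy (local_pref) beats path length

    # Failover: the chosen route is withdrawn, so a new best path is selected.
    routes.remove(chosen)
    print("after withdrawal:", best_path(routes).as_path)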

When routing happens within the same entity, AWS for example, it’s usually completely under the network operator’s control. But if the goal is to gain clarity into performance, reliability, and availability from an end-user perspective, monitoring needs to be done from locations in the path of service delivery: backbone, mobile, enterprise locations, and so on.

Not everything can be monitored from the cloud

But end-user experience isn’t the only case where monitoring from backbone, broadband, or enterprise checkpoints, as opposed to the cloud, is necessary. Other scenarios include:

  • Monitoring Service Level Agreement (SLA) measurements for services delivered to end users
  • Monitoring SLA measurements for third-party providers or vendors in the delivery supply chain — DNS, CDN, cloud providers, etc.
  • Measuring and validating the performance of CDNs
  • Competitive benchmarking for consumer service delivery products
  • Discovery and monitoring of network and ISP peering and connectivity
  • Availability, performance, and validation of DNS based on geographic locations (see the sketch below)
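
As a small illustration of the last point, the sketch below uses the dnspython library to ask several public resolvers for the same record, a rough stand-in for querying from different geographic checkpoints. The resolver IPs are real public services, but the hostname is a placeholder, and this only approximates true per-location testing.

    # Sketch: compare DNS answers across resolvers (requires: pip install dnspython).
    import dns.exception
    import dns.resolver

    RESOLVERS = {
        "Google": "8.8.8.8",
        "Cloudflare": "1.1.1.1",
        "Quad9": "9.9.9.9",
    }
    NAME = "www.example.com"  # placeholder for a GeoDNS-routed hostname

    for label, ip in RESOLVERS.items():
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [ip]  # bypass the local resolver configuration
        try:
            answer = resolver.resolve(NAME, "A")
            print(label, "->", sorted(r.address for r in answer))
        except dns.exception.DNSException as err:
            print(label, "-> lookup failed:", err)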

The primary issue with using cloud nodes for end-user experience monitoring is that end users don’t access websites or applications from within those environments.

AWS, Azure, Google Cloud, and the like are not ISPs, nor do they have the footprint to simulate the latency between geographies or to reflect the complexity of the network layer powering the internet.

Furthermore, monitoring from cloud locations won’t help you identify issues with ISPs or see how your application performs for end users.

The takeaway

End users don’t access websites or applications from within cloud environments, and leading cloud providers like AWS and Azure are not ISPs. Monitoring from cloud locations alone will therefore not help you identify issues with ISPs or see how your application performs for end users.

Synthetic monitoring from the cloud can provide some insight if your goal is to determine the availability and performance of an application or service from within the cloud infrastructure environment. Uptrends, by contrast, has 229 global checkpoint locations simulating the end-user experience.

Testing from the same geographic locations as your end users is key to successful monitoring. Because errors may affect only some users, the more granular the testing, the more likely the monitoring system is to capture a regionalized error.
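
In practice, that means aggregating check results per region and alerting when failures cluster in one geography. The short sketch below assumes a simple (region, ok) result shape, not any particular vendor’s API format.

    # Sketch: flag regionalized failures from per-checkpoint check results.
    from collections import defaultdict

    # (region, ok) tuples as they might arrive from distributed checkpoints
    results = [
        ("us-east", True), ("us-east", True),
        ("eu-west", True), ("eu-west", True),
        ("apac", False), ("apac", False), ("apac", True),
    ]

    counts = defaultdict(lambda: [0, 0])  # region -> [failed, total]
    for region, ok in results:
        counts[region][1] += 1
        if not ok:
            counts[region][0] += 1

    for region, (failed, total) in counts.items():
        if failed / total >= 0.5:  # alert threshold is arbitrary for the sketch
            print(f"ALERT: {region} failing {failed}/{total} checks")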

With a global audience, the size and distribution of the checkpoint network become increasingly important, which is where cloud-based monitoring solutions with much smaller geographic footprints fall short.

To learn more about the differences between using Uptrends checkpoints versus cloud-based checkpoints, sign up for a quick expert-led demo.