Real User Monitoring (RUM) reports data from real user sessions to paint a clear picture of a site’s performance compared against baseline data. RUM reports end-user metrics that break down how users actually experience your site.
This guide assumes you are logged into Uptime.com and attempting to add RUM. For RUM: Legacy, see our documentation here.
- Setting Up RUM
- Required Fields
- Optional Fields
- Basic RUM Tips
Setting Up a RUM Check
Basic RUM setup involves filling in a domain, retrieving a code snippet, and placing that snippet before any scripts you wish to track with RUM (usually within the <head> element). Without the snippet in your site’s code, we are unable to retrieve data.
To add a RUM check, click Monitoring > Checks, then Add New, and select Real User Monitoring from the Check Type dropdown. Once you click Save, Uptime.com provides the RUM code snippet.
About RUM Requests
RUM monitoring requires 100,000 requests per domain, e.g., 5 domains would require 500,000 RUM requests allocated to your account. A single domain can also generate more than 100,000 RUM requests, but it will stop generating data once the account’s requests are fully used.
An account owner can add additional requests at any time via Billing>Subscription.
See your Account Usage page for limits.
Placing the RUM Code Snippet
The RUM code snippet must be placed before any scripts you wish to track with RUM.
When placed correctly, the snippet appears within the <head> element, before all scripts you want to track.
RUM loads asynchronously and has a near-zero impact on site performance (comparable to a snippet such as the one used for Google Analytics).
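For illustration, a hypothetical placement might look like the following. The script URL below is a placeholder, not the actual Uptime.com snippet; copy the real snippet from your check after saving it.

```html
<head>
  <meta charset="utf-8">
  <title>Example Page</title>

  <!-- RUM snippet goes here, before any scripts you want to track.
       The src below is a placeholder for your actual snippet. -->
  <script async src="https://rum.example.com/rum.js"></script>

  <!-- Scripts placed after the snippet are tracked -->
  <script src="/js/app.js"></script>
</head>
```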
Required Fields
The following fields are required before saving a RUM check.
In the Domain field, provide the domain name that you will monitor with RUM.
Subdomains are tracked by default as long as the subdomain URL you wish to track includes the RUM code snippet. To track a subdomain only, enter the root domain name in the Domain field and place the code snippet on the subdomain pages only. No further URL grouping is required for subdomain monitoring, although URL grouping is always recommended.
It is possible to track third party domains as an Optional parameter.
By default, we aggregate reports based on the median of your data set. The median of a set of numbers is the middle number in the set (after the numbers have been arranged from least to greatest), so it helps provide a snapshot of a data series that is less sensitive to outliers that might otherwise skew results.
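To see why the median is less sensitive to outliers, consider this sketch (illustrative only, not Uptime.com’s aggregation code) comparing the median and mean of a small set of load times:

```javascript
// Median vs. mean for a small set of page load times (ms).
// The single 30000 ms outlier drags the mean up, but barely moves the median.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

const loadTimes = [900, 1100, 1200, 1300, 30000]; // one stalled session
const mean = loadTimes.reduce((a, b) => a + b, 0) / loadTimes.length;

console.log(median(loadTimes)); // 1200 — close to the typical experience
console.log(mean);              // 6900 — skewed by the outlier
```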
Aggregation also affects reporting, and can provide deeper insights into the user experience for certain segments of users.
Apdex stands for Application Performance Index, an open standard developed to measure the performance of software applications. It is designed to convert raw measurements into more specific insights about user satisfaction with the state of your applications.
Apdex has three designated user expectations:
- Satisfied - Satisfied users are fully productive, i.e., not impeded by response time. A sample is Satisfied when its value is less than or equal to the designated threshold (T).
- Tolerating - Tolerating users notice performance dips, but will usually continue. A sample is Tolerating when its value is greater than T but less than or equal to 4 times T.
- Frustrated - Frustrated users find performance unacceptable, and may abandon. A sample is Frustrated when its value is greater than 4 times T.
The Apdex formula adds the number of Satisfied samples to half of the Tolerating samples (Frustrated samples count for nothing), then divides by the total number of samples:
Apdex = (Satisfied + (Tolerating / 2)) / Total samples
(Formula per apdex.org.)
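As a worked example, the calculation can be sketched as follows (an illustrative implementation, not Uptime.com’s internal code), using the default 4000 ms threshold:

```javascript
// Classify each sample against the threshold T, then compute
// Apdex = (satisfied + tolerating / 2) / total samples.
function apdex(samplesMs, thresholdMs = 4000) {
  let satisfied = 0;
  let tolerating = 0;
  for (const t of samplesMs) {
    if (t <= thresholdMs) satisfied++;           // Satisfied: t <= T
    else if (t <= 4 * thresholdMs) tolerating++; // Tolerating: T < t <= 4T
    // Frustrated samples (t > 4T) contribute nothing to the score.
  }
  return (satisfied + tolerating / 2) / samplesMs.length;
}

// 60 Satisfied, 30 Tolerating, and 10 Frustrated samples:
// (60 + 30 / 2) / 100 = 0.75
const samples = [
  ...Array(60).fill(1000),  // fast responses
  ...Array(30).fill(8000),  // noticeable, but tolerable
  ...Array(10).fill(20000), // unacceptably slow
];
console.log(apdex(samples)); // 0.75
```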
Customizing your Apdex Threshold
Changing the Apdex threshold from its default of 4000 ms manually overrides the threshold for Satisfied, and the thresholds for Tolerating and Frustrated scale with it (4 times the new value).
Optional Fields
These fields are optional, but can improve the precision of RUM if any of these use cases apply to your application. We recommend reviewing them during check setup, as changes to Allowed External Domains after initial setup will require an update to your code snippet.
Bot traffic and other automated sources of traffic can generate data in unexpected or unintended ways; for example, outside tracking used on a marketing landing page, or event tracking for analytics purposes. Such bot traffic can skew bounce rates or reduce the precision of load time metrics.
To exclude user-agents, list them one per line. Uptime.com uses substring matching, so it is possible to exclude a user-agent simply by listing its name.
If all of the user-agents you want to exclude share a common term, such as a particular domain name, a single entry matching that term excludes them all; for example, the entry robot.com excludes every user-agent containing robot.com.
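Substring matching can be sketched like this (illustrative only; the patterns and user-agent strings are hypothetical):

```javascript
// A request is excluded when its User-Agent string contains any
// listed pattern as a substring.
const excluded = ['Googlebot', 'robot.com'];

function isExcluded(userAgent) {
  return excluded.some(pattern => userAgent.includes(pattern));
}

console.log(isExcluded('Mozilla/5.0 (compatible; Googlebot/2.1)')); // true
console.log(isExcluded('crawler.robot.com/1.0'));                   // true
console.log(isExcluded('Mozilla/5.0 (Windows NT 10.0) Firefox'));   // false
```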
RUM excludes the most frequently used bots by default.
External domains can affect Time to Interactive (TTI) through AJAX calls or other scripts, such as calls made to a third-party service to retrieve raw data. Such calls might be considered necessary for a page to be fully interactive, and RUM can include performance data for these external domains.
Note: Fonts and other static assets are automatically tracked by default in the corresponding metrics, whether they are hosted on the current domain or an external one.
To allow these external domains, list them one domain name per line.
Please note that allowing external domains will require you to change and reapply your RUM snippet.
URL Groups are useful when you are reporting on RUM data for a domain. For example, monitoring all service pages versus all marketing or all blog pages. When URLs match a specific pattern, you can group them together to make reporting more accessible and presentable.
URL grouping allows for regex, so complex matching is possible.
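For illustration, here is a sketch of how regex patterns might group URLs; the patterns and paths are hypothetical, and the exact syntax accepted by the URL grouping field may differ:

```javascript
// Each group is a named regex; a URL path falls into the first
// group whose pattern matches it.
const groups = {
  blog: /^\/blog\/.*/,          // every blog URL
  product: /^\/products\/\d+$/, // numeric product pages, regardless of ID
};

function groupFor(path) {
  for (const [name, pattern] of Object.entries(groups)) {
    if (pattern.test(path)) return name;
  }
  return 'other';
}

console.log(groupFor('/blog/2023/launch')); // "blog"
console.log(groupFor('/products/53627'));   // "product"
console.log(groupFor('/about'));            // "other"
```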
You can exclude URL parameters from RUM to avoid tracking sorted or filtered content as unique pages. Add the parameter name without the question mark (?) or equals sign (=). Some examples are below:
- To filter ecommerce.com/products/53627tg7/most_reviewed=true use “most_reviewed”
- To filter myservice.com/service?device_type=mobile use “device_type”
- To filter myURL.com?client=firefox use “client”
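Conceptually, excluding a parameter collapses otherwise-unique URLs into a single entry, as this sketch shows (illustrative, not Uptime.com’s implementation):

```javascript
// Removing excluded query parameters makes filtered or sorted URLs
// count as the same page. Uses the standard WHATWG URL API.
const excludedParams = ['most_reviewed', 'device_type', 'client'];

function normalize(urlString) {
  const url = new URL(urlString);
  for (const param of excludedParams) url.searchParams.delete(param);
  return url.toString();
}

console.log(normalize('https://myservice.com/service?device_type=mobile'));
// https://myservice.com/service
console.log(normalize('https://myservice.com/service?page=2&device_type=mobile'));
// https://myservice.com/service?page=2
```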
RUM is useful for insights into the user experience, and as a result works well with synthetic monitoring. We highly recommend using Transaction monitoring in combination with RUM checks for several reasons:
- Know when critical transaction pathways fail (RUM will only show an increase in 4xx, 5xx, or JS errors, Transaction checks will alert you when a pathway fails)
- Insights into performance over time (The Transaction check’s performance metrics are the total of all steps, where RUM provides unique insights into each URL’s performance from server response to interactivity)
- Inform application management (RUM data combined with transaction uptime % can inform your team where to focus development efforts and resource allocation)
We highly recommend examining RUM data with different aggregation settings; looking beyond the median or average can reveal how many users may be affected by performance issues.
We also suggest utilizing URL groups as applicable to help manage the flow of data and make it easier to drill down into services or facets of your site.
Build RUM into your deployment process so that new URLs will always receive the snippet, and data is generated automatically.