The RootMetrics 2nd Half 2014 Mobile Network Performance Review - FAQ

We’ve included below some of the most common questions about what we test, how we test, and why we test. For even more information, please visit our methodology page.

What’s a RootScore Report?

RootScore Reports are part of a broad suite of free tools that RootMetrics offers to help you make more informed mobile decisions and improve the quality of your mobile experience. RootScore Reports provide an in-depth, independent, and consumer-focused look at network performance in the United States, Canada, and the UK.

Why do you create RootScore Reports?

We create RootScore Reports to improve mobile networks for you, the consumer. We’re a consumer-first company that believes better mobile decisions and improved network performance are built from accurate, unbiased, and consumer-focused measurements of how you actually experience a mobile network on a daily basis. To learn more about how we’ve set the standard for mobile performance testing, visit our standards page.

What exactly is a RootScore?

We rely on our smartphones just as much as you do, and we like easy-to-understand marks of performance. That’s why we do all the heavy, in-depth testing and then distill the results as simply as possible. A RootScore translates thousands, or even millions, of complex data points into a single clear mark of performance, designed to reflect a consumer’s experience of the network. It’s simple: the higher the score, the better the performance.

A good Overall RootScore means a good user experience. It’s that simple. Using an educational analogy, think of RootScores like you would a final grade in a semester-long course: scores approaching the upper limit (100) indicate extraordinary performance, like receiving an “A” grade at the end of the semester. Scores approaching the lower limit (0) represent network performance that would be clearly unacceptable for everyday consumer use, similar to receiving a poor grade at the end of the semester.

Just as a final grade in a semester-long course is a function of performance across multiple exams, no single test determines RootScore results for any performance category; RootScores are calculated from multiple tests that are weighted according to their impact on a user’s experience. RootScore Reports give you a detailed look at how the networks compare across six categories that are important to the consumer mobile experience:

Overall Performance (a combination of results from data, call, and text testing)

Network Reliability (a holistic look at reliability across all of our testing)

Network Speed (a holistic look at speed across data and text testing)

Data Performance

Call Performance

Text Performance
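As a rough illustration of how multiple weighted tests roll up into a single score, consider the sketch below. The category names, weights, and numbers are invented for demonstration; they are not RootMetrics’ actual (unpublished) formula.

```python
# Illustrative sketch only: the weights below are invented for demonstration
# and are NOT RootMetrics' actual scoring formula.

def overall_rootscore(category_scores, weights):
    """Combine per-category scores (0-100) into one weighted overall score."""
    total_weight = sum(weights[c] for c in category_scores)
    return sum(category_scores[c] * weights[c] for c in category_scores) / total_weight

# Hypothetical per-category results for one network in one market:
scores = {"data": 90.0, "call": 88.0, "text": 95.0}
weights = {"data": 0.5, "call": 0.3, "text": 0.2}  # assumed, for illustration

print(overall_rootscore(scores, weights))
```

Whatever the real weights are, the principle is the same: categories that matter more to a user’s experience pull the overall score more strongly.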

Keep in mind that not all mobile users are created equal: Do you use your smartphone mainly for uploading pictures or streaming music? Our Data RootScore might be more important to you than the other categories. After all, everyone is different, with different mobile needs. That’s why we provide test summaries across multiple categories and across multiple areas of your daily life (see next question).

How do you measure network reliability and speed?

Reliability and speed are the most important aspects of your mobile experience and have always been the two fundamental components of our RootScore. To show you how the networks perform in these key areas of mobile usage, we’ve created reliability and speed indices that offer a clear, summary view of network performance across all test categories (data, call, and text). These indices illuminate network performance across the entirety of the mobile experience. To learn more about how we’ve set the standard for mobile performance testing, visit our standards page.

Can I just roll up results from your metro testing to determine which network is best at the state or national level?

That’s a good, important question. Each type of RootScore Report requires its own unique test sampling scheme. After all, California is more than Los Angeles and San Francisco. Washington is more than Seattle. Massachusetts is more than Boston. You get the idea.

To paint a complete picture of mobile network performance at the state level, we test much more than just the major urban areas; we also collect test samples from non-urban locations to provide a complete look at performance for the broader area. The same theory applies to our national results: each area of testing (Metro, State, and National) must include samples representative of that entire level.

What does this mean? A network that excels in our Metro RootScore testing won’t necessarily do well at the state or national level. Think about it this way: it’s possible for one network to win the Overall RootScore Award in every Metro RootScore Report within a particular state, yet not win the State Overall RootScore Award.
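A small numeric sketch makes this metro-versus-state point concrete. All scores and population weights below are invented for illustration only:

```python
# Invented numbers showing how a network can win every metro yet lose the
# state: non-urban areas also carry weight in the state-level result.

def state_score(area_scores, area_weights):
    """Population-weighted average of per-area scores."""
    total = sum(area_weights.values())
    return sum(area_scores[a] * area_weights[a] for a in area_scores) / total

weights = {"metro_1": 0.4, "metro_2": 0.3, "non_urban": 0.3}  # assumed shares

network_a = {"metro_1": 95, "metro_2": 94, "non_urban": 60}  # wins both metros
network_b = {"metro_1": 90, "metro_2": 91, "non_urban": 92}  # weaker in metros

print(state_score(network_a, weights))  # 84.2
print(state_score(network_b, weights))  # 90.9
```

Network A beats Network B in both metros, yet B’s stronger non-urban performance gives it the higher state-level result.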

The bottom line: The different areas we test provide complementary—but not identical—looks at mobile network performance. We believe that rather than just give you a look at mobile network performance across one of these levels, you deserve the full picture. After all, you are part of a metro, a state, and the nation. You should know how the networks perform across each of these areas. Using our Metro, State, and National RootScore Reports together gives you a comprehensive view of mobile network performance. For more information about how and where we test network performance, visit our Methodology page.

What do you test?

We test the activities that you use your smartphone for on a daily basis, like making calls, sending email, browsing webpages, using apps, and sending texts. The most important aspects of your mobile experience are reliability and speed. Our tests look at how reliably you can connect to a network and how reliably you can stay connected to the network once a connection is established. We also test how quickly you can connect to a network and how quickly you can complete your tasks once that connection is established. We apply this testing framework to benchmark network performance across thousands of data, call, and text test samples.
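As an illustration of this framework, the sketch below summarizes a batch of hypothetical test samples into a connection success rate (reliability) and a median task time (speed). The field names and numbers are assumptions for demonstration, not RootMetrics’ actual data schema.

```python
# Illustrative sketch: reducing raw test samples to a reliability rate and a
# speed figure. Sample fields and values are hypothetical.
from statistics import median

def summarize(samples):
    """samples: dicts with 'connected' (bool) and 'seconds' (float or None)."""
    connect_rate = sum(s["connected"] for s in samples) / len(samples)
    task_times = [s["seconds"] for s in samples if s["connected"]]
    return connect_rate, median(task_times)

samples = [
    {"connected": True, "seconds": 1.2},
    {"connected": True, "seconds": 0.9},
    {"connected": False, "seconds": None},   # failed connection attempt
    {"connected": True, "seconds": 1.5},
]
rate, med = summarize(samples)
print(rate, med)  # 0.75 1.2
```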

Our testing encompasses a wide array of real-world situations that people might experience while using their mobile devices. Examples include high and low network load situations, variations in speed from stationary to freeway, poor to excellent coverage, and indoor to open-air signal situations. We test competing networks head-to-head in these situations to remove bias. For more information about how we test network performance, visit our Methodology page.

How does your testing actually reflect a consumer’s mobile experience?

“Consumer experience” is a hot topic in mobile performance reporting, and we think that’s great. But while other reports like to claim they reflect the consumer experience, for us it’s more than a buzzword; it’s the guiding star for everything we do. Among other things, truly capturing the consumer experience means that we:

Test with the same smartphones you use: We only use unmodified, off-the-shelf smartphones purchased from mobile network operator stores.

Test in the same places you use your phone: We test indoors, outdoors, while driving, in small towns, and in major metro areas.

Test the same ways you use your smartphone: We test network reliability and speed during data, call, and text performance.

Test at the same times you use your smartphone: We test 24/7, weighting results for periods when consumer usage is typically at its highest.
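To show what weighting results by time of day means in practice, here is a minimal sketch. The hourly weights and sample values are invented; RootMetrics does not publish its actual weighting scheme.

```python
# Sketch of time-of-day weighting. The usage weights below are assumptions:
# here, evening peak hours simply count double.

def weighted_mean(samples, usage_weight):
    """samples: list of (hour, value); usage_weight: maps hour -> weight."""
    total_w = sum(usage_weight[h] for h, _ in samples)
    return sum(v * usage_weight[h] for h, v in samples) / total_w

# Hypothetical: hours 17-22 (peak consumer usage) weighted 2x.
usage_weight = {h: (2.0 if 17 <= h <= 22 else 1.0) for h in range(24)}

# (hour of day, measured download speed in Mbps) - invented values:
samples = [(3, 40.0), (18, 20.0), (20, 25.0)]
print(weighted_mean(samples, usage_weight))  # 26.0
```

Note how the fast overnight sample counts for less than the two slower peak-hour samples, pulling the summary toward what users see when the network is busiest.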

How many tests do you perform?

We’re thorough with our testing. For the 2014 Mobile Network Performance Year in Review, we collected over 5,700,000 test samples.

Where do you test and how do you decide where to test?

Everything we do is based on objectivity. The boundaries of the areas we test are defined by governments and official agencies—not by RootMetrics. For our State and National RootScore studies, we test data, call, and text performance indoors and while driving in United States Census Places in all 50 states; census places are used to designate areas within each state where people live. Samples are collected in the 125 most populous metropolitan markets across the United States, as defined by the U.S. Census Bureau, and we also test network data performance within the 50 busiest U.S. airports, as designated by the FAA. It’s important to note that our State and National RootScore results are weighted by population size, so larger, more populous cities contribute more to our calculations than less populated, more rural towns.

When do you test?

We conduct tests nearly every week during the year and visit each area we test twice per year. Within each location we test, we ensure that testing is as comprehensive as possible. Since you use your phones at different times and in different locations, RootMetrics tests across all hours of the day and night, with samples collected indoors, outdoors, and while driving between locations.

Again, everything we do is designed with objectivity in mind. To prevent bias in our sample collection, RootMetrics utilizes a sampling methodology that randomly selects the indoor locations used for testing; drive testing takes place during travel between these random indoor locations. To measure network performance at the State and National levels, test locations are randomly selected within each state.
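As a minimal sketch of what unbiased random selection looks like, the snippet below draws indoor test locations without replacement using Python’s standard library. The candidate list is hypothetical; the real sampling frame (census places and the venues within them) is RootMetrics’ own and is not shown here.

```python
# Sketch of unbiased location sampling. The candidate venues are invented;
# a seeded generator is used only to make the example reproducible.
import random

def pick_indoor_locations(candidates, n, seed=None):
    """Randomly select n indoor test locations without replacement."""
    rng = random.Random(seed)
    return rng.sample(candidates, n)

candidates = ["cafe", "mall", "library", "grocery", "hotel", "office"]
print(pick_indoor_locations(candidates, 3, seed=42))
```

Because every candidate has an equal chance of selection, no network can anticipate (or optimize for) the specific venues that end up in a test round.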

Which mobile network operators do you test?

We test AT&T, Sprint, T-Mobile, and Verizon.

What “test equipment” do you use to test the mobile network operators?

That’s easy: we use only unmodified smartphones that are purchased off the shelf at mobile network operator stores, just as you would. We never alter the phones with external antennas or use any other non-standard equipment. You don’t use those things, and neither do we. We also never “root,” jailbreak, or modify the phone’s software in any way. We regularly select from leading Android-based smartphones currently available to consumers. Why Android? It offers the most flexible platform for testing. For more information about how we test network performance, visit our Methodology page.

How do you decide which phones to use?

We select leading Android-based smartphones for each network available at the time of selection. During the selection process, RootMetrics benchmarks device models to determine the best commercially available phone model from each network in order to capture the best possible consumer experience on each particular network. Benchmarking models before testing helps remove limitations that can be caused by specific model/network interactions.

Our device selection process mirrors our RootScore Report testing: We use off-the-shelf handsets obtained directly from mobile network operators’ stores; we test in multiple geographic locations covering indoor, outdoor, and driving conditions; and we test data, call, and text performance. We analyze our benchmark results in the same fashion as our RootScore analysis in order to select the model for each network that will be used for our next round of reports. For more information about how we test network performance, visit our Methodology page.

Do you use different phones to test a network’s data, call, or text performance?

Absolutely not. We test data, call, and text performance with the same device. That’s what happens in real life, so that’s how we evaluate performance. In other words, we don’t use one phone to test a network’s data performance and another phone to test the same network’s call or text performance. That might seem like a strange thing to point out, but other companies test performance with separate devices for each category. We know that’s a bad idea: our testing has found reliability problems that surface when a phone must move directly between data, call, and text services, problems that isolated, single-service testing can miss. Again, this is how you use your phone in real life, so that’s how we test. For more information about how we test network performance, visit our Methodology page.

A flexible, evolutionary framework

RootMetrics continually re-examines our testing and scoring methodologies to ensure that they continue to reflect your experience as accurately as possible. When advances in mobile technology alter the landscape or consumer behavior changes markedly, we adjust our methodologies and scoring accordingly. Changes are made so that we continue to capture the true consumer experience.