In a recent blog post, I wrote about four fundamental metrics for system performance analysis. These are throughput, residence time, “weighted time” (the sum of all residence times in the observation period — the terminology is mine for lack of a better name), and concurrency.
I derived all of these metrics from two “even more fundamental” performance metrics, which I’ll call the “basic” metrics. That’s what this post is about. But it doesn’t stop there, because my list of four fundamental metrics isn’t the only thing you can derive from the two basic metrics. In fact, these two basic metrics encode a tremendous amount of information, from which you can compute lots of other things. And by the way, it’s a good thing the two basic metrics really are basic, because beyond “fundamental” and “basic,” I am about out of descriptive words.
The metrics I explained in the last post were all about observing events and noting the point in time at which they occurred. That’s the essence of performance: understanding events at points in time. And in a system that accepts, processes, and replies to requests (queries), the most basic events we want to timestamp are the arrival and completion of the requests. So that’s it. Ladies and gentlemen (drum roll please), the two basic performance metrics: the arrival time of each request, and the completion time of each request.
If we know the instant in time at which every request arrives and completes, and we observe the system over an interval (the observation period), we can compute the response time of each individual query: its completion time minus its arrival time (a duration). We can also compute the average concurrency, throughput, response time, and utilization during the observation period: see the previous post for how to compute these practically for free.
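To make that concrete, here is a minimal sketch in Python of how those aggregates fall out of nothing but arrival and completion timestamps. The pairing of arrivals with completions, and the treatment of utilization as the fraction of the window with at least one request in flight, are simplifying assumptions for illustration:

```python
def summarize(requests, t_start, t_end):
    """requests: list of (arrival, completion) timestamps, in seconds.
    For simplicity, only requests that both arrive and complete inside
    the observation window are counted."""
    T = t_end - t_start
    completed = [(a, c) for a, c in requests if a >= t_start and c <= t_end]
    if not completed:
        return 0.0, 0.0, 0.0, 0.0
    response_times = [c - a for a, c in completed]

    throughput = len(completed) / T                 # completions per second
    avg_response = sum(response_times) / len(completed)
    weighted_time = sum(response_times)             # total residence time
    avg_concurrency = weighted_time / T             # Little's law: N = X * R

    # Utilization here: fraction of the window with at least one request in flight.
    events = sorted([(a, 1) for a, _ in completed] + [(c, -1) for _, c in completed])
    busy, in_flight, last_t = 0.0, 0, t_start
    for t, delta in events:
        if in_flight > 0:
            busy += t - last_t
        in_flight += delta
        last_t = t
    utilization = busy / T

    return throughput, avg_response, avg_concurrency, utilization
```

Notice that the average concurrency is just the "weighted time" from the previous post divided by the length of the window.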
However, we can also compute a number of other very interesting things, because we have the atomic data, not just the aggregated data that I showed how to maintain cheaply in the previous post. Here are some things we can do:
I could actually go on. There’s just so much you can do with arrival and completion times. Some things require you to know which arrivals and completions belong in pairs, but you can do a lot even without knowing that. Maybe some people can post their favorite techniques in the comments.
I’ll show you one technique that is interesting, and doesn’t even require that you know which arrivals and completions are paired up. This is item #2 in my numbered list above. Let’s suppose that arrivals are uncorrelated and randomly distributed, and service times are exponentially distributed. This is typical. In other words, there will be a steady but variable “white noise” of incoming requests, and most requests will take about the same time to process and send back the response, with a “long tail” of requests that take longer than normal to complete. Suppose that the typical response time is under a millisecond — maybe it’s 50 microseconds, maybe a bit longer. This is common in MySQL servers I work with when they’re running a simple read-mostly workload with most data in memory.
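If you want to play along without a real server, it's easy to cook up data matching those assumptions. Here's a tiny sketch; the 20k-per-second rate and 50-microsecond mean service time come straight from the description above, and the no-queueing simplification (completion is simply arrival plus service time) is mine:

```python
import random

def simulate(rate=20_000, mean_service=50e-6, duration=10.0, seed=1):
    """Poisson ("white noise") arrivals with exponentially distributed service times."""
    random.seed(seed)
    t, requests = 0.0, []
    while t < duration:
        t += random.expovariate(rate)                  # exponential inter-arrival times
        service = random.expovariate(1 / mean_service) # exponential service times
        requests.append((t, t + service))              # (arrival, completion)
    return requests
```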
If we slice the arrivals and completions into buckets, a second at a time, and count how many there are in each bucket, we should see roughly the same number of arrivals and completions, would you agree? Suppose the server handles 20k requests per second. A few of the requests will arrive in one second and complete in the next second, so at the end of the interval we’ll see a couple of extra arrivals, but at the start of the interval we’ll also see about the same number of extra completions from the previous interval. And anyway, the number of requests that span the transition between seconds will be small.
So if we count the arrivals and completions in each second, and plot them as curves in a time series, the two curves should match pretty closely. What if we narrow the observation windows and aggregate every tenth of a second? It’ll probably look pretty similar. But if we continue to aggregate at a finer and finer level, at some point two things will happen: 1) the curves will become jaggedly different from each other due to the random differences in completion times, caused by differences in response times; and 2) the curves should still look roughly like each other, but the completions will be sort of an echo of the arrivals, lagged by about the amount of the typical response time (which will usually be clustered tightly, given that this is an exponential distribution).
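Here's a sketch of that bucketing step. The bucket width is the knob to turn; the input happens to be (arrival, completion) pairs, but two unpaired streams of timestamps work just as well, as I said:

```python
from collections import Counter

def bucket_counts(requests, width=1.0):
    """Count arrivals and completions per time bucket of the given width (seconds)."""
    arrivals = Counter(int(a // width) for a, _ in requests)
    completions = Counter(int(c // width) for _, c in requests)
    first = min(min(arrivals), min(completions))
    last = max(max(arrivals), max(completions))
    buckets = range(first, last + 1)
    times = [b * width for b in buckets]
    arrival_counts = [arrivals.get(b, 0) for b in buckets]
    completion_counts = [completions.get(b, 0) for b in buckets]
    return times, arrival_counts, completion_counts

# Plotting the two curves at widths of 1, 0.1, and 0.01 seconds shows the
# behavior described above, e.g.:
#   times, arr, com = bucket_counts(simulate(), width=0.01)
```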
And now for the interesting part: what if there are parts of the data where this isn’t the pattern we see?
Let’s look at some real data and see what this looks like. I’ve grabbed some TCP packets from a real server, and merely classified the packets as inbound or outbound based on the IP address. I haven’t cleansed them, or ensured that I have a single outbound packet for every inbound one, or anything like that. All I’m working with here are packet counts and directions, and the timestamps. I’ll pretend that an inbound packet is an arrival, and that an outbound packet is a completion. This isn’t really true, but after you see the plots, you’ll agree that it’s close enough for government work. First, I’ll aggregate arrivals and completions in one-second buckets:
It’s a little hard to see because of the pixelation, but the curves follow each other pretty closely, with the red peeking out from behind the green here and there. If I aggregate this data in tenths of a second, I get the following plot:
That’s getting pretty crowded, but just trust me when I say that you can zoom into it and the curves still match pretty closely. If I aggregate it by hundredths of a second, the plot is completely unreadable unless I zoom in on it. I’ll spare you the zoomed-out look and simply zoom into one particular region that I found to have something interesting:
You can clearly see that there is a spike of inbound packets, which by itself might not be noteworthy. What’s interesting is that the outbound packets don’t follow the same spike. Instead, there’s a spike of outbound packets a few hundredths of a second later. This is the “echo” effect that I mentioned before.
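For what it's worth, the data preparation behind these plots is about as simple as it sounds, since all I used were timestamps and directions. Here's a rough sketch; the tcpdump-style text input and the server address are assumptions for illustration:

```python
SERVER_IP = "10.0.0.5"   # hypothetical; the real server's address goes here

def classify_packets(lines, server_ip=SERVER_IP):
    """Split tcpdump -tt -n style lines into arrival and completion timestamps,
    based purely on whether the source IP is the server's."""
    arrivals, completions = [], []
    for line in lines:
        fields = line.split()
        if len(fields) < 5 or fields[1] != "IP":
            continue
        ts = float(fields[0])
        src = fields[2].rsplit(".", 1)[0]   # strip the port from "a.b.c.d.port"
        if src == server_ip:
            completions.append(ts)          # outbound packet, treated as a completion
        else:
            arrivals.append(ts)             # inbound packet, treated as an arrival
    return arrivals, completions
```

From there it's the same bucketing as before, just applied to two separate lists of timestamps instead of pairs.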
What are we looking at? It might seem that this is a small server stall due to a lock or some other queue, followed by the logjam clearing out, but that’s not really what happened. The server didn’t stall, it merely had a number of queries with longer service times arrive in a clump. If it had stalled due to some lock, the outbound packets would have actually dropped off sharply, leaving a big notch in the green curve, followed by a spike in the green curve. If the stall was long enough, the red curve (inbound) would have also dropped off sharply, because all the application servers that would normally be sending queries to the server would be stopped, waiting for existing queries to complete. That didn’t happen.
What probably happened here was something like a small cache stampede. The application sent a burst of extra queries (not just the normal number of queries), and these ran longer than usual. This is not evidence of a misbehaving server, but rather looks more typical of a misbehaving application.
We see this echo effect because the aggregation interval is a lot shorter than the response time of those queries. And we don’t see it in the rest of the curve because the typical response time is a lot shorter than a hundredth of a second. We’re straddling the two worlds with the aggregation interval. So as we narrow the aggregation interval, we’ll start to find the place where we get some “static”, sort of like de-tuning the radio just a little bit (if you remember analog radios… ah, when you could build your own from a Radio Shack kit).
What does this mean? It means that I need to go back to the customer and ask whether they care. But in general, most applications are going to show this kind of pattern if you look closely enough. Most workloads aren’t completely uniform, and as you slice them more finely, you see jagged edges. Most servers have micro-stalls (which I didn’t show here), and all you have to do is look closely enough to see them. One question is whether it’s bad enough to matter. Real systems have queueing — it’s just the way the world is. Arrivals clump up, response times vary, and queueing happens. You can certainly obsess over this too much if you want to. On the other hand, maybe there are some interesting questions to ask. For example, can we discover these aberrations with a program, instead of by looking at a graph? This is often harder than it seems. But if we can do that, is there some benefit in tracking the roughness and echo-effect of the graph over time, as a performance early warning? Perhaps there is.
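Just to sketch one idea: the running difference between arrivals and completions is roughly the number of requests in flight, and it spikes wherever completions lag behind a burst of arrivals. The threshold below is a made-up number; in practice you'd probably compare against a rolling baseline instead:

```python
def find_echoes(times, arrival_counts, completion_counts, threshold=50):
    """Flag buckets where the running arrival/completion imbalance (a rough
    proxy for in-flight requests) exceeds a threshold."""
    backlog, suspicious = 0, []
    for t, a, c in zip(times, arrival_counts, completion_counts):
        backlog += a - c
        if backlog > threshold:
            suspicious.append((t, backlog))
    return suspicious

# e.g. find_echoes(*bucket_counts(requests, width=0.01))
```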
I won’t dig deeply into it in this blog post, because it’s too long already. My intention was to show just one of the things you can do with arrival and completion data (actually, just an approximation of that data, not even the real thing), without even doing simple things such as figuring out which arrivals and completions are paired and belong together. And I wanted to show that a huge amount of information can be extracted from these simple input metrics.
I would be remiss if I didn’t also mention that there’s a lot of information we’re still lacking, which we’d often like to have. What was the query, for example? With the text of the SQL in hand, we might be able to answer questions about what’s happening in that little spike. We could also classify queries together and analyze the classes of queries, rather than lumping all requests into one bucket. These are the types of tasks that mk-query-digest is good at doing, and there is merit in gathering more data and analyzing a richer dataset. But it’s nice to know that even with a very simple set of data, we can learn a lot of complex and useful things about a system. Arrival and completion events are the only two things you need to know to derive pretty much everything else about a system’s performance (via measuring response times), scalability (via the four fundamental metrics I mentioned in the previous blog post), and much more.
If you come to Percona Live in New York in a few weeks, I’ll be speaking about this — and I’ll go far beyond where I left off in this blog post.