Welcome to my blog series on Azure SQL DB performance. In this blog I’ll write about my findings with the Azure SQL DB Standard tier.
Click here for information about the set-up
Click here for the scripts I’ve used
Intro
This tier is more expensive than Basic, but with prices ranging from roughly 12 euros per month up to 3,723 euros you get a much wider range of spend and, with that, a much wider performance range. The Standard tier is suited for general purpose workloads and can be compared to the vCore-based General Purpose tier, except that the latter is provisioned in cores while the Standard tier works with DTUs. The concept of a DTU is a difficult one; the documentation describes it as a blend of CPU, memory, reads and writes. If you hit a limit on any of those, you'll be throttled. You can read more about the DTU model here.
Limits
The standard tier starts at 10 DTUs and goes up to 3,000. DTUs are made up of a magic mix of CPU, memory, read IOPS and write IOPS. An IOP (Input/Output oPeration) should be a 4 KB (disk cluster size) read or write. 10 DTUs translates to 40 KB/sec, going up to 12,000 KB/sec at the top end. As we're used to data pages, which are 8 KB in size, you could say these databases are able to pull 5 to 1,500 pages per second from disk. When there are simultaneous writes, you'll share that performance. At least, that's my interpretation of the IOP. For the DTU part, I'm still struggling to get a good grip on what it exactly is, beyond the magic mix.
The maximum storage starts at 250 GB and, once you get to S3, goes up to 1 TB. Standard and Basic are the only tiers that have limits on concurrent sessions. Basic can't handle more than 300 concurrent sessions; Standard starts at 600 and goes up to the maximum of 30,000 you'll find in all other database tiers.
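If you want to see which part of the DTU blend you're actually hitting when you get throttled, the database exposes its own resource usage. A minimal sketch; the percentages are relative to the limits of the SKU you're on at that moment:

```sql
-- Resource usage of the current database, in 15-second slices,
-- as a percentage of the limits for the current service objective (SKU).
SELECT TOP (20)
       end_time,
       avg_cpu_percent,        -- CPU part of the DTU
       avg_data_io_percent,    -- read/write IO part
       avg_log_write_percent,  -- log throughput part
       dtu_limit
FROM   sys.dm_db_resource_stats
ORDER BY end_time DESC;
```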
Resources provided
I've deployed all the Standard options and got the following results (this is the first script in the script file, where it checks the hardware; a sketch of such a query follows the table):
| SKU | GB Memory | CPU Cores | Max workers | Schedulers | Scheduler total count |
|---|---|---|---|---|---|
| Standard S0 | 1 | 1 | 12 | 1 | 10 |
| Standard S1 | 4 | 2 | 32 | 2 | 9 |
| Standard S2 | 4 | 2 | 32 | 2 | 9 |
| Standard S3 | 9 | 2 | 32 | 2 | 9 |
| Standard S4 | 11 | 2 | 12 | 2 | 9 |
| Standard S6 | 16 | 3 | 12 | 3 | 10 |
| Standard S7 | 32 | 6 | 12 | 6 | 14 |
| Standard S9 | 97 | 24 | 26 | 24 | 32 |
| Standard S12 | 152 | 104 | 26 | 44 | 115 |
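The exact script is in the scripts post, but a minimal sketch of a hardware check along these lines could look like this:

```sql
-- Rough hardware check: cores, memory, workers and schedulers as the
-- database engine reports them for the node your database landed on.
SELECT cpu_count,
       scheduler_count,
       scheduler_total_count,
       max_workers_count,
       physical_memory_kb / 1024 / 1024 AS physical_memory_gb
FROM   sys.dm_os_sys_info;
```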
If you’re looking at these numbers, there are quite a few weird things happening.
Memory and CPU increase slowly at the bottom end but go wild at the top. The number of schedulers, which in my opinion determines the amount of work the database can do, grows in a much more balanced way. Every scheduler is 'bound' to a CPU core, so the number of schedulers should match the number of cores. Yes, there are hidden schedulers, but those should be hidden from the user's view as well.
The number of workers, on the other hand, is a bit wonky. From 12 to 32, back to 12 and then to 26. When we look at query performance, there might be a relation between performance and the number of workers; the workers are the ones doing the work for the schedulers. Yet with fewer workers the schedulers still seem to get the work done. If my measurements are correct, you can get a lot of cores with fewer schedulers and even fewer workers. You can check the scheduler-to-worker relationship yourself, see the sketch below.
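A sketch of looking at the visible online schedulers and the workers attached to them:

```sql
-- Visible (user-facing) schedulers and the workers attached to them.
SELECT scheduler_id,
       cpu_id,
       status,
       current_workers_count,
       active_workers_count,
       current_tasks_count
FROM   sys.dm_os_schedulers
WHERE  status = 'VISIBLE ONLINE';
```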
One way of checking whether the workers and schedulers can change in your favour is to scale the SKU as far up as it will go, to see if you can get the database moved to another node, then scale back down to your preferred SKU and check the results again. It isn't pretty, but it might be quicker than submitting a call to Azure support.
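Scaling up and back down is a simple ALTER DATABASE away (run it while connected to the master database; the operation is asynchronous, so let it finish before you measure). A sketch, assuming a database called MyDatabase:

```sql
-- Scale up as far as the tier allows...
ALTER DATABASE [MyDatabase] MODIFY (SERVICE_OBJECTIVE = 'S12');

-- ...and once that has completed, scale back down to the SKU you actually want.
ALTER DATABASE [MyDatabase] MODIFY (SERVICE_OBJECTIVE = 'S2');
```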
Running queries
We're not using a database to sit around idle; we need to insert, read, delete and update data (in whatever order you fancy). In the code post you'll find the scripts I've used to do just that: insert a million rows, try to read a lot of stuff, delete 300,000 records and update 300,000. The reads, updates and deletes are randomised. Because of the large numbers, I think the results are usable and we can compare them between tiers and SKUs.
Insert 1 million records
I've inserted the records on all SKUs within this tier. At S0 this took 2 hours; at S12 it was done in a little under 4 minutes. The more DTUs you add, the more performance you get, but you pay more as well.
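The real script is in the scripts post; as a rough idea, inserting a million rows in batches against a hypothetical test table looks something like this:

```sql
-- Hypothetical test table; the real table definition is in the scripts post.
CREATE TABLE dbo.PerfTest
(
    Id        int IDENTITY(1,1) PRIMARY KEY,
    Payload   char(200) NOT NULL,
    CreatedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);

-- 100 batches of 10,000 rows = 1,000,000 rows.
DECLARE @batch int = 0;
WHILE @batch < 100
BEGIN
    INSERT INTO dbo.PerfTest (Payload)
    SELECT TOP (10000) REPLICATE('x', 200)
    FROM   sys.all_objects AS a
           CROSS JOIN sys.all_objects AS b;
    SET @batch += 1;
END;
```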

Wait types you’ll encounter most:
- PAGEIOLATCH_EX
- IO_QUEUE_LIMIT
- WRITELOG
- LOG_RATE_GOVERNOR
- PAGELATCH_SH
- PAGEIOLATCH_SH
To summarize these waits: the database had a hard time reading from disk, ran into its IO limit, needed time to write to the log file, ran into the DTU limits, waited for access to a data page in memory and waited for a page read to complete so it could read the structure.
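If you want to check your own top waits after a run, the database-scoped wait statistics will tell you; take a snapshot before and after a run and diff them for clean numbers. A sketch:

```sql
-- Top waits for this database since the statistics were last reset.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM   sys.dm_db_wait_stats
ORDER BY wait_time_ms DESC;
```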
The read and write latency was all over the place, but never above 100 ms.
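One way to check the read and write latency yourself is through the virtual file stats; a sketch:

```sql
-- Average read and write latency per database file, in milliseconds.
SELECT file_id,
       num_of_reads,
       io_stall_read_ms  / NULLIF(num_of_reads, 0)  AS avg_read_latency_ms,
       num_of_writes,
       io_stall_write_ms / NULLIF(num_of_writes, 0) AS avg_write_latency_ms
FROM   sys.dm_io_virtual_file_stats(DB_ID(), NULL);
```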
Select data
After inserting data, it was time to select some. The script I've used is intentionally horrible, to bring out the worst of the performance. I'm sure your scripts are much better. The S0 SKU took 419 hours to complete and has been omitted from the graph. Just like with the inserts, you see enormous gains at the low end, but the higher you go, the less extra performance each step brings. The S12 SKU finished in just over 40 minutes.
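The actual select script is in the scripts post; to give an idea of "intentionally horrible", think of non-SARGable predicates that force full scans, something like this sketch against the hypothetical test table from earlier:

```sql
-- Deliberately awful: functions and leading wildcards on the predicate columns
-- make every index useless, so the whole table is scanned each time.
SELECT COUNT(*)
FROM   dbo.PerfTest
WHERE  UPPER(Payload) LIKE '%X%'
   OR  Id % 7 = 3;
```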

The wait types:
- PAGEIOLATCH_EX
- SOS_SCHEDULER_YIELD
- IO_QUEUE_LIMIT
- PAGEIOLATCH_SH
- RESOURCE_GOVERNOR_IDLE
- PWAIT_EXTENSIBILITY_CLEANUP_TASK
- CXSYNC_PORT
- CXPACKET
- WRITELOG
- PAGELATCH_EX
- PAGELATCH_SH
Again a lot of waits on reading from disk or memory, limitations from different elements and even some machine learning (the PWAIT wait type). Because there was no database configuration at all, the CX parallelism waits were to be expected, but they only appeared from S7 and up.
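If you don't want to leave parallelism at its default, you can cap it per database with a database scoped configuration; a sketch:

```sql
-- Cap the degree of parallelism for this database to reduce CX* waits.
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 2;

-- Check the current setting.
SELECT name, value
FROM   sys.database_scoped_configurations
WHERE  name = 'MAXDOP';
```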
Again, latency was all over the place but never over 100 ms.
Delete data
After selecting data, I decided I'd had enough of the large amount and had to trim some of it off. So I deleted 300,000 records. This process is agonizingly slow at the lower SKUs; from S6 onward it's under a minute and the S12 finishes it off in 17 seconds.
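Again, the real script is in the scripts post; a sketch of deleting roughly 300,000 rows in small batches could look like this:

```sql
-- Delete ~300,000 rows in batches of 10,000 to keep transactions small.
DECLARE @rows int = 1, @deleted int = 0;
WHILE @deleted < 300000 AND @rows > 0
BEGIN
    DELETE TOP (10000)
    FROM   dbo.PerfTest
    WHERE  Id % 3 = 0;          -- crude stand-in for the randomised selection

    SET @rows = @@ROWCOUNT;     -- capture immediately after the DELETE
    SET @deleted += @rows;
END;
```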

The wait types:
- RESOURCE_GOVERNOR_IDLE
- PWAIT_EXTENSIBILITY_CLEANUP_TASK
- PVS_PREALLOCATE
- PAGELATCH_EX
- SOS_SCHEDULER_YIELD
- LOG_RATE_GOVERNOR
- WRITELOG
You're going to wait on the DTU limits most of the time, really. Yes, you also have to wait on access to the pages and on the version store, because accelerated database recovery is enabled. And the log, always the log.
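Accelerated database recovery is on by default in Azure SQL DB (and as far as I know you can't switch it off), but you can check it and keep an eye on the size of the persistent version store those PVS waits relate to; a sketch:

```sql
-- Confirm accelerated database recovery is on for this database.
SELECT name, is_accelerated_database_recovery_on
FROM   sys.databases
WHERE  name = DB_NAME();

-- Current size of the persistent version store.
SELECT DB_NAME(database_id)              AS database_name,
       persistent_version_store_size_kb
FROM   sys.dm_tran_persistent_version_store_stats;
```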
The disk latency was identical to the previous one.
Update data
The last job I let rip on the database was an update task: a script that updates 300,000 records. The S0 took almost 25 minutes, S4 and up went under a minute and the S9 was the quickest at 19 seconds.

Let's take a final look at the waits:
- PAGEIOLATCH_EX
- IO_QUEUE_LIMIT
- PAGEIOLATCH_SH
- RESOURCE_GOVERNOR_IDLE
- LOG_RATE_GOVERNOR
Again mostly waiting on access to the data pages and on the database DTU limitations. One thing to take into account is that most processes finished quite quickly, meaning some wait types might be over- or underrepresented for your use case.
Pricing
The weird thing is that pricing is shown in an inconsistent way. The documentation states that you pay for the DTUs allocated to your database, but the Azure Calculator gives a price per DTU that you can set for x amount of hours. To create some kind of consistent overview, I've set the pricing to non-stop usage for a whole month.

You can see the curve being the exact opposite of the performance curves: more performance requires a larger credit card. This is the point where you need to determine whether all the performance from this Standard tier is worth the cost. To keep this under control, I'd advise you to use Cost Management and its alerts. Besides the actual costs, you can create alerts on predicted costs as well, to prevent surprises. We've all been there.
Finally
Would I use this one for my use cases? Repository databases can live in an S2 without major problems, I think. It has good enough performance without breaking the bank. For larger data solutions, like a data warehouse, it hasn't got enough data space available and would cost way too much. But that's why there are other solutions available within the PaaS stack.
Thanks for reading! Next week, Premium.