Create a large dataset for speed testing

https://www.devze.com 2022-12-19 05:12 Source: web
I need a Microsoft SQL Server 2005 or above stored procedure that will create a large number of rows (example: one million) so that I can then try various things, like seeing how much slower SELECT * is compared to selecting each individual field name, or selecting from a view that selects from another view rather than selecting directly from the tables. Does that make sense?


If it is just the number of rows you want and you don't mind having the same content in each row, then you can do this in SQL Server Management Studio easily. Write your insert statement to insert a single row, then use:

GO 1000000

This will execute the batch the number of times specified after the GO statement.
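As a minimal sketch (the table name and column are hypothetical; substitute your own schema):

```sql
-- Hypothetical test table; adjust the columns to whatever you want to test.
CREATE TABLE dbo.SpeedTest
(
    ID int NOT NULL IDENTITY(1, 1) PRIMARY KEY,
    Payload varchar(50) NOT NULL
)
GO

-- Everything between the previous GO and the GO below is one batch;
-- the count after GO repeats that batch one million times.
INSERT dbo.SpeedTest (Payload) VALUES ('same content in every row')
GO 1000000
```

Note that this inserts one row per batch, so it is slow to run; it is simply the least-effort way to get a large row count from SSMS.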

If you need different data per row (or cannot have duplicate data because of indexes, etc.), then tools such as SQL Data Generator can help. They let you define the type of data that gets generated, so the tool produces realistic data.


I can tell you right now how much slower it is to perform SELECT * instead of SELECT specific_column_names. If the columns you are selecting are not covered by any index, it will make hardly any difference at all; if the columns you would normally be selecting are covered by an index, and the table contains any significant amount of data, it will be an order of magnitude slower, maybe worse.

Here's a quick and dirty example. First create the test schema and data:

CREATE TABLE #TestTable
(
    ID int NOT NULL IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
    Name varchar(50) NOT NULL,
    Age int NOT NULL
)

INSERT #TestTable (Name, Age)
    SELECT 'John', s1.number % 10 + 25          -- ages 25 through 34
    FROM master.dbo.spt_values s1               -- type 'P' holds numbers 0-2047
    CROSS JOIN master.dbo.spt_values s2
    WHERE s1.type = 'P' AND s2.type = 'P'
    AND s2.number < 20                          -- 2048 x 20 = 40,960 rows

CREATE INDEX IX_#TestTable_Age ON #TestTable (Age)

Now run these queries in SSMS with the actual execution plan turned on:

SELECT ID
FROM #TestTable
WHERE Age = 30

SELECT *
FROM #TestTable
WHERE Age = 30
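To put concrete numbers on the difference beyond the plan's cost percentages, you can also compare logical reads for the two queries (a sketch; run it in the same session that created #TestTable, and expect the exact read counts to vary with your data):

```sql
SET STATISTICS IO ON;

-- Covered query: Age is the index key and ID is the clustering key,
-- so the nonclustered index alone answers this - an index seek.
SELECT ID FROM #TestTable WHERE Age = 30;

-- SELECT * needs every column, so the optimizer falls back to
-- scanning the clustered index instead of seeking the narrow one.
SELECT * FROM #TestTable WHERE Age = 30;

SET STATISTICS IO OFF;
```

The Messages tab will show far fewer logical reads for the covered query than for the SELECT * version.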

The first SELECT is executed as an index seek, which on my machine is 7% of the total cost. On the second query, the optimizer decides that the IX_#TestTable_Age index isn't worth it and does a clustered index scan instead, using up 93% of the total cost, or 13 times as expensive as the non-SELECT * version.

If we force a nested loop key lookup, to mimic the absence of a clustered index or a very large clustered index, it gets even worse:

SELECT *
FROM #TestTable
WITH (INDEX(IX_#TestTable_Age))
WHERE Age = 30

This takes more than 100 times as long as the covering query. Compared to the very first query, the cost is simply astronomical.


Why I bothered to write all that information:

Before you start going out and "testing" things, you need to shake off the common misconception that the exact order in which you write your query statements, or irrelevant factors like views selecting from other views, actually makes any appreciable difference if your database is even remotely optimized.

Indexes are the first thing that matters in database performance; how you use them is the second. The way you write your query may matter - such as performing a SELECT * when your WHERE condition is on anything other than the clustered index, or using non-sargable functions like DATEPART in your WHERE clause - but for the most part, chucking a bunch of random data into a table without seriously thinking about how the table will actually be used is going to give you mostly meaningless results in terms of performance.
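To illustrate the sargability point (dbo.Orders and its index on OrderDate are hypothetical here):

```sql
-- Non-sargable: DATEPART must be evaluated for every row, so an
-- index on OrderDate cannot be used for a seek - the result is a scan.
SELECT OrderID
FROM dbo.Orders
WHERE DATEPART(year, OrderDate) = 2008;

-- Sargable rewrite: a range predicate on the bare column lets the
-- optimizer seek directly into the index on OrderDate.
SELECT OrderID
FROM dbo.Orders
WHERE OrderDate >= '20080101' AND OrderDate < '20090101';
```

Both queries return the same rows, but only the second one can use an index seek.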

Data generators are useful when you are planning a large project and need to perform scalability tests. If you are simply experimenting, trying to understand performance differences between different types of queries in an abstract sense, then I would have to say that you'll be better off just grabbing a copy of the Northwind or AdventureWorks database and banging around on that one - it's already normalized and indexed and you'll be able to glean meaningful information about query performance in an actual production database.

But even more importantly than that, before you even start to think about performance in a SQL database, you need to actually start reading about performance and understand what factors affect it. As I mentioned, the number one factor is indexing. Other factors include sort orders, selectivity, join types, cursor types, plan caching, and so on. Don't just go and start fooling around, thinking you'll learn how best to optimize a database.

Educate yourself before fumbling around. I would start with the slightly-dated but still comprehensive Improving SQL Server Performance article from Microsoft Patterns and Practices. Also read about Indexing Basics and Covering Indexes. Then go to sites like SQL Server Performance and try to absorb whatever you can from the articles.

Then, and only then, should you start playing around with large-scale test data. If you're still not completely sure why a SELECT * can hurt performance then it is way too early to be running tests.


Take a look at http://databene.org/databene-benerator. It's free, quick, provides realistic data and you have the option of using your own plugins.
