using SQL server to store data

https://www.devze.com 2023-02-11 21:26 Source: web

I'm currently using SQL Server 2008 in my project to store and fetch data, and it has worked well so far: I can fetch 20000 records in less than 50 ms (JSON). But I'm facing a problem with inserts. My project needs to insert something like 100000 records every minute, and that seems to be very slow with SQL Server.

I've tried another database (a NoSQL DB), MongoDB, which is much faster at storing data (5 s compared to 270 s for SQL Server) but not as fast as SQL Server at fetching (20000 records => 180 ms).

So I'm asking whether there is any way to make SQL Server faster at storing, or MongoDB faster at fetching (I'm not an expert in MongoDB; I only know the basics).

public static void ExecuteNonQuery(string sql)
{
    // Dispose the connection and the command deterministically;
    // the original leaked the SqlCommand and leaked the connection
    // if Open() threw before reaching the try block.
    using (SqlConnection con = GetConnection())
    using (SqlCommand cmd = new SqlCommand(sql, con))
    {
        con.Open();
        cmd.ExecuteNonQuery();
    }
}

SQL's Insert function

public IEnumerable<T> GetRecords<T>(System.Linq.Expressions.Expression<Func<T, bool>> expression, int from, int to) where T : class, new()
{
    return _db.GetCollection<T>(collectionName).Find<T>(expression).Skip(from).Limit(to).Documents;
}

Mongo's Select function ( MongoDB 1.6 )

Update: the data structure is (int) Id, (string) Data


I guess that you are executing each insert in a transaction of its own (an implicit transaction is created if you do not provide one explicitly). Because SQL Server must ensure each transaction is durably committed to disk, every transaction carries a very significant overhead.

To get things to go faster, perform many inserts (try a thousand or so) in a single ExecuteNonQuery() call, and wrap the batch in one explicit transaction so the commit cost is paid once instead of once per row. Also keep the connection open across the batch rather than opening and closing it for each insert.
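The per-row-commit cost can be demonstrated with any transactional database. A minimal sketch using Python's built-in sqlite3 (not SQL Server, but the same principle applies):

```python
import sqlite3

# In-memory database purely for illustration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, data TEXT)")

rows = [(i, f"payload-{i}") for i in range(1000)]

# Slow pattern: each insert committed on its own, paying the
# durability overhead once per row.
for row in rows[:10]:
    con.execute("INSERT INTO records VALUES (?, ?)", row)
    con.commit()

# Fast pattern: many inserts inside one explicit transaction,
# committed once at the end.
with con:  # opens a transaction, commits on clean exit
    for row in rows[10:]:
        con.execute("INSERT INTO records VALUES (?, ?)", row)

count = con.execute("SELECT COUNT(*) FROM records").fetchone()[0]
print(count)  # 1000
```

Timing the two loops over a file-backed database makes the difference obvious: the committed-per-row loop is typically orders of magnitude slower.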


You should have a look at the SqlBulkCopy Class http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlbulkcopy.aspx
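SqlBulkCopy streams a whole set of rows to SQL Server in one bulk operation rather than issuing individual INSERT statements. The same load-many-rows-in-one-call pattern can be sketched with sqlite3's executemany (an illustration of the idea, not the SqlBulkCopy API itself):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, data TEXT)")

# Matches the question's structure: (int) Id, (string) Data.
rows = [(i, f"payload-{i}") for i in range(100_000)]

# One call over the whole batch instead of 100,000 separate statements.
con.executemany("INSERT INTO records VALUES (?, ?)", rows)
con.commit()

n = con.execute("SELECT COUNT(*) FROM records").fetchone()[0]
print(n)  # 100000
```

In .NET, the equivalent would be filling a DataTable (or implementing IDataReader) and calling SqlBulkCopy.WriteToServer on it.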


MongoDB is very fast on reads and writes. 50k reads and writes per second is doable on commodity hardware, depending on the data size. In addition, you always have the option to scale out with sharding and replica sets, but as noted: 20k operations per second is nothing for MongoDB.


Generally, the speed of inserting data into a database is a function of the complexity of the operation.

If your inserts are significantly slow, that points to an optimisation problem with the inserts. Identify exactly what SQL INSERT statements your program is generating, then use the database's EXPLAIN facility to see what operations the underlying database performs. This often gives you a clue as to how to change your setup to speed these operations up.
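As a concrete illustration of reading a plan, here is sqlite3's EXPLAIN QUERY PLAN used to check whether a lookup hits an index (SQL Server's equivalents are SET SHOWPLAN_TEXT or the graphical execution plan):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE records (id INTEGER, data TEXT)")

# Without an index, the lookup is a full table scan
# (plan detail reads e.g. "SCAN records").
plan_before = con.execute(
    "EXPLAIN QUERY PLAN SELECT data FROM records WHERE id = ?", (42,)
).fetchone()[-1]
print(plan_before)

con.execute("CREATE INDEX idx_records_id ON records(id)")

# With the index, the plan becomes an index search
# (e.g. "SEARCH records USING INDEX idx_records_id (id=?)").
plan_after = con.execute(
    "EXPLAIN QUERY PLAN SELECT data FROM records WHERE id = ?", (42,)
).fetchone()[-1]
print(plan_after)
```

Note the exact plan wording varies between SQLite versions; what matters is spotting SCAN versus an index SEARCH. For inserts, the plan also reveals per-row work such as index maintenance and trigger execution.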

It might mean you have to change your database, or it might mean batching your inserts into a single call rather than inserting each item separately.

I see you are setting up and closing the connection each time; this takes significant time in itself. Try using a persistent connection.

