LINQ to SQL: Reusing DataContext

I have a number of static methods that perform simple operations like insert or delete a record. All these methods follow this template of using:

public static UserDataModel FromEmail(string email)
{
    using (var db = new MyWebAppDataContext()) 
    {
        db.ObjectTrackingEnabled = false;
        return (from u in db.UserDataModels
                where u.Email == email
                select u).Single();
    }
}

I also have a few methods that need to perform multiple operations that use a DataContext:

public static UserPreferencesViewModel Preferences(string email)
{
    return UserDataModel.Preferences(UserDataModel.FromEmail(email));
}

private static UserPreferencesViewModel Preferences(UserDataModel user)
{
    using(var db = new MyWebAppDataContext()) 
    {
        var preferences = (from u in db.UserDataModels
                          where u == user
                          select u.Preferences).Single();

        return new UserPreferencesViewModel(preferences);
    }
}

I like that I can divide simple operations into faux stored procedures in my data models with static methods like FromEmail(), but I'm concerned about the cost of Preferences() opening two connections (right?) via its two using DataContext statements.

Do I need to be? Is what I'm doing less efficient than using a single using(var db = new MyWebAppDataContext()) statement?


If you examine those "two" operations, you might see that they could be performed in one database roundtrip. Minimizing database roundtrips is a major performance objective (second to minimizing database I/O).
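
For example, here is a minimal sketch of that idea, reusing the MyWebAppDataContext and model types from the question (the method name PreferencesByEmail is just illustrative): the email lookup and the preferences lookup collapse into a single query on a single DataContext.

public static UserPreferencesViewModel PreferencesByEmail(string email)
{
    // One DataContext, one query, one database roundtrip.
    using (var db = new MyWebAppDataContext())
    {
        db.ObjectTrackingEnabled = false;   // read-only work, so skip change tracking

        var preferences = (from u in db.UserDataModels
                           where u.Email == email
                           select u.Preferences).Single();

        return new UserPreferencesViewModel(preferences);
    }
}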

If you have multiple DataContexts, they each view the same record differently. Normally, object tracking requires that the same instance is always used to represent a single record, but if you have two DataContexts, each does its own object tracking on its own instances.

Suppose the record changes between DC1 observing it and DC2 observing it. In this case, the record will not only have two different instances, but those instances will have different values. It can be very challenging to express business logic against such a moving target.
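
A small sketch of what that looks like, assuming the question's MyWebAppDataContext and an illustrative email value: the same row fetched through two contexts yields two distinct tracked instances, which can also disagree about the record's values if the row changed between the two queries.

using (var db1 = new MyWebAppDataContext())
using (var db2 = new MyWebAppDataContext())
{
    var fromDb1 = db1.UserDataModels.Single(u => u.Email == "someone@example.com");
    var fromDb2 = db2.UserDataModels.Single(u => u.Email == "someone@example.com");

    // Two separate identity maps: each context tracks its own instance.
    bool sameInstance = ReferenceEquals(fromDb1, fromDb2);   // false

    // If the row was updated between the two queries, the two instances
    // may also hold different values for the same record.
}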

You should definitely retire the DataContext after the unit of work, to protect yourself from stale instances of records.


Normally you should use one context for one logical unit of work, so have a look at the unit of work pattern, e.g. http://dotnet.dzone.com/news/using-unit-work-pattern-entity
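
As a rough sketch of one context per unit of work (the RenameUser method, Name property, and the specific edit here are only illustrative, not taken from the question): create the DataContext at the start of the logical operation, do all the related work with it, commit once, and dispose it.

public static void RenameUser(string email, string newName)
{
    using (var db = new MyWebAppDataContext())   // the unit-of-work boundary
    {
        // Object tracking stays enabled (the default) so SubmitChanges can detect edits.
        var user = (from u in db.UserDataModels
                    where u.Email == email
                    select u).Single();

        user.Name = newName;        // hypothetical property, for illustration only

        db.SubmitChanges();         // one commit for the whole unit of work
    }
}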


Of course there is some overhead in creating a new DataContext each time, but it's good practice to do as Ludwig stated: one context per unit of work.

It uses connection pooling, so it's not too expensive an operation.


I also think creating a new DataContext each time is the correct way, but this link explains different approaches for handling the data context: Linq to SQL DataContext Lifetime Management


I developed a wrapper component that uses an interface like:

public interface IContextCacher {
    DataContext GetFromCache();
    void SaveToCache(DataContext ctx);
}

And I use a wrapper to instantiate the context: if it exists in the cache, it's pulled from there; otherwise, a new instance is created and pushed to the save method, so all future calls get the value from the getter.

The actual caching mechanism depends on the type of application. For instance, an ASP.NET web application could store the context in the Items collection, so it's alive for the request only. A Windows app could pull it from some singleton collection. It could be whatever you want under the covers.
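
For the ASP.NET case, a sketch of such an implementation might store the context in HttpContext.Current.Items so it lives only for the current request (the class and key names here are arbitrary):

using System.Data.Linq;
using System.Web;

// Per-request cacher: the DataContext is shared within one HTTP request
// and discarded when the request ends.
public class RequestContextCacher : IContextCacher
{
    private const string Key = "__RequestDataContext";

    public DataContext GetFromCache()
    {
        return HttpContext.Current.Items[Key] as DataContext;
    }

    public void SaveToCache(DataContext ctx)
    {
        HttpContext.Current.Items[Key] = ctx;
    }
}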
