(note: I have labelled this C#, but I am using MonoTouch, so it might behave differently; I'm not too sure)
Here is my scenario: I have a List that persists throughout my app and holds the entire collection of objects. I then filter this data (via user choices in the app) and display the appropriate items.
The list gets updated by a call to the webservice, the way I do this is as follows:
void HandleWebServiceComplete(object sender, ItemRetreivedEventArgs e)
{
    // snip - error handling above this
    if (e.Result != null)
    {
        foreach (var item in e.Result)
        {
            if (!mainList.Contains(item))
                mainList.Add(item);
        }
        RefreshDisplayList(mainList);
    }
}
So obviously this checks every single item returned by the web service to make sure it doesn't already exist in the list (there are situations where it could). My question is: would it be better to use List.AddRange() and then check for duplicates in the list afterwards, or just carry on the way I'm going already?
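For reference, this is roughly what the AddRange-then-dedupe alternative I'm asking about would look like (a sketch with illustrative names; note that Distinct() relies on the item type's Equals/GetHashCode, or a custom IEqualityComparer&lt;T&gt;):

```csharp
using System.Collections.Generic;
using System.Linq;

List<int> mainList = new List<int> { 1, 2, 3 };
int[] fromService = { 3, 4, 5 }; // stand-in for e.Result

mainList.AddRange(fromService);          // duplicates temporarily allowed in
mainList = mainList.Distinct().ToList(); // one pass to strip them back out
// mainList is now { 1, 2, 3, 4, 5 }
```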
I definitely would not AddRange and then remove duplicates, as each removal will trigger a rebuild of the backing array. The way you are doing it is fine. I presume you are not actually having performance issues currently.
If you're not interested in the order of the items, you can just use a HashSet<> instead of a List<>. The UnionWith() method accepts a "range" (any IEnumerable<T>) and will do just what you want.
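A minimal sketch of that approach, assuming your item type has sensible value equality (otherwise pass an IEqualityComparer&lt;T&gt; to the HashSet constructor):

```csharp
using System;
using System.Collections.Generic;

class Demo
{
    static void Main()
    {
        var mainSet = new HashSet<string> { "a", "b" };
        var fromService = new[] { "b", "c", "d" }; // stand-in for e.Result

        // UnionWith adds every element not already present;
        // duplicates are skipped automatically, with no Contains() scan per item.
        mainSet.UnionWith(fromService);

        Console.WriteLine(string.Join(", ", mainSet)); // a, b, c, d (order not guaranteed)
    }
}
```

This also turns the per-item membership check from an O(n) list scan into an O(1) hash lookup.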