Hello, I stumbled upon the following question: you are given an unsorted doubly linked list, and you should find and delete the duplicates from it.
What is the best way to do it with minimum algorithmic complexity?
Thank you.
If space is abundant and you really need to optimize for time, you can use a hash set
(std::unordered_set or equivalent in C++). You read each element and insert its key into the hash set. If the hash set reports that the key is already present, the node is a duplicate and you simply delete it.
The complexity is O(n) expected time, at the cost of O(n) extra space.
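A minimal sketch in C++, assuming a hypothetical Node struct with key/prev/next members and std::unordered_set as the hash set (the function name dedupWithHashSet is mine):

#include <unordered_set>

// Hypothetical node type used only for illustration.
struct Node {
    int key;
    Node* prev;
    Node* next;
};

// Deletes every node whose key has already been seen earlier in the list.
// Expected O(n) time, O(n) extra space.
Node* dedupWithHashSet(Node* head) {
    std::unordered_set<int> seen;
    for (Node* cur = head; cur != nullptr; ) {
        Node* next = cur->next;
        if (!seen.insert(cur->key).second) {        // insert() reports a duplicate
            if (cur->prev) cur->prev->next = cur->next;
            else           head = cur->next;
            if (cur->next) cur->next->prev = cur->prev;
            delete cur;                             // assumes nodes were allocated with new
        }
        cur = next;
    }
    return head;
}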
Think of it as two singly linked lists instead of one doubly linked list, with one set of links going first to last and another set going last to first. You can sort the second list with a merge sort, which is O(n log n). Now traverse the list using the first set of links. For each node, check whether (node.back)->key == node.key,
and if so, remove that node from the list. Restore the back pointers during this traversal so that the list is properly doubly linked again.
This isn't necessarily the fastest method, but it doesn't use any extra space.
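That in-place bookkeeping is fiddly to spell out, so here is a rough sketch of the same sort-then-scan idea, simplified so that it merge-sorts the whole list (and therefore does not preserve the original order). It assumes the same hypothetical Node struct as above, and the function names are mine:

// Hypothetical node type, as in the sketch above.
struct Node {
    int key;
    Node* prev;
    Node* next;
};

// Split the list in half (using next pointers only); returns the second half.
static Node* split(Node* head) {
    Node* slow = head;
    Node* fast = head->next;
    while (fast && fast->next) {
        slow = slow->next;
        fast = fast->next->next;
    }
    Node* second = slow->next;
    slow->next = nullptr;
    return second;
}

// Merge two sorted lists, using next pointers only.
static Node* merge(Node* a, Node* b) {
    Node dummy{0, nullptr, nullptr};
    Node* tail = &dummy;
    while (a && b) {
        if (a->key <= b->key) { tail->next = a; a = a->next; }
        else                  { tail->next = b; b = b->next; }
        tail = tail->next;
    }
    tail->next = a ? a : b;
    return dummy.next;
}

// Standard merge sort, treating the list as singly linked.
static Node* mergeSort(Node* head) {
    if (!head || !head->next) return head;
    Node* second = split(head);
    return merge(mergeSort(head), mergeSort(second));
}

// Sort the list, then delete adjacent duplicates and restore the prev
// pointers in one pass.  O(n log n) time, no auxiliary data structure.
Node* dedupBySort(Node* head) {
    head = mergeSort(head);
    Node* prev = nullptr;
    for (Node* cur = head; cur; ) {
        if (prev && prev->key == cur->key) {
            Node* dup = cur;
            cur = cur->next;
            prev->next = cur;
            delete dup;            // assumes nodes were allocated with new
        } else {
            cur->prev = prev;
            prev = cur;
            cur = cur->next;
        }
    }
    return head;
}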
Assuming that the potential employer believes in the C++ library:
// untested, O(n*log(n)); note that the result comes back sorted,
// so the original element order is not preserved
#include <list>
#include <set>

template <class T>
void DeDup(std::list<T>& l) {
    std::set<T> s(l.begin(), l.end());
    std::list<T>(s.begin(), s.end()).swap(l);
}
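A quick usage example, assuming the DeDup template above is in scope (and keeping in mind that std::set is ordered, so the deduplicated list comes back sorted):

#include <iostream>
#include <list>

int main() {
    std::list<int> l{3, 1, 3, 2, 1};
    DeDup(l);
    for (int x : l) std::cout << x << ' ';   // prints: 1 2 3
    return 0;
}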
With minimum coding effort? Simply traverse the list, and for each node walk the rest of the list, deleting (and re-linking the pointers around) every later node with the same key. That is O(n^2) time in the worst case, but it needs no extra space and is really easy to code.
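A sketch of that nested traversal, again using the hypothetical Node struct from the earlier sketches:

// Same hypothetical Node as above.
struct Node {
    int key;
    Node* prev;
    Node* next;
};

// For each node, scan the rest of the list and delete every later node
// with the same key.  O(n^2) worst-case time, O(1) extra space.
Node* dedupQuadratic(Node* head) {
    for (Node* cur = head; cur != nullptr; cur = cur->next) {
        for (Node* probe = cur->next; probe != nullptr; ) {
            Node* next = probe->next;
            if (probe->key == cur->key) {
                probe->prev->next = probe->next;     // probe is never the head here
                if (probe->next) probe->next->prev = probe->prev;
                delete probe;                        // assumes nodes were allocated with new
            }
            probe = next;
        }
    }
    return head;
}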