Remove duplicate data in a file


I have a problem coming up with an algorithm. Will you guys help me out here?

I have a file which is huge and thus cannot be loaded into memory at once. It contains duplicate data (generic data, possibly strings), and I need to remove the duplicates.


One easy but slow solution is to read the first gigabyte into a HashSet, then read the rest of the file sequentially and remove the duplicate strings that are already in the set. Then read the second gigabyte into memory (a new HashSet) and remove its duplicates from the rest of the file, and so on. It is quite easy to program, and if you only need to do this once it could be enough.
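
A minimal sketch of that pass-by-pass idea, assuming one record per line. ChunkedDedup and CHUNK_LINES are illustrative names, and the chunk is bounded by a line count rather than a byte count to keep the sketch short:

```java
import java.io.*;
import java.nio.file.*;
import java.util.*;

public class ChunkedDedup {
    // Assumption: the chunk is bounded by a line count, not by bytes.
    static final int CHUNK_LINES = 1_000_000;

    public static void dedup(Path input, Path output) throws IOException {
        Path current = input;
        long processed = 0;                     // leading lines already deduplicated
        while (true) {
            Set<String> chunk = new HashSet<>();
            Path next = Files.createTempFile("dedup-pass", ".txt");
            long written = 0;
            try (BufferedReader in = Files.newBufferedReader(current);
                 BufferedWriter out = Files.newBufferedWriter(next)) {
                String s;
                long lineNo = 0;
                while ((s = in.readLine()) != null) {
                    boolean keep;
                    if (lineNo < processed) {
                        keep = true;            // handled in an earlier pass
                    } else if (chunk.size() < CHUNK_LINES || chunk.contains(s)) {
                        keep = chunk.add(s);    // in-chunk: keep first occurrence only
                    } else {
                        keep = true;            // beyond the chunk: a later pass decides
                    }
                    if (keep) { out.write(s); out.newLine(); written++; }
                    lineNo++;
                }
            }
            if (!current.equals(input)) Files.delete(current);  // drop intermediate file
            processed += chunk.size();
            current = next;
            if (processed >= written) break;    // every line has been through a chunk
        }
        Files.move(current, output, StandardCopyOption.REPLACE_EXISTING);
    }
}
```

Note that with k chunks this makes k full passes over the file, which is where the slowness comes from.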


You can calculate a hash for each record and keep it in a Map<Hash, Set<Position>>.

Read in the file while building the map; if you find that the hash key already exists in the map, seek to the stored position to double-check that the records are really equal (and if they are not equal, add the new location to the mapped set).
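
A sketch of that index, again assuming one record per line. HashIndexDedup is a hypothetical name, String.hashCode() stands in for "a hash for each record", and RandomAccessFile is used so the scan can seek back to earlier records (its readLine() decodes bytes as Latin-1, which is acceptable for a sketch):

```java
import java.io.*;
import java.util.*;

public class HashIndexDedup {
    public static void dedup(File input, File output) throws IOException {
        // hash of a line -> file positions of the distinct lines kept so far
        Map<Integer, List<Long>> index = new HashMap<>();
        try (RandomAccessFile in = new RandomAccessFile(input, "r");
             PrintWriter out = new PrintWriter(new FileWriter(output))) {
            long pos = 0;                                  // start offset of current line
            String line;
            while ((line = in.readLine()) != null) {
                long after = in.getFilePointer();          // where the scan resumes
                List<Long> sameHash = index.computeIfAbsent(line.hashCode(),
                                                            k -> new ArrayList<>());
                boolean duplicate = false;
                for (long candidate : sameHash) {          // equal hash: verify on disk
                    in.seek(candidate);
                    if (line.equals(in.readLine())) { duplicate = true; break; }
                }
                in.seek(after);                            // resume the sequential scan
                if (!duplicate) {
                    sameHash.add(pos);                     // remember where this line lives
                    out.println(line);
                }
                pos = after;
            }
        }
    }
}
```

The point of the seek-and-compare step is that only hashes and 8-byte positions live in memory, never the records themselves; two different records that happen to share a hash are told apart by re-reading them from disk.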


Second solution (a sketch follows the list):

  1. Create a new file where you write pairs of <String, Position in original file>.
  2. Then use classic big-file sorting, ordering by the String part (sorting big files = sort small parts of the file in memory, then merge them together); during the merge you remove the duplicates.
  3. Then rebuild the original order: sort again, but this time by "Position in original file".
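
The sketch below shows the three steps with the whole file in memory purely to keep it short; in the real setting, step 2 would be an external merge sort over <line, position> runs written to disk. SortDedup and the tie-breaking rule (keep the earliest position of each duplicate) are assumptions:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;

public class SortDedup {
    public static void dedup(Path input, Path output) throws IOException {
        List<String> lines = Files.readAllLines(input);

        // Step 1: pair each line with its position in the original file
        // (here the position is simply the line index).
        Integer[] order = new Integer[lines.size()];
        for (int i = 0; i < order.length; i++) order[i] = i;

        // Step 2: sort by content, breaking ties by position so the first
        // occurrence sorts first, then drop the later duplicates.
        Arrays.sort(order, Comparator.comparing((Integer i) -> lines.get(i))
                                     .thenComparingInt(i -> i));
        List<Integer> unique = new ArrayList<>();
        for (int k = 0; k < order.length; k++) {
            if (k == 0 || !lines.get(order[k]).equals(lines.get(order[k - 1]))) {
                unique.add(order[k]);
            }
        }

        // Step 3: rebuild the original order by sorting on position again.
        Collections.sort(unique);
        List<String> result = new ArrayList<>(unique.size());
        for (int i : unique) result.add(lines.get(i));
        Files.write(output, result);
    }
}
```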


Depending on how the input is laid out in the file, and if each line can be represented as row data:

Another way is to use a database server: insert your data into a database table with a unique-value column, reading from the file and inserting into the database. At the end the database will contain all the unique lines/rows.
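
A sketch of that route using an embedded SQLite database; it assumes the sqlite-jdbc driver is on the classpath, and DbDedup and the file name dedup.db are illustrative (the JDBC calls are standard, while UNIQUE plus INSERT OR IGNORE is SQLite syntax):

```java
import java.io.*;
import java.nio.file.*;
import java.sql.*;

public class DbDedup {
    public static void dedup(Path input, Path output) throws Exception {
        try (Connection db = DriverManager.getConnection("jdbc:sqlite:dedup.db");
             Statement ddl = db.createStatement()) {
            ddl.execute("CREATE TABLE IF NOT EXISTS lines (line TEXT UNIQUE)");
            db.setAutoCommit(false);                 // batch the inserts for speed
            try (PreparedStatement ins = db.prepareStatement(
                     "INSERT OR IGNORE INTO lines(line) VALUES (?)");
                 BufferedReader in = Files.newBufferedReader(input)) {
                String s;
                while ((s = in.readLine()) != null) {
                    ins.setString(1, s);
                    ins.executeUpdate();             // duplicate rows are silently skipped
                }
            }
            db.commit();
            try (Statement q = db.createStatement();
                 ResultSet rs = q.executeQuery(
                     "SELECT line FROM lines ORDER BY rowid");  // insertion order
                 BufferedWriter out = Files.newBufferedWriter(output)) {
                while (rs.next()) { out.write(rs.getString(1)); out.newLine(); }
            }
        }
    }
}
```

Ordering by rowid on the way out preserves insertion order, so the output keeps the original relative order of the first occurrences.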
