Closed 8 years ago.
This question is related to Tools for matching name/address data. There are a number of commercial tools from SAS, Oracle, Microsoft, etc., that can de-duplicate or merge names of individuals or companies coming from multiple sources.
However, after reading the answers to the question mentioned above, I wondered why such a seemingly interesting problem didn't receive any answers mentioning open source projects that could tackle it.
Are you aware of any open source projects or algorithms that implement so-called "record linkage", "record merging", or "clustering"?
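For context, the core of record linkage is a pairwise string-similarity function plus a decision threshold. A minimal sketch in Python using only the standard library's difflib (the function names, sample data, and 0.85 threshold are illustrative assumptions, not from any particular tool):

```python
from difflib import SequenceMatcher

def normalize(name):
    """Lowercase and collapse whitespace so trivial variants compare equal."""
    return " ".join(name.lower().split())

def similarity(a, b):
    """Similarity in [0, 1] based on longest matching subsequences."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def link_records(left, right, threshold=0.85):
    """Return pairs from the two sources whose similarity meets the threshold."""
    return [(a, b) for a in left for b in right
            if similarity(a, b) >= threshold]

pairs = link_records(["Acme Corp.", "Widget Co"],
                     ["ACME Corp", "Gadget Ltd"])
# pairs -> [("Acme Corp.", "ACME Corp")]
```

Real record-linkage systems replace the naive all-pairs loop with blocking (comparing only records that share a key such as a postcode) to stay tractable on large sources.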
I'd recommend Google Refine as an open source (New BSD license) tool for parsing and fixing crufty data. It also supports clustering and reconciling of duplicate data, and has data-mining features.
I've used it successfully to import and fix a lot of data in various formats: .csv, .tsv, .xls, .xml, .json, .rdf, etc. It can be used in-house without sending any data externally, which seemed to be a concern in the question "Tools for matching name/address data".
NB. Google Refine was previously called Freebase Gridworks.
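One of the simplest clustering methods of the kind Refine offers is key collision: map each value to a normalized fingerprint and group values whose fingerprints match. The sketch below is modeled on that idea, not Refine's actual code; the fingerprint rules and sample data are assumptions:

```python
import re
from collections import defaultdict

def fingerprint(value):
    """Build a key: lowercase, strip punctuation, sort the unique tokens."""
    tokens = re.sub(r"[^\w\s]", "", value.lower()).split()
    return " ".join(sorted(set(tokens)))

def cluster(values):
    """Group values whose fingerprints collide; keep only real duplicates."""
    groups = defaultdict(list)
    for v in values:
        groups[fingerprint(v)].append(v)
    return [g for g in groups.values() if len(g) > 1]

clusters = cluster(["Barack Obama", "Obama, Barack",
                    "obama barack", "Joe Biden"])
# clusters -> [["Barack Obama", "Obama, Barack", "obama barack"]]
```

Key collision is fast (one pass over the data) but only catches variants that normalize to the same key; fuzzier methods like nearest-neighbour clustering on an edit distance catch typos at a higher cost.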
I stumbled upon the following article: "Merge/Purge and Duplicate Detection".
Looking at http://www.semaphorecorp.com I found some extremely low prices. This is not what I'm looking for, but at least it is a bit of help, and a step in the right direction.
Try OSDQ, an open source data quality and profiling project on SourceForge.