I have a database storing details of products which are taken from many sites, gathered through the individual sites' APIs. When I call a feed, the details are stored in a database table.
The problem I'm having is that, because the exact same product is listed on many sites by the seller, I end up having duplicate items in my database, and then when I display them on a web page there are many duplicates.
The problem is that the item doesn't have any obvious unique identifier; it has specific details (of which there could be many), and then a description of the item from the seller.
What I would like is for the item to show up once, and then give the user details of where else the item is listed.
How would I identify the duplicates that have come in, without slowing down the entire database? How would I then pick one advert from all the duplicates, and store which other sites the advert is displayed on?
Thanks for any help.
The problem is two-fold, and both parts are on your side. When you figure out how to deal with them, writing the code (Java or SQL) will be easy. I'll name them first and then identify the solutions.
For some unknown reason, you have assumed that collecting product descriptions from multiple sites will not collect the same product more than once.
You are used to the common and nonsensical `Id` column, which is fine when you are working with spreadsheets prototyping functionality; but it is nowhere near what is required for a database or development-level functionality. Your users (or boss) have naturally expected database capability from the database, and you did not provide any. (And no, it does not require fuzzy string logic or magic of any kind.)
Solution
This is a condensed version of the IDEF1X Standard for modelling Relational Databases; the portion re Identifiers.
You need to think in database terms, and think about the database tables you need to perform your function, which means you are not allowed to use an auto-increment `Id` column. That column gives a spreadsheet a `RowId`, but it does not imply anything about the content of the table, or the columns that identify a product.

And you cannot simply rip data off another website; you need to think about what your website requires for products. What does your company understand a product to be, and how does it identify a product?
Identify all the columns and their datatypes.
Identify which columns are mandatory and which are optional.
Identify which are strong Identifiers. Eg. `Manufacturer` and `Model`; the short `Product Name`, not the long `Description` (or maybe, for your company, the long description is an Identifier). Work with your users, and work that out.

You will find you actually have a small cluster of tables around `Product`, such as `Manufacturer`, `ProductType`, perhaps `Vendor`, etc. Organise those tables, and Normalise them, so that you are not duplicating data.
Make sure you treat those Identifiers with a bit of respect. Choose which will be unique. Those are Candidate Keys. You need at least one per table, and there will be more than one in `Product`. All the Identifiers that will be searched on will need to be indexed (Unique or not). Note that Unique Indices cannot be Nullable, so you cannot choose an optional column.

What makes a single Unique Identifier for `Product` may not be a single column. That's ok; we can evaluate multiple columns for keys in databases; they are called Compound Keys.

Take the best, most stable (one which will not change) Unique Identifier, one of the Candidate Keys, and make that the Primary Key.
If, and only if, the Unique Identifier, the Primary Key, which may be a Compound Key, is very long, and therefore unsuitable for a Primary Key (which is migrated to the child tables), then add a Surrogate Key. That will be the `Id` column. Note that that is an additional column and an additional Index. It is not a substitute for the Identifiers of `Product`, the Candidate Keys; they cannot be removed.
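To make that concrete, here is a minimal sketch of what such a cluster could look like. The table names, column names, and datatypes (`Manufacturer`, `ManufacturerCode`, `Model`, etc.) are illustrative assumptions, not prescriptions; yours come out of the work with your users:

```sql
-- Illustrative sketch only: names and datatypes are assumptions.
-- One reference table from the small cluster around Product.
CREATE TABLE Manufacturer (
    ManufacturerCode CHAR(10)    NOT NULL,  -- strong Identifier
    Name             VARCHAR(60) NOT NULL,
    CONSTRAINT PK_Manufacturer PRIMARY KEY (ManufacturerCode)
);

CREATE TABLE Product (
    ProductId        INT          NOT NULL,  -- Surrogate Key (only because the natural key is long)
    ManufacturerCode CHAR(10)     NOT NULL,  -- part of the Candidate Key
    Model            VARCHAR(30)  NOT NULL,  -- part of the Candidate Key
    ProductName      VARCHAR(60)  NOT NULL,
    Description      VARCHAR(500) NULL,      -- optional, so it cannot be in a Unique Index
    CONSTRAINT PK_Product PRIMARY KEY (ProductId),
    -- the Compound Candidate Key remains and is enforced;
    -- the Surrogate does not replace it
    CONSTRAINT UQ_Product_Identifier UNIQUE (ManufacturerCode, Model),
    CONSTRAINT FK_Product_Manufacturer
        FOREIGN KEY (ManufacturerCode) REFERENCES Manufacturer (ManufacturerCode)
);
```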
So far we have a Product database on your company's side of the web, that is meaningful to it. Now we are in a position to evaluate products from the other side of the web; and when we do, we have a framework on our side that is strong, against which we can measure the rubbish that we get from the other side of the web.
Feeds
You need a `WebSite` table to manage the feeds.

There will be an Associative table (many-to-many) between `Product` and `WebSite`. Let's call it `ProductSite`. It will contain only our `ProductId` and the `WebSiteCode`. It may contain `Price`. The contents are valid for a single feed cycle.

Load each feed into a staging database or schema, into an incoming `ProductIn` table, maybe one per source website. This is just the flat file from the external source. Add a column `IsValid` and set the Default to true.
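A minimal sketch of those three tables, again with assumed names and datatypes (the exact syntax for booleans and defaults varies by platform):

```sql
-- Illustrative sketch only: names and datatypes are assumptions.
CREATE TABLE WebSite (
    WebSiteCode CHAR(10)     NOT NULL,
    FeedURL     VARCHAR(255) NOT NULL,
    CONSTRAINT PK_WebSite PRIMARY KEY (WebSiteCode)
);

-- Associative table: which of our Products each WebSite currently lists.
CREATE TABLE ProductSite (
    ProductId   INT           NOT NULL,
    WebSiteCode CHAR(10)      NOT NULL,
    Price       DECIMAL(10,2) NULL,
    CONSTRAINT PK_ProductSite PRIMARY KEY (ProductId, WebSiteCode),
    CONSTRAINT FK_ProductSite_Product FOREIGN KEY (ProductId)   REFERENCES Product (ProductId),
    CONSTRAINT FK_ProductSite_WebSite FOREIGN KEY (WebSiteCode) REFERENCES WebSite (WebSiteCode)
);

-- Staging table: the raw feed, loose and floppy, one feed cycle at a time.
CREATE TABLE ProductIn (
    WebSiteCode      CHAR(10)      NOT NULL,
    ManufacturerName VARCHAR(60)   NULL,   -- whatever the external site supplies
    ModelText        VARCHAR(60)   NULL,
    Title            VARCHAR(120)  NULL,
    DescriptionText  VARCHAR(500)  NULL,
    Price            DECIMAL(10,2) NULL,
    IsValid          BIT           NOT NULL DEFAULT 1
);
```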
Then write some SQL that compares that `ProductIn` table, with its loose and floppy contents, with our `Product` table with its strong Identifiers.

The way I would do it is several waves of separate checks, each marking the rows that fail by setting `IsValid` to false. At the end, Insert the `IsValid` rows into our `ProductSite`.

You might be lucky, and get away with an optimistic approach. That is, as long as you find a match on a few important columns, the match is valid (reverse the Default and update of the `IsValid` boolean).

This is the proc that will require some back-and-forth work, until it settles down. That is why you need to work with your users re the Identifiers. The goal is to exclude no external products, but your starting point will exclude many. That will include going back to our `Product` table and improving the content (values in the rows) of the Identifiers, and other relevant columns that you use to identify matching rows.
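As a sketch of what a couple of those waves might look like, against the illustrative tables above (the matching columns are assumptions; yours will come out of the work with your users):

```sql
-- Wave 1: rows with no recognisable Manufacturer fail.
UPDATE ProductIn
   SET IsValid = 0
 WHERE NOT EXISTS (
           SELECT 1
             FROM Manufacturer M
            WHERE M.Name = ProductIn.ManufacturerName
       );

-- Wave 2: rows whose Manufacturer+Model do not match one of our Products fail.
UPDATE ProductIn
   SET IsValid = 0
 WHERE IsValid = 1
   AND NOT EXISTS (
           SELECT 1
             FROM Product P
             JOIN Manufacturer M ON M.ManufacturerCode = P.ManufacturerCode
            WHERE M.Name  = ProductIn.ManufacturerName
              AND P.Model = ProductIn.ModelText
       );

-- Finally: insert the surviving rows into ProductSite for this feed cycle.
INSERT INTO ProductSite (ProductId, WebSiteCode, Price)
SELECT P.ProductId, I.WebSiteCode, I.Price
  FROM ProductIn I
  JOIN Manufacturer M ON M.Name = I.ManufacturerName
  JOIN Product P ON P.ManufacturerCode = M.ManufacturerCode
                AND P.Model = I.ModelText
 WHERE I.IsValid = 1;
```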
Repeat for each WebSite.
Now populate our website from our `Product` table, using information that we are confident about, and show which sites have the product for sale from `ProductSite`.
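The display query then becomes a straightforward join (again, a sketch against the assumed tables above; the front end shows each product once, with its list of sites):

```sql
-- One row per product per listing site.
SELECT P.ProductId,
       P.ProductName,
       W.WebSiteCode,
       W.FeedURL,
       PS.Price
  FROM Product P
  JOIN ProductSite PS ON PS.ProductId   = P.ProductId
  JOIN WebSite W      ON W.WebSiteCode  = PS.WebSiteCode
 ORDER BY P.ProductName, W.WebSiteCode;
```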
I don't think this is a code or database problem (yet). You say:
> The problem is that the item doesn't have any obvious unique identifier
You need to work out what that uniqueness is before you can ask a computer to enforce it for you. It sounds like you need some sort of fuzzy string-similarity algorithm.
Some examples of data that you consider duplicates might help.
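If fuzzy matching does turn out to be necessary, a rough first pass could use a built-in phonetic comparison to generate candidate pairs. A minimal sketch, assuming SQL Server's `SOUNDEX`-based `DIFFERENCE` function and reusing the illustrative `ProductIn`/`Product` tables sketched in the other answer; other databases offer similar or extension-provided functions:

```sql
-- DIFFERENCE returns 0-4; 4 means the SOUNDEX codes match exactly.
-- Purely a candidate-pair generator: every hit still needs stronger checks.
SELECT I.Title,
       P.ProductName,
       DIFFERENCE(I.Title, P.ProductName) AS SimilarityScore
  FROM ProductIn I
  JOIN Product P
    ON DIFFERENCE(I.Title, P.ProductName) >= 3
 ORDER BY SimilarityScore DESC;
```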