PHP MySQL to XML - Efficient File Generation

I run a price comparison data engine, and as we are collecting so much data, I'm running into pretty serious performance issues. We generate various XML files, one per product, and within each product file is every online shop we grab data from, with their price, link, description, etc.

We have multiple feed parsers/scrapers which collect the price information for each product. The product data is uploaded to a MySQL DB, and then a PHP script on the server generates the XML for every product.

The problem we are running into is that for 10,000 products, the XML generation is taking almost 25 minutes! The DB is completely normalised and I am producing the XML via PHP DOM.

The XML generation process doesn't take into consideration whether any of the data has actually changed, and this is the problem I am facing. What is the most efficient way of skipping generation of XML files whose data has not changed?

Do I use a flag system? But doesn't this result in more DB lookups, which may increase the DB overhead? The current queries only take ~0.1 seconds per product.

Also, what happens if only one price for one shop changes within an XML file? It seems a waste to rewrite the entire file because of this, but surely a preg_replace would be just as time-consuming?

Thanks for your time, really appreciated!


When an entry is written to your database, MD5-hash the contents and store the hash in another field. Then, when you poll for an update, compare the MD5 from the database to a hash of the file on the server. If they match, do nothing; if they differ, regenerate the file with the updated information.

Whenever I can, I make the filename on the server the MD5 hash itself, so there is even less work to do -- I just compare the filename to the DB hash.
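
A minimal sketch of that check, assuming a PDO connection, a products table with product_id and content_hash columns, and a directory of per-product XML files (all of these names are placeholders, not from the question):

    <?php
    // Sketch only: $pdo, the `products` table, and the column names are assumptions.

    function xmlNeedsRegeneration(PDO $pdo, int $productId, string $xmlDir): bool
    {
        // Pull only the stored 32-character hash for this product.
        $stmt = $pdo->prepare('SELECT content_hash FROM products WHERE product_id = ?');
        $stmt->execute([$productId]);
        $dbHash = $stmt->fetchColumn();

        $file = $xmlDir . '/' . $productId . '.xml';
        if (!is_file($file)) {
            return true;                      // nothing on disk yet
        }
        return md5_file($file) !== $dbHash;   // regenerate only when hashes differ
    }

    // The filename trick: name the file after the hash itself, so the check
    // becomes a single is_file() call with no hashing of file contents at all.
    function xmlNeedsRegenerationByName(string $dbHash, string $xmlDir): bool
    {
        return !is_file($xmlDir . '/' . $dbHash . '.xml');
    }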

As for updating the contents of a file in place, you will probably need some sort of regex, but you will be doing the replacement far less often, since you will now know when something in the file has actually changed.

One other thing: in doing quite a bit of flat-file caching, I have benchmarked a few different ways of storing the data, and it is almost always faster to gzencode() the files before storage and then decode them when you need to read them. It saves server space and has been faster in my benchmarks (do your own, though, since hardware and storage needs differ).
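
Roughly what that compressed storage looks like with PHP's built-in gzencode()/gzdecode(); the paths and compression level here are just illustrative:

    <?php
    // Store the generated XML gzip-compressed and decompress on read.
    // Benchmark on your own hardware, as suggested above.

    function writeCompressedXml(string $path, string $xml): void
    {
        file_put_contents($path, gzencode($xml, 6)); // level 6 as a middle ground
    }

    function readCompressedXml(string $path): string
    {
        return gzdecode(file_get_contents($path));
    }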

EDIT:

In re-reading your post, it sounds like you would be hashing the data from your scrapers to compare against the DB. It's still the same basic idea, but I wanted to clarify that I think it would still work. Your query overhead should still be light, since you would only be pulling 32 characters from the DB in a very specific query -- with indexes set correctly it should be VERY fast.
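
For instance, something along these lines on the scraper side (the table, column, and variable names are assumptions, not from the question):

    <?php
    // Hash the freshly scraped product data and compare it to the
    // 32-character hash already stored in MySQL. An index on product_id
    // keeps this a point lookup.

    $stmt = $pdo->prepare('SELECT content_hash FROM products WHERE product_id = ?');
    $stmt->execute([$productId]);
    $storedHash = $stmt->fetchColumn();

    $incomingHash = md5(serialize($scrapedProductData));

    if ($incomingHash !== $storedHash) {
        // Data changed: update the row (and its hash) and flag the XML
        // file for regeneration; otherwise skip this product entirely.
    }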

Also, though I have never used it, look into something like SimpleXML, which is native in PHP -- it may give you a quick and easy way to change data in well-formed XML without having to write regex replacements yourself.
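
A small sketch of that SimpleXML approach, using an invented <product>/<shop>/<price> structure since the question doesn't show the real schema:

    <?php
    // Invented structure for illustration:
    //   <product><shop id="..."><price>...</price></shop></product>
    $xml = simplexml_load_file('product-123.xml');

    // Update a single shop's price instead of rebuilding the whole document.
    foreach ($xml->shop as $shop) {
        if ((string) $shop['id'] === 'shop-42') {
            $shop->price = '19.99';
        }
    }

    $xml->asXML('product-123.xml'); // write the modified document back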


A preg_replace is going to be much worse. You might want to move away from DOMDocument to SimpleXML, which I think has less overhead, but at the same time, if you need to remove nodes, you will have to bring DOMDocument back into the mix in order to preserve your sanity.
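
For node removal specifically, a DOMDocument/DOMXPath sketch (element and attribute names are invented; adjust the XPath to your real structure):

    <?php
    // Load the file, find the shop entry to drop, and remove it.
    $doc = new DOMDocument();
    $doc->load('product-123.xml');

    $xpath = new DOMXPath($doc);
    foreach ($xpath->query('//shop[@id="shop-42"]') as $node) {
        $node->parentNode->removeChild($node);
    }

    $doc->save('product-123.xml');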

I also second Shane's suggestion of comparing hashes of the scraped data against the DB data. It seems like a good way to weed out the unchanged products, and you can then process the rest with the DOM library of your choice.


10,000 files written in 25 minutes works out to only about 6-7 files per second (10,000 files / 1,500 seconds). Even though your hard drive may be rated at several GB/s of throughput, that bandwidth does not carry over to writing many small files: creating each new file incurs overhead in the filesystem's allocation index.

IMHO, the core issue is that you're dealing with static files, which is a poor choice performance-wise. The smartest solution is to stop using these static files, as in this setup they clearly don't perform as well as direct database queries. If something is directly fetching these files, look into Apache's mod_rewrite: instead of writing actual XML files, have the URL run a live database query and output the XML on demand. That way you never have to pre-generate all the XML files.
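
A rough sketch of that on-demand approach; the rewrite rule is shown as a comment, and the script name, table, and column names are assumptions, not from the answer:

    <?php
    // product-xml.php -- serve the XML on demand instead of pre-generating it.
    // Assumed Apache rewrite in .htaccess:
    //   RewriteEngine On
    //   RewriteRule ^products/(\d+)\.xml$ product-xml.php?id=$1 [L]

    $pdo = new PDO('mysql:host=localhost;dbname=prices;charset=utf8mb4', 'user', 'pass');
    $id  = (int) ($_GET['id'] ?? 0);

    $doc     = new DOMDocument('1.0', 'UTF-8');
    $product = $doc->createElement('product');
    $product->setAttribute('id', (string) $id);
    $doc->appendChild($product);

    $stmt = $pdo->prepare('SELECT shop_name, price, url FROM offers WHERE product_id = ?');
    $stmt->execute([$id]);

    foreach ($stmt as $row) {
        $shop = $doc->createElement('shop');
        $shop->setAttribute('name', $row['shop_name']);

        $price = $doc->createElement('price');
        $price->appendChild($doc->createTextNode($row['price']));
        $shop->appendChild($price);

        $link = $doc->createElement('link');
        $link->appendChild($doc->createTextNode($row['url']));
        $shop->appendChild($link);

        $product->appendChild($shop);
    }

    header('Content-Type: text/xml; charset=UTF-8');
    echo $doc->saveXML();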

But if you continue with this sub-optimal method, you will have to dedicate separate storage to it. Are you, by any chance, housing the database and the web server on the same box? If so, you have to separate them. You might need a separate server or NAS to store these XML files, probably in a high-performance RAID 0 setup.

In summary, I highly doubt your database is the bottleneck; it's the act of saving all these tiny files.
