Best Way to Store Data from Large Web Crawl [closed]

I am using a Python web crawler for various social networking sites and am trying to determine the best way to store the large amounts of data (mostly XML/text) that I screen-scrape. Could you suggest any databases that would be appropriate and easily accessible? Something that works well with Python would be nice. I also want to be able to go back and parse the data at a later date.
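For context, a minimal sketch of the kind of storage I have in mind so far: just dumping each scraped document into SQLite via the standard-library sqlite3 module. The file, table, and column names below are only placeholders, not anything I'm committed to.

```python
import sqlite3
import time

# One table holding the raw scraped payload plus enough metadata
# (URL, fetch timestamp) to find and re-parse it later.
conn = sqlite3.connect("crawl.db")
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS pages (
        id         INTEGER PRIMARY KEY AUTOINCREMENT,
        url        TEXT NOT NULL,
        fetched_at REAL NOT NULL,
        content    TEXT NOT NULL
    )
    """
)

def save_page(url: str, content: str) -> None:
    """Store one scraped document (XML/HTML/text) for later parsing."""
    conn.execute(
        "INSERT INTO pages (url, fetched_at, content) VALUES (?, ?, ?)",
        (url, time.time(), content),
    )
    conn.commit()

# Example usage: stash whatever the crawler pulled down.
save_page("http://example.com/profile/123", "<profile><name>...</name></profile>")

# Later pass: pull rows back out and parse them offline.
for url, content in conn.execute("SELECT url, content FROM pages"):
    pass  # feed `content` into an XML parser here
```

SQLite is just what I can reach without installing anything extra; if something like MongoDB or PostgreSQL handles this kind of write-heavy, parse-later workload better, I'd rather switch before the crawl gets large.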
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.