Looking at http://www.nearmap.com/,
Just wondering if you can approximate how much storage is needed to store the images? (NearMap’s monthly city PhotoMaps are captured at 3cm, 5cm, 7.5cm, or 10cm resolution)
And what kind of system/architecture would be suitable to deliver that data/those images? (Say you are not Google and want to implement this from scratch: what would you do?)
i.e., would you store the images in Hadoop and use Apache/PHP/memcached to deliver them, etc.?
It's pretty hard to estimate how much space is required without knowing the compression ratio. Simply put, how well aerial photographs of houses compress can significantly change how much data needs to be stored.
But, in the interest of doing the math, we can try to figure out what is required.
So, if each pixel measures 3cm by 3cm, it covers 9cm^2. A quick Wikipedia search tells us that London is about 1,700km^2, and at 10 billion cm^2 per km^2, that is 17,000,000,000,000 cm^2. This means we need roughly 1,888,888,888,888 pixels to cover London at a resolution of 3cm. At 4 bytes per pixel, that is about 7,000 GiB. If you get 50% compression, that drops it down to about 3,500 GiB for London. Multiply this out by every city you want to cover to get an idea of what kind of data storage you will need.
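For illustration, here is a minimal Python sketch of that back-of-the-envelope estimate; the area, pixel size, bytes per pixel, and compression ratio are all assumptions you would tune to your own data:

    # Back-of-the-envelope storage estimate for city-scale aerial imagery.
    # All inputs are assumptions; adjust for your coverage area and image format.

    def storage_gib(area_km2, pixel_cm=3, bytes_per_pixel=4, compression=0.5):
        area_cm2 = area_km2 * 10_000_000_000       # 1 km^2 = 10^10 cm^2
        pixels = area_cm2 / (pixel_cm ** 2)        # each pixel covers pixel_cm^2
        raw_bytes = pixels * bytes_per_pixel
        return raw_bytes * compression / 2**30     # GiB after compression

    if __name__ == "__main__":
        # London is roughly 1,700 km^2 (the Wikipedia figure used above).
        print(f"London at 3 cm: ~{storage_gib(1700):.0f} GiB")

Running this gives roughly 3,500 GiB for London at 3cm with 50% compression, matching the numbers above.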
Delivering the content is simple compared to gathering it. Since this is an embarrassingly parallel problem, a shared-nothing cluster with an appropriate front end to route traffic to the right nodes would probably be the easiest way to implement it, because the nodes don't have to maintain state or communicate with each other. The ideal method depends on how much data you are pushing through; if you push enough, it might even be worthwhile to implement your own web server that does nothing but respond to HTTP GETs.
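As a rough sketch of that routing idea (the tile naming scheme and node list here are hypothetical, and a real deployment would more likely use consistent hashing so that adding or removing nodes moves as few tiles as possible):

    # Minimal sketch of a front end mapping a tile request to one node in a
    # shared-nothing cluster. Every front end computes the same mapping from
    # the tile key, so no shared state or inter-node communication is needed.
    import hashlib

    NODES = ["tile-a.example.com", "tile-b.example.com", "tile-c.example.com"]

    def node_for_tile(zoom: int, x: int, y: int) -> str:
        key = f"{zoom}/{x}/{y}".encode()
        digest = int.from_bytes(hashlib.md5(key).digest()[:8], "big")
        return NODES[digest % len(NODES)]

    if __name__ == "__main__":
        print(node_for_tile(18, 131072, 87231))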
I'm not sure a distributed filesystem would be the best way to serve the images, since you could end up spending a significant amount of time pulling data from elsewhere in the cluster.