I am developing an application that will be integrated with thousands of sensors, each sending information at 15-minute intervals. Let's assume that the data format is the same for all sensors. What is the best strategy for storing this data so that everything is archived (accessible) and the growing volume of data does not have a negative impact on performance?
The question is related to general database design, I suppose, but I would like to mention that I am using Hibernate (with Spring Roo), so perhaps there is already something out there addressing it.
Edit: the sensors are dumb, off-the-shelf devices; it is not possible to extend them. In the case of a network outage all information is lost. Since the sensors work over GPRS this scenario is somewhat unlikely (the GPRS provider is a rather good one here in Sweden), but yes, it can go down and one can do nothing about it.
A queuing mechanism was foremost in my considerations, and Spring Roo provides easy-to-work-with prototype code based on ActiveMQ.
I'd have a couple of concerns about this design:
- Hibernate is an ORM tool. It demands an object model on one side and a relational one on the other. Do you have an object representation? If not, I'd say that Hibernate isn't the way to go. If it's a simple table-mapping mechanism, you'll be fine.
- Your situation sounds like war: long periods of boredom punctuated by instants of sheer terror. I don't know if your design uses asynchronous mechanisms between the receipt of the sensor data and the back end, but I'd want some kind of persistent queuing mechanism to guarantee delivery of all the data and an orderly line while it waits to be persisted. As long as you don't need to access the data in real time, a queue will guarantee delivery and ensure you don't have thousands of requests hitting a bottleneck at the same time.
- How are you timestamping the sensor items as they come in? You might want a column with sub-second precision (down to nanoseconds) to order these correctly.
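One way to handle the timestamping point is to capture arrival time with `java.time.Instant` (nanosecond resolution) and truncate it to whatever precision the database column actually stores. This is a minimal sketch; the `arrivalStamp` helper and the assumption of a microsecond-precision column (e.g. Postgres `timestamp(6)`) are illustrative, not part of the original design:

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class SensorTimestamp {
    // Hypothetical helper: capture arrival time at full resolution,
    // then truncate to microseconds to match a timestamp(6) column,
    // so what you compare in Java equals what the database stores.
    static Instant arrivalStamp(Instant raw) {
        return raw.truncatedTo(ChronoUnit.MICROS);
    }

    public static void main(String[] args) {
        Instant raw = Instant.parse("2010-01-15T10:30:00.123456789Z");
        Instant stored = arrivalStamp(raw);
        System.out.println(stored); // 2010-01-15T10:30:00.123456Z
    }
}
```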
Are the sensors event-driven or timed?
Sounds like a great problem. Good luck.
Let's assume you have 10,000 sensors sending information every 15 minutes. For better performance on the database side you may have to partition your database, possibly by date/time, sensor type or category, or some other factor. This also depends on how you will query your data.
http://en.wikipedia.org/wiki/Partition_(database)
Another bottleneck would be your Java/Java EE application itself. This depends on your business requirements (e.g. are all the sensors going to send information at the same time?) and on the architecture your Java application follows. You will have to read articles on high scalability and performance.
Here is my recommendation for a Java/Java EE solution.
Instead of a single application, have a cluster of applications receiving the data.
Have a controller application that manages which sensor sends data to which application instance in the cluster. An instance may pull data from a sensor, or a sensor may push data to an instance, but the controller decides which instance is linked to which set of sensors. The controller must be dynamic, so that sensors can be added, removed, or updated, and application instances can join or leave the cluster at any time. Make sure the controller has some fail-over capability.
So if you have 10,000 sensors and 10 application instances in the cluster, each instance is linked to 1,000 sensors at any given time. If you still want better performance, run say 20 instances in the cluster and each instance is linked to 500 sensors.
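The controller's core job (mapping sensors onto instances) can be sketched as a simple round-robin assignment. Everything here, including the class and instance names, is a hypothetical illustration of the arithmetic above, not a real controller:

```java
import java.util.*;

public class SensorAssignment {
    // Hypothetical controller logic: spread sensor ids evenly across
    // the application instances currently in the cluster (round-robin).
    // A real controller would also rebalance when instances join/leave.
    static Map<String, List<Integer>> assign(List<String> instances, int sensorCount) {
        Map<String, List<Integer>> plan = new LinkedHashMap<>();
        for (String inst : instances) plan.put(inst, new ArrayList<>());
        for (int sensor = 0; sensor < sensorCount; sensor++) {
            plan.get(instances.get(sensor % instances.size())).add(sensor);
        }
        return plan;
    }

    public static void main(String[] args) {
        List<String> cluster = new ArrayList<>();
        for (int i = 1; i <= 10; i++) cluster.add("app-" + i);
        Map<String, List<Integer>> plan = assign(cluster, 10_000);
        System.out.println(plan.get("app-1").size()); // 1000 sensors per instance
    }
}
```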
Application instances can be hosted on the same or multiple machines, so that both vertical and horizontal scalability are achieved. Each application instance is multi-threaded and has local persistence. This avoids a bottleneck on the main database server and decreases transaction response time. The local persistence can be SAN file(s), a local RDBMS (such as Java DB), or even an MQ. If you persist locally in a database, you can use Hibernate for it.
Asynchronously move data from local persistence to the main database. How depends on the local store: with file-based persistence, a separate thread reads data from the file and inserts it into the main database repository; with a local database, that thread can use Hibernate to read data locally and insert it into the main repository; with an MQ, a thread or a separate application moves data from the queue to the main repository.
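The MQ variant of this drain step can be sketched with an in-memory queue. This is a simplified, single-pass illustration under stated assumptions: the `BlockingQueue` stands in for the local MQ, the returned list stands in for the main database, and a real drainer would run continuously in its own thread and issue batched INSERTs (e.g. via Hibernate or JDBC batching):

```java
import java.util.*;
import java.util.concurrent.*;

public class LocalDrain {
    // Sketch: pull readings off the local queue in batches of up to 100
    // and "insert" each batch into the main repository in one go.
    // Assumes the queue already holds at least `expected` readings;
    // a production drainer would block/wait instead of spinning.
    static List<String> drain(BlockingQueue<String> local, int expected) {
        List<String> mainDb = new ArrayList<>();
        List<String> batch = new ArrayList<>();
        while (mainDb.size() < expected) {
            local.drainTo(batch, 100); // move up to 100 readings at once
            mainDb.addAll(batch);      // stand-in for one batched INSERT
            batch.clear();
        }
        return mainDb;
    }

    public static void main(String[] args) {
        BlockingQueue<String> q = new LinkedBlockingQueue<>();
        for (int i = 0; i < 250; i++) q.add("reading-" + i);
        System.out.println(drain(q, 250).size()); // 250
    }
}
```

Batching is the point here: one multi-row insert per 100 readings costs far less on the main database than 100 single-row transactions.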
The drawback of this solution is that there will be some lag between a sensor reporting data and that data appearing in the main database.
The advantage is that it gives you high performance, scalability, and fail-over.
This means you are going to get just over 1 record/second per thousand sensors, or roughly 2.9 million rows/month per thousand sensors.
Postgres has inheritance and partitioning, which makes it practical to have tables like:
- sensordata_current
- sensordata_2010_01
- sensordata_2009_12
- sensordata_2009_11
- sensordata_2009_10
- …
each table containing measurements for one month. A parent table sensordata can then be created that "consists of" these child tables: queries against sensordata automatically go through the children, but only those the planner deduces can contain data for the query. So if you partition your data by month (a date range), express that with a date constraint on each child table, and query by date range, then the planner - based on the child-table constraints - can exclude from execution any child tables that cannot contain rows satisfying the range.
When a month is complete (say 2010 Jan just turned into 2010 Feb), you rename sensordata_current to the just-completed month (2010_01), create a new sensordata_current, move any rows with a February timestamp from 2010_01 into the newly created sensordata_current, and finally add a constraint to 2010_01 expressing that it only holds data from 2010 Jan. Also drop indices on 2010_01 that are no longer needed. In Postgres this can all be made atomic by enclosing it in a transaction.
Alternatively, you might leave _current alone, create a new 2010_01, and move all January rows into it from _current (then optionally VACUUM _current to immediately reclaim the space - though if your rows are constant size, with recent Postgres versions there is not much point in doing that). Your move (SELECT INTO / DELETE) will take longer in this case, but you won't have to write code to recreate indices, and this also preserves other details (referential integrity, etc.).
With this setup, removing old data is as quick and efficient as dropping child tables, and migrating old data away is efficient too, since the child tables are also accessible directly.
For more details, see the Postgres documentation on data partitioning.
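The inheritance setup described above can be sketched in DDL. The (sensor_id, ts, value) schema is a hypothetical stand-in; the table names follow the answer's convention:

```sql
-- Parent table: queries against it are routed through the children.
CREATE TABLE sensordata (
    sensor_id integer   NOT NULL,
    ts        timestamp NOT NULL,
    value     double precision
);

-- One child per month, with a CHECK constraint the planner can use
-- to exclude the table from queries outside its date range.
CREATE TABLE sensordata_2010_01 (
    CHECK (ts >= DATE '2010-01-01' AND ts < DATE '2010-02-01')
) INHERITS (sensordata);

-- The current month has no constraint yet; it gets one (and gets
-- renamed) when the month completes.
CREATE TABLE sensordata_current () INHERITS (sensordata);

-- With constraint exclusion enabled, this scans only sensordata_2010_01:
SET constraint_exclusion = on;
SELECT count(*) FROM sensordata
 WHERE ts >= '2010-01-15' AND ts < '2010-01-16';
```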
Is it a requirement that these sensors connect directly to an application to upload their data? And this application is responsible for writing the data to the database?
I would consider having the sensors write data to a message queue instead, and having your "write to DB" application be responsible for picking up new data from the queue and writing it to the database. This seems like a pretty classic example of "message producers" and "message consumers", i.e. the sensors and the application, respectively.
This way, the sensors are not affected if your "write to DB" application has any downtime, or if it has any performance issues or slowdowns from the database, etc. This would also allow you to scale up the number of message consumers in the future without affecting the sensors at all.
It might seem like this type of solution simply moves the possible point of failure from your consumer application to a message queue, but there are several options for making the queue fault-tolerant - clustering, persistent message storage, etc.
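The producer/consumer split can be sketched in plain Java. This is a minimal illustration under stated assumptions: an in-memory `BlockingQueue` stands in for the message broker (a real setup would use JMS against ActiveMQ), the counter stands in for the database, and the "STOP" poison pills exist only to end the demo cleanly:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class QueueDecoupling {
    // Sensors enqueue readings; any number of "write to DB" consumers
    // drain them independently. Scaling consumers never touches producers.
    static int run(int readings, int consumers) {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        AtomicInteger written = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(consumers);
        for (int c = 0; c < consumers; c++) {
            pool.submit(() -> {
                try {
                    String msg;
                    while (!(msg = queue.take()).equals("STOP")) {
                        written.incrementAndGet(); // stand-in for an INSERT
                    }
                } catch (InterruptedException ignored) { }
            });
        }
        for (int i = 0; i < readings; i++) queue.add("reading-" + i); // producers
        for (int c = 0; c < consumers; c++) queue.add("STOP");        // one pill each
        pool.shutdown();
        try { pool.awaitTermination(10, TimeUnit.SECONDS); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return written.get();
    }

    public static void main(String[] args) {
        System.out.println(run(1_000, 4)); // 1000
    }
}
```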
Apache ActiveMQ is a popular message queue system in the Java world.