I've asked a few questions lately regarding database design, probably too many ;-) However I believe I'm slowly getting to the heart of the matter with my design and boiling it down. I'm still wrestling with a couple of decisions regarding how "alerts" are stored in the database.
In this system, an alert is an entity that must be acknowledged, acted upon, etc.
Initially I related readings to alerts like this (very cut down) : -
[Location]
LocationId
[Sensor]
SensorId
LocationId
UpperLimitValue
LowerLimitValue
[SensorReading]
SensorReadingId
Value
Status
Timestamp
[SensorAlert]
SensorAlertId
[SensorAlertReading]
SensorAlertId
SensorReadingId
The last table associates readings with the alert, because it is the readings that dictate whether the sensor is in alert or not.
The problem with this design is that it allows readings from many sensors to be associated with a single alert - whereas each alert is for a single sensor only and should only have readings for that sensor associated with it (should I be bothered that the DB allows this though?).
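One way the DB could enforce that rule is with composite foreign keys - a rough sketch, assuming SensorId is added to both SensorAlert and SensorReading with matching UNIQUE constraints on (SensorAlertId, SensorId) and (SensorReadingId, SensorId):

    CREATE TABLE SensorAlertReading (
        SensorAlertId   INT NOT NULL,
        SensorId        INT NOT NULL,
        SensorReadingId INT NOT NULL,
        CONSTRAINT PK_SensorAlertReading
            PRIMARY KEY (SensorAlertId, SensorReadingId),
        -- both FKs carry SensorId, so a reading can only be attached to an
        -- alert raised for the same sensor
        CONSTRAINT FK_SAR_SensorAlert
            FOREIGN KEY (SensorAlertId, SensorId)
            REFERENCES SensorAlert (SensorAlertId, SensorId),
        CONSTRAINT FK_SAR_SensorReading
            FOREIGN KEY (SensorReadingId, SensorId)
            REFERENCES SensorReading (SensorReadingId, SensorId)
        )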
I thought to simplify things, why even bother with the SensorAlertReading table? Instead I could do this:
[Location]
LocationId
[Sensor]
SensorId
LocationId
[SensorReading]
SensorReadingId
SensorId
Value
Status
Timestamp
[SensorAlert]
SensorAlertId
SensorId
Timestamp
[SensorAlertEnd]
SensorAlertId
Timestamp
Basically I'm not associating readings with the alert now - instead I just know that an alert was active between a start and end time for a particular sensor, and if I want to look up the readings for that alert I can do.
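The lookup I have in mind would be something along these lines (a rough sketch against the tables above; an alert that hasn't ended yet would need its end time defaulted to now):

    SELECT r.*
        FROM SensorAlert    a
        JOIN SensorAlertEnd e ON e.SensorAlertId = a.SensorAlertId
        JOIN SensorReading  r ON r.SensorId      = a.SensorId
                             AND r.Timestamp BETWEEN a.Timestamp AND e.Timestamp
        WHERE a.SensorAlertId = @SensorAlertId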
Obviously the downside is I no longer have any constraint stopping me deleting readings that occurred during the alert, but I'm not sure that constraint is necessary.
Now looking in from the outside as a developer / DBA, would that make you want to be sick or does it seem reasonable?
Is there perhaps another way of doing this that I may be missing?
Thanks.
EDIT: Here's another idea - it works in a different way. It stores each sensor state change, going from normal to alert, in a table, and then readings are simply associated with a particular state. This seems to solve all the problems - what d'ya think? (The only thing I'm not sure about is calling the table "SensorState"; I can't help thinking there's a better name - maybe SensorReadingGroup?) : -
[Location]
LocationId
[Sensor]
SensorId
LocationId
[SensorState]
SensorStateId
SensorId
Timestamp
Status
IsInAlert
[SensorReading]
SensorReadingId
SensorStateId
Value
Timestamp
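Looking up the readings taken while a sensor was in alert then becomes a simple join (a rough sketch against the tables above):

    SELECT r.*
        FROM SensorState   st
        JOIN SensorReading r ON r.SensorStateId = st.SensorStateId
        WHERE st.SensorId  = @SensorId
          AND st.IsInAlert = 1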
There must be an elegant solution to this!
Revised 01 Jan 11 21:50 UTC
Data Model
I think your Data Model should look like this:▶Sensor Data Model◀. (Page 2 relates to your other question re History).
Readers who are unfamiliar with the Relational Modelling Standard may find ▶IDEF1X Notation◀ useful.
Business Rules (Developed in the Commentary)
I did identify some early Business Rules, which are now obsolete, so I have deleted them.
These can be "read" in the Relations (read adjacent to the Data Model). The Business Rules and all implied Referential and Data Integrity can be implemented in, and thus guaranteed by, RULES, CHECK Constraints, in any ISO SQL database. This is a demonstration of IDEF1X, in the development of both the Relational keys, and the Entities and Relations. Note the Verb Phrases are more than mere flourish.
Apart from three Reference tables, the only static, Identifying entities are Location, NetworkSlave, and User. Sensor is central to the system, so I have given it its own heading.
Location
- A Location contains one-to-many Sensors
- A Location may have one Logger

NetworkSlave
- A NetworkSlave collects Readings for one-to-many NetworkSensors

User
- A User may maintain zero-to-many Locations
- A User may maintain zero-to-many Sensors
- A User may maintain zero-to-many NetworkSlaves
- A User may perform zero-to-many Downloads
- A User may make zero-to-many Acknowledgements, each on one Alert
- A User may take zero-to-many Actions, each of one ActionType

Sensor
- A SensorType is installed as zero-to-many Sensors
- A Logger (houses and) collects Readings for one LoggerSensor
- A Sensor is either one NetworkSensor or one LoggerSensor
- A NetworkSensor records Readings collected by one NetworkSlave
- A Logger is periodically Downloaded one-to-many times
- A LoggerSensor records Readings collected by one Logger
- A Reading may be deemed in Alert, of one AlertType
- An AlertType may happen on zero-to-many Readings
- An Alert may be one Acknowledgement, by one User
- An Acknowledgement may be closed by one Action, of one ActionType, by one User
- An ActionType may be taken on zero-to-many Actions
Responses to Comments
Sticking Id columns on everything that moves interferes with the determination of Identifiers, the natural Relational keys that give your database relational "power". They are Surrogate Keys, which means an additional Key and Index, and that hinders the relational power, resulting in more joins than otherwise necessary. Therefore I use them only when the Relational key becomes too cumbersome to migrate to the child tables (and accept the imposed extra join).

Nullable keys are a classic symptom of an Unnormalised database. Nulls in the database are bad news for performance; but Nulls in FKs mean each table is doing too many things, has too many meanings, and that results in very poor code. Good for people who like to "refactor" their databases; completely unnecessary for a Relational database.
Resolved: An Alert may be Acknowledged; an Acknowledgement may be Actioned.

The columns above the line are the Primary Key (refer Notation document).

SensorNo is a sequential number within LocationId; refer Business Rules, it is meaningless outside a Location; the two columns together form the PK. When you are ready to INSERT a Sensor (after you have checked that the attempt is valid, etc), it is derived as follows. This excludes LoggerSensors, which are zero:

    INSERT Sensor VALUES (
        @LocationId,
        SensorNo = (SELECT ISNULL(MAX(SensorNo), 0) + 1
            FROM Sensor
            WHERE LocationId = @LocationId),
        @SensorCode
        )
For accuracy or improved meaning, I have changed NetworkSlave monitors NetworkSensor to NetworkSlave collects Readings from NetworkSensor.

Check Constraints. The NetworkSensor and LoggerSensor are exclusive subtypes of Sensor, and their integrity can be set by CHECK constraints. Alerts, Acknowledgements and Actions are not subtypes, but their integrity is set by the same method, so I will list them together (see the sketch after this list).
- Every Relation in the Data Model is implemented as a CONSTRAINT in the child (or subtype), as FOREIGN KEY (child_FK_columns) REFERENCES Parent (PK_columns).
- A Discriminator is required to identify which subtype a Sensor is. This is SensorNo = 0 for LoggerSensors, and non-zero for NetworkSensors.
- The existence of NetworkSensors and LoggerSensors is constrained by the FK CONSTRAINTS to NetworkSlave and Logger, respectively; as well as to Sensor.
- In NetworkSensor, include a CHECK constraint to ensure SensorNo is non-zero.
- In LoggerSensor, include a CHECK constraint to ensure SensorNo is zero.
- The existence of Acknowledgements and Actions is constrained by the identified FK CONSTRAINTS (an Acknowledgement cannot exist without an Alert; an Action cannot exist without an Acknowledgement). Conversely, an Alert with no Acknowledgement is in an unacknowledged state; an Alert with an Acknowledgement but no Action is in an acknowledged but un-actioned state.
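A minimal DDL sketch of the two subtype tables; the datatypes and the NetworkSlave key column here are my assumptions for illustration, not taken from the Data Model:

    CREATE TABLE NetworkSensor (
        LocationId       INT      NOT NULL,
        SensorNo         SMALLINT NOT NULL,
        NetworkSlaveCode CHAR(6)  NOT NULL,   -- assumed key of NetworkSlave
        CONSTRAINT PK_NetworkSensor
            PRIMARY KEY (LocationId, SensorNo),
        CONSTRAINT FK_NetworkSensor_Sensor
            FOREIGN KEY (LocationId, SensorNo)
            REFERENCES Sensor (LocationId, SensorNo),
        CONSTRAINT FK_NetworkSensor_NetworkSlave
            FOREIGN KEY (NetworkSlaveCode)
            REFERENCES NetworkSlave (NetworkSlaveCode),
        -- Discriminator: NetworkSensors are non-zero
        CONSTRAINT CK_NetworkSensor_SensorNo CHECK (SensorNo > 0)
        )

    CREATE TABLE LoggerSensor (
        LocationId INT      NOT NULL,
        SensorNo   SMALLINT NOT NULL,
        CONSTRAINT PK_LoggerSensor
            PRIMARY KEY (LocationId, SensorNo),
        CONSTRAINT FK_LoggerSensor_Sensor
            FOREIGN KEY (LocationId, SensorNo)
            REFERENCES Sensor (LocationId, SensorNo),
        CONSTRAINT FK_LoggerSensor_Logger
            FOREIGN KEY (LocationId)
            REFERENCES Logger (LocationId),
        -- Discriminator: LoggerSensors are zero
        CONSTRAINT CK_LoggerSensor_SensorNo CHECK (SensorNo = 0)
        )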
Alerts. The concept in a design for this kind of (live monitoring and alert) application is many small programs, running independently, all using the database as the single version of the truth. Some programs insert rows (Readings, Alerts); other programs poll the db for the existence of such rows (and send SMS messages, etc; or hand-held units pick up Alerts relevant to the unit only). In that sense, the db may be described as a message box (one program puts rows in, which another program reads and actions).

The assumption is, Readings for Sensors are being recorded "live" by the NetworkSlave, and every minute or so, a new set of Readings is inserted. A background process executes periodically (every minute or whatever); this is the main "monitor" program, and it will have many functions within its loop. One such function will be to monitor Readings and produce Alerts that have occurred since the last iteration (of the program loop).

The following code segment will be executed within the loop, one for each AlertType. It is a classic Projection:
    -- Assume @LoopDtm contains the DateTime of the last iteration
    INSERT Alert
        SELECT LocationId,
               SensorNo,
               ReadingDtm,
               "L"                        -- AlertType "Low"
            FROM Sensor  s,
                 Reading r
            WHERE s.LocationId = r.LocationId
              AND s.SensorNo   = r.SensorNo
              AND r.ReadingDtm > @LoopDtm
              AND r.Value      < s.LowerLimit

    INSERT Alert
        SELECT LocationId,
               SensorNo,
               ReadingDtm,
               "H"                        -- AlertType "High"
            FROM Sensor  s,
                 Reading r
            WHERE s.LocationId = r.LocationId
              AND s.SensorNo   = r.SensorNo
              AND r.ReadingDtm > @LoopDtm
              AND r.Value      > s.UpperLimit

So an Alert is definitely a fact that exists as a row in the database. Subsequently it may be Acknowledged by a User (another row/fact), and Actioned with an ActionType by a User. Other than this (the creation-by-Projection act), ie. the general and unvarying case, I would refer to Alert only as a row in Alert; a static object after creation.

Concerns re Changing Users. That is taken care of already, as follows. At the top of my (revised yesterday) Answer, I state that the major Identifying elements are static. I have re-sequenced the Business Rules to improve clarity.

For the reasons you mention, User.Name is not a good PK for User, although it remains an Alternate Key (Unique) and the one that is used for human interaction. User.Name cannot be duplicated: there cannot be more than one Fred; there can be in terms of FirstName-LastName, two Fred Bloggs, but not in terms of User.Name. Our second Fred needs to choose another User.Name. Note the identified Indices.

UserId is the permanent record, and it is already the PK. Never delete Users; they have historical significance. In fact the FK constraints will stop you (never use CASCADE in a real database, that is pure insanity). No need for code or triggers, etc. Alternately (to delete Users who never did anything, and thus release User.Name for use), allow Delete as long as there are no FK violations (ie. UserId is not referenced in Download, Acknowledgement, Action).
To ensure that only Users who are Current perform Actions, add an IsObsolete boolean in User (DM Updated), and check that column when the table is interrogated for any function (except reports). You can implement a View UserCurrent which returns only those Users.
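A minimal sketch of such a view, assuming IsObsolete is a 0/1 bit column:

    CREATE VIEW UserCurrent AS
        SELECT *
            FROM "User"              -- quoted: USER is commonly a reserved word
            WHERE IsObsolete = 0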
Same goes for Location and NetworkSlave. If you need to differentiate current vs historical, let me know, I will add IsObsolete to them as well.

I don't know: you may purge the database of ancient Historical data periodically, delete rows that are (eg) over 10 years old. That has to be done from the bottom (tables) first, working up the Relations.
Feel free to ask Questions.
Note the IDEF1X Notation document has been expanded.
Here are my two cents on the problem.
AlertType table holds all possible types of alerts. AlertName may be something like high temperature, low pressure, low water level, etc.

AlertSetup table allows for the setup of alert thresholds for a sensor and a specific alert type. For example, TresholdLevel = 100 and TresholdType = 'HI' should trigger an alert for readings over 100.

Reading table holds sensor readings as they are streamed into the server (application).

Alert table holds all alerts. It keeps links to the first reading that triggered the alert and the last one that finished it (FirstReadingId, LastReadingId). IsActive is true if there is an active alert for the (SensorId, AlertTypeId) combination. IsActive can be set to false only by a reading going back below the alert threshold. IsAcknowledged means that an operator has acknowledged the alert.
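To make the threshold check concrete, here is a sketch of how a reading might be tested against AlertSetup; the @-variables and the 'HI'/'LO' codes for TresholdType are my assumptions, not part of the design above:

    -- Sets @IsAlert to 1 if the new reading breaches any threshold configured
    -- for this (SensorId, AlertTypeId) pair, else 0
    SELECT @IsAlert = CASE WHEN EXISTS (
            SELECT 1
                FROM AlertSetup
                WHERE SensorId    = @SensorId
                  AND AlertTypeId = @AlertTypeId
                  AND (   (TresholdType = 'HI' AND @Value > TresholdLevel)
                       OR (TresholdType = 'LO' AND @Value < TresholdLevel))
            ) THEN 1 ELSE 0 END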
The application layer inserts the new reading into the Reading table and captures the ReadingId.

Then the application checks the reading against the alert setups for each (SensorId, AlertTypeId) combination. At this point a collection of objects {SensorId, AlertTypeId, ReadingId, IsAlert} is created and the IsAlert flag is set for each object.

The Alert table is then checked for active alerts for each object {SensorId, AlertTypeId, ReadingId, IsAlert} from the collection (see the sketch after this list):
- If IsAlert is TRUE and there are no active alerts for the (SensorId, AlertTypeId) combination, a new row is added to the Alert table with FirstReadingId pointing to the current ReadingId. IsActive is set to TRUE, IsAcknowledged to FALSE.
- If IsAlert is TRUE and there is an active alert for the (SensorId, AlertTypeId) combination, that row is updated by setting LastReadingId to the current ReadingId.
- If IsAlert is FALSE and there is an active alert for the (SensorId, AlertTypeId) combination, that row is updated by setting IsActive to FALSE.
- If IsAlert is FALSE and there are no active alerts for the (SensorId, AlertTypeId) combination, the Alert table is not modified.
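A sketch of that branching for one {SensorId, AlertTypeId, ReadingId, IsAlert} object, assuming the tables named above; the @-variables are my assumptions:

    IF @IsAlert = 1
    BEGIN
        IF EXISTS (SELECT 1 FROM Alert
                    WHERE SensorId    = @SensorId
                      AND AlertTypeId = @AlertTypeId
                      AND IsActive    = 1)
            -- extend the existing active alert
            UPDATE Alert
                SET LastReadingId = @ReadingId
                WHERE SensorId    = @SensorId
                  AND AlertTypeId = @AlertTypeId
                  AND IsActive    = 1
        ELSE
            -- open a new alert
            INSERT Alert (SensorId, AlertTypeId, FirstReadingId, LastReadingId,
                          IsActive, IsAcknowledged)
                VALUES (@SensorId, @AlertTypeId, @ReadingId, @ReadingId, 1, 0)
    END
    ELSE
        -- reading is back inside the thresholds: close any active alert
        UPDATE Alert
            SET IsActive = 0
            WHERE SensorId    = @SensorId
              AND AlertTypeId = @AlertTypeId
              AND IsActive    = 1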
The main "triangle" you have to deal with here is Sensor, [Sensor]Reading, and Alert. Presuming you have to track activity as it is occuring (as opposed to a "load it all at once" design), your third solution is similar to something we did recently. A few tweaks and it would look like:
[Location]
LocationId
[Sensor]
SensorId
LocationId
CurrentSensorState -- Denormalized data!
[SensorReading]
SensorReadingId
SensorState
Value
Timestamp
[SensorStateLog]
SensorId
Timestamp
SensorState
Status -- Does what?
IsInAlert
(Primary key is {SensorId, Timestamp})
"SensorState" could be SensorStateId, with an associated lookup table listing (and constraining) all possible states.
The idea is, your Sensor table contains one row per sensor and shows its current state. SensorReading is updated continuously with sensor readings. If and when a given sensor's current state changes (i.e. the new Reading's state differs from the Sensor's current state), you change the current state and add a row to SensorStateLog showing the change in state. (Optionally, you could update the "prior" entry for that sensor with a "state ended" timestamp, but that's fussy code to write.)
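A rough sketch of that state-change handling; the @-parameters and the exact columns written to SensorStateLog are my assumptions based on the tables above:

    -- Only act when the incoming state differs from the sensor's current state
    IF NOT EXISTS (SELECT 1 FROM Sensor
                    WHERE SensorId = @SensorId
                      AND CurrentSensorState = @NewState)
    BEGIN
        UPDATE Sensor
            SET CurrentSensorState = @NewState
            WHERE SensorId = @SensorId

        INSERT SensorStateLog (SensorId, Timestamp, SensorState, Status, IsInAlert)
            VALUES (@SensorId, @Now, @NewState, @Status, @IsInAlert)
    END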
CurrentSensorState in the Sensor table is denormalized data, but if properly maintained (and if you have millions of rows) it will make querying current state vastly more efficient and so may be worth the effort.
The obvious downside of all this is that Alerts are no longer an entity, and they become that much harder to track and identify. If these must be readily and immediately identifiable and usable, your third scheme won't do what you need it to do.