I need to ensure data integrity for a large database table: if a crafty admin manually changes the table (not via the UI), I want to be able to detect it.
My idea is to keep an HMAC for each record and maintain an incremental HMAC for the table whenever a user changes it via the UI (a rough sketch of the calculation is below, after the cons):
- Calculate the HMAC of the first record; this becomes HMAC_Current.
- Calculate the HMAC of each new record: HMAC_i.
- Calculate the new HMAC for the table as HMAC_Current = HMAC(HMAC_Current + HMAC_i).
Pros:
- There is no need to calculate the HMAC for the entire table each time a user adds a record via the UI.
Cons:
- When a user deletes or changes a record, I have to recalculate the HMAC chain from that record to the end of the table.
- When I want to check data integrity, I have to verify the HMAC of each record, then recalculate the HMAC for the entire table from top to bottom and compare it with HMAC_Current.
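For illustration, this is roughly how I picture the chain being computed, as a PL/SQL sketch using DBMS_CRYPTO; the table and column names (MY_TABLE, ID, PAYLOAD) are placeholders, and the key would have to be kept in the application rather than in the database:

```sql
-- Rough sketch of the chained HMAC over a placeholder table MY_TABLE(ID, PAYLOAD).
-- The key must live outside the database, otherwise an admin could recompute the chain.
DECLARE
  l_key          RAW(32) := HEXTORAW('000102030405060708090A0B0C0D0E0F');
  l_hmac_i       RAW(20);   -- HMAC of a single record
  l_hmac_current RAW(20);   -- running HMAC for the whole table
BEGIN
  FOR rec IN (SELECT id, payload FROM my_table ORDER BY id) LOOP
    -- HMAC_i: HMAC over the record's contents
    l_hmac_i := DBMS_CRYPTO.MAC(
                  src => UTL_RAW.CAST_TO_RAW(rec.id || '|' || rec.payload),
                  typ => DBMS_CRYPTO.HMAC_SH1,
                  key => l_key);

    IF l_hmac_current IS NULL THEN
      l_hmac_current := l_hmac_i;   -- first record seeds the chain
    ELSE
      -- HMAC_Current = HMAC(HMAC_Current + HMAC_i), "+" meaning concatenation
      l_hmac_current := DBMS_CRYPTO.MAC(
                          src => UTL_RAW.CONCAT(l_hmac_current, l_hmac_i),
                          typ => DBMS_CRYPTO.HMAC_SH1,
                          key => l_key);
    END IF;
  END LOOP;

  DBMS_OUTPUT.PUT_LINE('HMAC_Current = ' || RAWTOHEX(l_hmac_current));
END;
/
```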
Is there a better way to do it?
I see a number of problems with this approach:
If your sysdba has access to all the data, what's stopping them from messing with the HMACs as well? E.g. they revert all changes made to the table in the last month, then put back the HMAC from last month. Is data integrity "preserved" in this case?
What stops them from subverting the application to mess with the HMACs? E.g. if they don't have access to the application, they change the password for a user and access the application as that user to mess with records.
Even if you can get this to work, what's it good for? Say you find an HMAC mismatch. Who do you hold responsible? An admin? A user? Data corruption?
The better solution is to use auditing. You can set up all kinds of auditing on Oracle, and have the audits saved somewhere even the dba can't touch. Additionally, there's a huge advantage in using auditing: you can know who changed what. With your scheme, you can't possibly know that.
You can even set up FGA (fine-grained auditing) so that it'll only audit specific columns and also know what the values were before and after a change, which isn't possible with standard auditing.
Reference: Configuring and Administering Auditing
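For example, switching on standard auditing for a table and adding an FGA policy on one column might look roughly like this (schema, table, and column names are placeholders; see the documentation above for the full set of options):

```sql
-- Standard auditing: record every INSERT/UPDATE/DELETE on the table
AUDIT INSERT, UPDATE, DELETE ON app_owner.employees BY ACCESS;

-- Fine-grained auditing: only audit statements that touch the SALARY column
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'APP_OWNER',
    object_name     => 'EMPLOYEES',
    policy_name     => 'EMPLOYEES_SALARY_CHANGES',
    audit_column    => 'SALARY',
    statement_types => 'INSERT,UPDATE,DELETE');
END;
/
```

The fine-grained audit records, including the SQL text that made each change, can then be queried from DBA_FGA_AUDIT_TRAIL.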
Well, the first issue is that you don't trust your admins. If so, why are they still there? Admins need full rights to production databases, so they must be trustworthy.
If the issue is that there are occasional disputes about who made changes, then set up audit tables with triggers. Trustworthy admins will not bypass the triggers (even though they can). Only admins should have delete rights to audit tables.
Audit tables are a requirement for most enterprise systems. If you did not set rights through stored procs, it is likely that many internal users have rights that let them affect the database directly, which makes it easier for people to commit fraud. It may not be the admins at all who are affecting the data. Make sure you record who made the change and when, as well as the change itself.
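A minimal sketch of what such a trigger-based audit trail could look like in Oracle, using placeholder ACCOUNTS / ACCOUNTS_AUDIT tables and columns:

```sql
-- Audit table: one row per change, including who made it and when
CREATE TABLE accounts_audit (
  account_id  NUMBER,
  action      VARCHAR2(10),
  old_balance NUMBER,
  new_balance NUMBER,
  changed_by  VARCHAR2(128),
  changed_at  TIMESTAMP
);

-- Trigger on the audited table writes the before/after values into the audit table
CREATE OR REPLACE TRIGGER trg_accounts_audit
AFTER INSERT OR UPDATE OR DELETE ON accounts
FOR EACH ROW
BEGIN
  INSERT INTO accounts_audit
    (account_id, action, old_balance, new_balance, changed_by, changed_at)
  VALUES
    (COALESCE(:NEW.account_id, :OLD.account_id),
     CASE WHEN INSERTING THEN 'INSERT'
          WHEN UPDATING  THEN 'UPDATE'
          ELSE 'DELETE' END,
     :OLD.balance,
     :NEW.balance,
     SYS_CONTEXT('USERENV', 'SESSION_USER'),
     SYSTIMESTAMP);
END;
/
```

Only admins would get delete rights on ACCOUNTS_AUDIT, and ideally the audit rows are also copied somewhere even the admins cannot reach.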
SQL Server also has a way to audit structural changes to the db. I don't know whether Oracle does as well, but this is also a handy thing to audit.
Are triggers available in your environment? If so, you can Write Managed Triggers using C# and add any logic you want in that code.
This approach to 'integrity' is not really an approach to integrity; it is more like a security patchwork.
So, first of all, try to accomplish the same thing with a better security model.
In your scenario, you have to calculate, store, and check the HMAC, and escalate if the check fails.
If you set up your security properly (it is almost always possible to arrange things so that no admin needs direct write access to your tables), then you don't have to check at all.
Moving as much of your business logic as possible into the database lets you make stored procedures the only interface for changing the data, which guarantees integrity for those changes.
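A sketch of that pattern with placeholder names (APP_OWNER owns the data, APP_USER is the account the UI connects as):

```sql
-- All writes go through a definer's-rights procedure owned by the data owner
CREATE OR REPLACE PROCEDURE app_owner.add_account (
  p_account_id IN NUMBER,
  p_balance    IN NUMBER
) AS
BEGIN
  -- validation / business rules would go here
  INSERT INTO app_owner.accounts (account_id, balance)
  VALUES (p_account_id, p_balance);
END;
/

-- The application account gets EXECUTE on the procedure only,
-- with no direct INSERT/UPDATE/DELETE privileges on the table itself.
GRANT EXECUTE ON app_owner.add_account TO app_user;
```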