This hot take comes from: Logging to the Danger Zone: Race Condition Attacks and Defenses on System Audit Frameworks which I was introduced to by Faster Yet Safer: Logging System Via Fixed-Key Blockcipher.

The core concept from these papers is how to create tamper-evident logging. Proper logging is essential to detecting what happened during a cyber incident: by reviewing the logs one can determine malicious intent as well as what occurred. The problem is that logs are saved, often in userspace, and are vulnerable to tampering. A related problem is an attacker who deliberately filters out messages that would otherwise have been recorded during their attack, thus removing the evidence that the attack occurred.

The point in time at which these attacks can occur is referred to in the paper as the “danger zone”, and the proposed solution is KennyLoggings, a play on Kenny Loggins and his song “Danger Zone”. Terrible naming conventions that collide with celebrities on Google, oy vey.

The core algorithm of KennyLoggings is that each log message has an authentication tag attached to it. If the tag does not match the message, you know that the log was tampered with. This tag is generated by having the kernel hold a current key K. K is used as the secret key in an HMAC function over the log message. Thus:

HMAC(K, log_message) = authentication_tag
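This step is exactly what the standard `hmac` module computes; a minimal sketch, with a made-up key and log entry purely for illustration:

```python
import hashlib
import hmac

# Hypothetical current key K and log entry, for illustration only.
key = b"current-kernel-key"
log_message = b"uid=0 exec /usr/bin/passwd"

# HMAC-SHA256 over the log message with the current key K.
authentication_tag = hmac.new(key, log_message, hashlib.sha256).hexdigest()
```

Without knowing K, an attacker cannot produce a valid tag for a forged or altered message.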

The key K is then moved to the next value via a second one-way function, such as a hash function. Thus:

HASH(K_i) = K_(i+1)
and the full algorithm looks like:

K is initialized at a value known to authenticator and kernel
for each log_message:
	HMAC(K, log_message) = authentication_tag
	add authentication_tag to log_message
	HASH(K) = K

This is assumed to take place in the kernel, with K kept secure from userspace. The authentication_tag calculation and association must also happen at the same time as the log message is generated by the kernel.
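The loop above can be sketched in userspace Python (the real scheme runs inside the kernel with K hidden from userspace; the function and key names here are illustrative, not from the paper):

```python
import hashlib
import hmac

def sign_logs(initial_key, messages):
    """Tag each message with HMAC(K, message), then evolve K via HASH(K)."""
    k = initial_key
    tagged = []
    for msg in messages:
        tag = hmac.new(k, msg, hashlib.sha256).digest()
        tagged.append((msg, tag))
        k = hashlib.sha256(k).digest()  # one-way step: the old K is unrecoverable
    return tagged

tagged_logs = sign_logs(b"shared-initial-key",
                        [b"login uid=1000", b"exec /bin/sh", b"open /etc/shadow"])
```

Because each key is hashed away after use, compromising the current K reveals nothing about the keys that tagged earlier messages.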

There are some details around securely erasing the prior K, pre-computing values, which hash and HMAC functions to use, performance, and so on, that I am leaving out; they are in the full paper. What this stripped-down description serves to show is that with this solution in place every log message now carries a tag. Verifying the logs involves starting at the initial K value and computing the expected authentication_tag for every log along the way. A side effect of this mechanism is that logs cannot be filtered out once the kernel has output them: every log seen has to be saved to prove that nothing was filtered by an attacker.
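Verification can be sketched the same way: replay the key chain from the shared initial K and recompute each tag (again a userspace illustration with made-up names):

```python
import hashlib
import hmac

def verify_logs(initial_key, tagged_logs):
    """Return True only if every (message, tag) pair matches the replayed key chain."""
    k = initial_key
    for msg, tag in tagged_logs:
        expected = hmac.new(k, msg, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return False
        k = hashlib.sha256(k).digest()  # advance K exactly as the signer did
    return True

# Build a small tagged log the same way the kernel would have.
k = b"shared-initial-key"
tagged = []
for msg in [b"open /etc/passwd", b"exec /bin/sh"]:
    tagged.append((msg, hmac.new(k, msg, hashlib.sha256).digest()))
    k = hashlib.sha256(k).digest()

ok = verify_logs(b"shared-initial-key", tagged)  # True for an untouched log
```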

If an attacker attempts to tamper with the logs, they will hit the following issues:

  • Remove/Filter logs: This will cause the K value to diverge from what is expected, causing the authentication_tag check to fail.
  • Change prior log messages: This will cause the authentication_tag check to fail due to altering the input to the HMAC function.
  • Add log messages in the past: Even if the attacker gains the current K value they cannot turn back the hash function to generate prior K values, and therefore cannot add new logs in the past.
    • This does require that the initial K value be securely stored. Loss of control of this value means that any logs associated with the value cannot be validated.
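Each of these failure modes can be demonstrated with the same userspace sketch (`make_chain` and `verify_chain` are my names for the signing and verification loops, not the paper's):

```python
import hashlib
import hmac

def make_chain(initial_key, messages):
    # Tag each message and evolve the key, as in the scheme above.
    k, out = initial_key, []
    for msg in messages:
        out.append((msg, hmac.new(k, msg, hashlib.sha256).digest()))
        k = hashlib.sha256(k).digest()
    return out

def verify_chain(initial_key, tagged):
    # Replay the key chain and check every tag.
    k = initial_key
    for msg, tag in tagged:
        if not hmac.compare_digest(hmac.new(k, msg, hashlib.sha256).digest(), tag):
            return False
        k = hashlib.sha256(k).digest()
    return True

k0 = b"initial-key"
logs = make_chain(k0, [b"a", b"b", b"c"])

intact = verify_chain(k0, logs)                               # True
filtered = verify_chain(k0, logs[:1] + logs[2:])              # False: "b" removed
altered = verify_chain(k0, [(b"x", logs[0][1])] + logs[1:])   # False: message changed
```

Removing the middle entry desynchronizes the replayed key chain, so every later tag fails; altering a message breaks its own tag directly.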

Overall this is a novel scheme which provides evidence of log tampering without requiring specialized hardware. It is worth noting that it does nothing to prevent logs from being removed from a system: if an attacker is able to delete logs, normally stored in userspace, those are still gone. But gaps in a system's logs can now be detected.

The papers mention alternative approaches that use novel data structures to implement tamper-evident logging. One worth noting is “Efficient Data Structures for Tamper-Evident Logging”, since the Merkle tree construction it describes shows how some data can be removed over time in a manner which permits only authorized removals while not breaking subsequent verification of the remaining data.

One unfortunate issue with the scheme is that K must be known to the validator. This creates scenarios where the value is lost, or is used by an untrustworthy validator to forge logs. There are alternative schemes which rely on asymmetric cryptography to generate the authentication_tag. These solve the untrustworthy-validator problem, but are also much slower to generate each tag, thus impacting the overall logging throughput of the system.