2016-07-11

Honeytokens to detect, prevent, and remediate insider threats

I expect my audience to know what a honeypot is, but the term honeytoken was new to me, and is likely new to others, even if the concept is not.

Corporations have become quite good at securing the perimeter with layers of routers, firewalls, default denies, intrusion detection, etc.  However, many have done so well that the threat no longer even bothers with such front-line security and instead looks to insider attacks.  These can take many forms: a profit-motivated insider, a blackmailed insider, spear phishing for a relatively low-privilege inside account, and plenty of other attacks too numerous to enumerate.  Finding even the most trivial inside account and leveraging it for pivots or further attacks is just another way to build dwell time; no organization can fully trace 200+ days of unauthorized activity by the time it detects the attack.

The idea of categorizing threat actors into organized groups, nation states, etc. is a quaint one, but it seems more like a wistful attempt to bring order to a truth most are unwilling to accept: a single dedicated attacker with little to no funding, operating from anywhere in the world, can create the most damaging attacks ever seen.  I don't dismiss the value of threat intelligence groups - good TI gives us analytics types better positions to work from - but they act only as a supplement in a world that grows, splits, and diversifies faster than most security teams can respond.

I further focus this post not on direct assault, but on methods of exfiltrating confidential company data to a third party.  I do not attempt to explore the various motives behind such an insider, outsider, or mixed attack; I only look to explore the damage from a leak of proprietary company data.

In general most humans are fairly trusting, and we expect a certain level of trust and honesty from our fellow man.  This is the fundamental model that makes all forms of commerce possible.  However, this trust model breaks down with a single bad actor, and instead of being stupefied that we were taken advantage of, we have to be proactive.  Examples such as locking your house, locking your car, and setting an alarm show that most of us actually follow a trust-but-verify model.

In the ideal model we decrease dwell time to seconds or minutes, or at worst hours (assuming we can't detect and prevent 100% of attacks - an ideal so far from our current capabilities as to warrant ridicule).  Until we reach that ideal, there are a number of stop-gaps worth exploring, and that is exactly where I will focus the rest of this post - whether the insider account is operated by the intended user or by someone else.

This post is mostly inspired by a publication in ACM Transactions on Information and System Security: "Behavioral Study of Users When Interacting with Active Honeytokens", https://dx.doi.org/10.1145/2854152.  I apologize to those readers without some form of access to the publication; I've maintained my professional ACM membership at 100% personal cost, and if your company cannot be convinced of the benefit of access, then I highly recommend you consider a personal membership.  As such I can at most paraphrase, but not directly quote, per copyright.

The basic scenario sets up the study group to maximize their profit; some participants are aware of counter-measures and others are not.  Participants act as bank loan officers who earn a commission and can potentially make more money by violating rules, regulations, and law.  There is a slight incentive for getting away with it and not getting caught, and no incentive for getting caught.  Out of 173 participants, only *4* were not in some way detected violating policy, law, etc.  Honeytokens injected at a 20% rate resulted in essentially 100% detection; a 10% rate still had fairly good success rates, but nowhere near as good.  Of course there is cost, maintenance, and expense in injecting false records to detect malicious behavior.  While a limited study, it should serve as a giant warning to organizations that base pay on commission.  The payoff for cheating is great, as long as you don't get caught.
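The mechanics are simple enough to sketch.  The following is a minimal, hypothetical illustration - the record fields, ID format, and seeding rate are my own invention, not taken from the paper: synthetic records are seeded among the real ones, and any export that touches one of them raises an alert.

```python
import random

def seed_honeytokens(records, rate=0.2, rng=None):
    """Inject synthetic (honeytoken) records among real ones at the given
    rate.  Returns the combined table and the set of token IDs to monitor."""
    rng = rng or random.Random(42)  # fixed seed for a reproducible sketch
    n_fake = max(1, int(len(records) * rate))
    tokens = set()
    combined = list(records)
    for _ in range(n_fake):
        fake_id = f"HT-{rng.randrange(10**6):06d}"
        tokens.add(fake_id)
        combined.append({"id": fake_id,
                         "name": f"Applicant {fake_id}",
                         "loan": rng.randrange(5_000, 50_000)})
    rng.shuffle(combined)  # tokens are indistinguishable from real records
    return combined, tokens

def audit_export(exported_ids, tokens):
    """Any exported ID matching a honeytoken signals unauthorized bulk access."""
    return sorted(set(exported_ids) & tokens)

real = [{"id": f"C-{i:04d}", "name": f"Customer {i}", "loan": 10_000 + i}
        for i in range(40)]
db, tokens = seed_honeytokens(real, rate=0.2)

# A malicious insider dumps the whole table; the export inevitably
# sweeps up honeytokens, since they cannot be told apart from real rows.
leak = [row["id"] for row in db]
hits = audit_export(leak, tokens)
print(f"{len(hits)} honeytokens in exported data -> alert" if hits else "no alert")
```

The key property, and what the study's 20% rate exploits, is that the cheater cannot filter tokens out: any sufficiently broad misuse of the data almost surely touches at least one.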

Potential specific lessons?  Strongly distrust anyone who profits from commission and inject honeytokens to check their behavior.  As with nearly everything in this world: follow the money.

Does this study, or my impression of it, indicate that commission-based pay is guaranteed to create impropriety?  No.  Does it have a strong bias toward creating it?  Yes.  Applying honeytokens is not free, and even small studies like this can help focus on where the risk outweighs the cost of the countermeasures.

2016-07-02

Protection [or the lack thereof] of cryptographic primitives in a public cloud environment

As I watch the market mature and see organizations attempting to deploy their own private cloud solutions - generally citing security, not cost - I find an opening to a potentially receptive audience on the risks of public cloud security.  All solutions have trade-offs across cost, security, risk acceptance, risk mitigation, etc.

Unfortunately, technological innovation almost always outpaces appropriate security measures.  This circles back to the never-ending problem of forging ahead clueless about security and then having to patch it in afterward, instead of baking it into the development cycle from the start.

Most of us know the end result is an insecure product that can never hope to get security correct, and we shake our heads and hope that in the next iteration sysadmins and developers might actually include security from the start and "bake it in".  The day that becomes the norm is on our doorstep, and the secure-after-you-deliver-to-market model is rapidly going extinct.  This last remark is not delusional euphoria or hopeless optimism: companies are slowly learning that failing to bake in security from the beginning may actually cost them more in the long term.  Consumers are paying attention and demanding better security than we are all used to (especially true for banking in the US compared to Europe).

The cloud panacea is starting to show increased fractures, such as https://eprint.iacr.org/2016/596.  This paper is another iteration in how to attack multi-tenant issues inside a single physical host.  The game changer is that we're talking about consistent 2048-bit RSA key recovery from nothing more than another guest VM, once you determine that it co-locates with your target host/CPU - which is easy to determine and establish.  The attack described works *against a fully patched OpenSSL that has protections against these sorts of attacks*.
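The class of leak exploited here is easy to demonstrate in miniature.  The sketch below is not the paper's attack (which recovers keys by monitoring cache behavior across co-located VMs); it is a toy square-and-multiply exponentiation showing why data-dependent operations are dangerous: an observer who can merely count the extra multiplies learns the secret exponent's 1-bits without ever seeing the key.

```python
def modexp_leaky(base, exp, mod):
    """Left-to-right square-and-multiply modular exponentiation.
    Returns the result plus the number of 'extra' multiplies - a quantity
    that a cache or timing side channel can observe in practice."""
    result, multiplies = 1, 0
    for bit in bin(exp)[2:]:
        result = (result * result) % mod    # square for every exponent bit
        if bit == "1":
            result = (result * base) % mod  # extra multiply only for 1-bits
            multiplies += 1
    return result, multiplies

# Two secret exponents of equal bit-length but different Hamming weight:
r1, m1 = modexp_leaky(7, 0b10000001, 1_000_003)
r2, m2 = modexp_leaky(7, 0b11111111, 1_000_003)
print(m1, m2)  # the operation counts alone reveal each exponent's 1-bits
```

This is exactly why production libraries use constant-time exponentiation; the point of papers like the one above is that even those defenses can leak through shared microarchitectural state on a multi-tenant host.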

You cannot bolt on security after the fact and expect good results.  However, even in cloud computing, that is exactly the model currently in use.  If we cannot secure cryptographic keys in a public cloud environment, then there can be no guarantees *PERIOD*.  Well, that's unfair: there is a guarantee - that the security will fail and be completely undermined.