I expect my audience to know what a honeypot is, but the term "honeytokens" was new to me, and likely is to others, even if the concept itself is not new.
Corporations have become very good at securing the perimeter with layers of routers, firewalls, default denies, intrusion detection, and so on. Many have done so well that the threat no longer bothers with such front-line security and instead looks to insider attacks. These take many forms: the profit-motivated insider, the blackmailed insider, spear phishing for a relatively low-privilege inside account, and plenty of other attacks I won't enumerate. Finding even the most trivial inside account and leveraging it for pivots or further attacks is just another way to build dwell time: no organization can fully trace 200+ days of unauthorized activity by the time it detects the attack.
The idea of categorizing threat actors into organized groups, nation states, etc. is quaint, but seems more like a wistful attempt to bring order to a truth most are unwilling to accept: a single dedicated attacker with little to no financing, from anywhere in the world, can create the most damaging attacks ever seen. I don't dismiss the value of threat intelligence groups - good TI gives us analytics types better positions to work from - but they act only as a supplement in a world that grows, splits, and becomes more diverse faster than most security teams can respond.
I further focus this post not on direct assault, but simply on methods of removing confidential company data to a third party. I do not attempt to explore the various motives behind such an insider, outsider, or mixed attack; I only look to explore the damage from the leak of proprietary company data.
In general, most humans are fairly trusting, and we expect a certain level of trust and honesty from our fellow man. This is the fundamental model that makes all forms of commerce possible. However, this trust model breaks down with a single bad actor, and instead of being stupefied that we were taken advantage of, we have to be proactive. Examples such as locking your house, locking your car, and having an alarm show that most of us fall into the trust-but-verify model.
In the ideal model we decrease dwell time to seconds or minutes, or at worst hours (all assuming we can't detect and prevent 100% of attacks, an ideal so far from our current capabilities as to warrant ridicule). Until we reach that ideal, there are a number of stop-gaps worth exploring, and that is exactly where I will focus the rest of this post - detecting misuse whether the insider account is operated by the intended user or someone else.
This post is mostly inspired by a publication in ACM Transactions on Information and System Security: "Behavioral Study of Users When Interacting with Active Honeytokens", https://dx.doi.org/10.1145/2854152. I apologize to those readers without some form of access to the publication, but I've maintained my professional ACM membership at 100% personal cost, and if your company cannot be convinced of the benefit of access, then I highly recommend you consider personal membership. As such I can at most paraphrase, but not directly quote, per copyright.
The basic scenario sets up the study group to maximize their profit; some participants are aware of counter-measures and others are not. Participants play bank loan officers who earn a commission and can potentially make more money by violating rules, regulations, and law. There is a slight incentive for getting away with it and not getting caught, and no incentive for getting caught. Out of 173 participants, only *4* were not in some way detected violating policy or law. Honeytokens injected at a 20% rate resulted in essentially 100% detection; at 10% the detection rate was still fairly good, but nowhere near as high. Of course there is cost, maintenance, and expense in injecting false records to detect malicious behavior. While a limited study, it should serve as a giant warning to organizations that base pay on commission: the payoff for cheating is great, as long as you don't get caught.
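To make the mechanism concrete, here is a minimal sketch of how honeytoken seeding and auditing could work. Every field name, ID scheme, and record shape below is invented for illustration; the paper describes the behavioral study, not an implementation.

```python
import random

def seed_honeytokens(records, rate=0.2):
    """Return the real records plus synthetic decoys at the given rate.
    Decoy IDs are tracked so any later access to one can be flagged."""
    decoys = set()
    seeded = list(records)
    for i in range(int(len(records) * rate)):
        decoy_id = f"HT-{i:04d}"  # hypothetical ID scheme
        decoys.add(decoy_id)
        seeded.append({"id": decoy_id, "applicant": "synthetic"})
    random.shuffle(seeded)  # decoys must be indistinguishable in placement
    return seeded, decoys

def audit(transactions, decoys):
    """Flag every transaction that touched a decoy record."""
    return [t for t in transactions if t["record_id"] in decoys]

records = [{"id": f"R-{i}", "applicant": "real"} for i in range(8)]
seeded, decoys = seed_honeytokens(records, rate=0.25)
hits = audit([{"record_id": "HT-0001", "officer": "X"}], decoys)
print(len(hits))  # any hit is a policy violation by construction
```

The key property is that no legitimate workflow should ever touch a decoy, so every hit is high-signal, unlike most anomaly detection.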
Potential specific lessons? Strongly distrust anyone who profits from commission, and inject honeytokens to check their behavior. As with nearly everything in this world: follow the money.
Does this study, or my impression of it, indicate that commission-based pay is guaranteed to create impropriety? No. Does it have a strong potential to create it? Yes. Applying honeytokens is not free, and even small studies like this can help focus on where the risk outweighs the cost of the countermeasures.
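The gap between the 20% and 10% injection rates is consistent with a simple independence model - my own back-of-envelope reasoning, not anything from the paper: if a fraction p of records are honeytokens and a cheater mishandles k records at random, the chance of touching at least one decoy is 1-(1-p)^k.

```python
def detection_probability(p, k):
    """Chance at least one of k mishandled records is a honeytoken,
    assuming records are selected independently at random."""
    return 1 - (1 - p) ** k

for p in (0.10, 0.20):
    # p=0.10 -> ~0.878, p=0.20 -> ~0.988 for k=20 touches
    print(p, round(detection_probability(p, k=20), 3))
```

Under this toy model, detection climbs toward certainty quickly with the number of records an actor mishandles, which fits the near-100% result at the 20% rate.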
2016-07-11
2016-07-02
Protection [or the lack thereof] of cryptographic primitives in a public cloud environment
As I watch the market mature and see organizations attempting to deploy their own private cloud solutions, generally citing security and not cost, I find an opening to a potentially receptive audience on the risks of public cloud security. All solutions have trade-offs ranging across cost, security, risk acceptance, risk mitigation, etc.
Unfortunately, technological innovation almost always out-paces appropriate security measures. This all circles back to the never-ending problem of forging ahead clueless of security and then having to patch security in instead of baking it into the development cycle from the start.
Most of us know the end result is an insecure product that can never hope to get the security aspect correct, and we shake our heads and hope that the next generation of sysadmins and developers might actually include security from the start and "bake it in". The day that becomes the norm is on our doorstep, and the secure-after-you-ship model is rapidly going extinct. That last remark is not delusional euphoria or hopeless optimism: companies are slowly learning that failing to bake in security from the beginning may actually cost them more in the long term, and consumers are paying attention and demanding better security than we are all used to (especially true for banking in the US compared to Europe).
The cloud panacea is starting to show increased fractures, such as https://eprint.iacr.org/2016/596. This paper is another iteration on attacking multi-tenant issues inside a single physical host. The game changer is consistent 2048-bit RSA key recovery from nothing more than another guest VM, once you determine it co-locates with your target host/CPU, which is easy to determine and establish. The attack described works *against a fully patched OpenSSL that has protections against these sorts of attacks*.
You cannot bolt on security after the fact and expect good results. However, even in cloud computing, that is exactly the model currently in use. If we cannot secure cryptographic keys in the public cloud environment, then there can be no guarantees, *PERIOD*. Well, that's unfair: there is one guarantee, that the security will fail and be completely undermined.
2016-06-17
Offensive security against no hanging fruit
In general I find offensive security to be a bit of a pre-determined successful outcome and not much of a challenge. Given enough time, you only need a single successful attack to win the game. As amusing as it is to see remote exploits for Windows rolling out faster than the patches, it's a fairly dull path to find an infinite number of ways to reach that goal.
I find spending my time in defensive security to be a much more challenging and worthwhile pursuit. How can I defend against all known and unknown attacks, or at least decrease dwell time so remediation happens within days instead of months or years?
All that said, there is an area where neither the offensive nor the defensive side is glamorous, high-publicity, or even paid attention to at all. I am calling this no hanging fruit: it is so bad that anyone spending any time on it would find ways to compromise the entire system. Why is this an under-researched category? Hardware. We can all emulate the heck out of an OS, but fewer can decipher FCC ID specs and the resulting hardware, much less build and use the equipment to intercept the faults. Please keep in mind I don't find hardware hacking to be any harder than software hacking, but it does carry a knowledge and cost burden higher than our day-to-day exploits.
As an amateur radio (ham radio) operator for the past 21 years, I have learned to appreciate the documentation that companies must publicly provide to the FCC in the US as a way to gain insight into their hardware. In this post we will take a look at FCC ID R7PEG1R1S2, the electrical smart meter used by Oncor across almost all of Texas. I would post a direct link, but unfortunately we are stuck in the land of temporary post documents with no direct links. To follow along, visit https://www.fcc.gov/general/fcc-id-search-page, Grantee Code: R7P, Product Code: EG1R1S2. At the time of this writing I found 10 results, 9 in the 900 MHz spectrum and 1 in the 2.4 GHz area. You can browse these documents to your heart's content: it's all public from the US government despite the internal company-confidential labels. As much as I love frequency counters, it is a lot easier to just grab the frequencies from a public document.
"The Gridstream RF network currently supports use of one encryption key per network. If you enable the FOCUS AX with encryption, the host must have a matching encryption key." No, you didn't misread that: the public specs state that, in the event they bother to encrypt the 900 MHz link at all, a single symmetric key is used for all nodes on that network.
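To see why one key per network matters, here is a toy illustration, with HMAC standing in for whatever authentication Gridstream actually uses (the public docs don't specify): with a single shared symmetric key, any node can read or forge traffic for any other node, so compromising one meter compromises them all.

```python
import hmac, hashlib

NETWORK_KEY = b"single-shared-key"  # one key for every meter on the network

def sign(meter_id, payload, key):
    """Build an authenticated message the way a meter might."""
    msg = meter_id.encode() + b"|" + payload
    return msg, hmac.new(key, msg, hashlib.sha256).hexdigest()

# Meter A forges a reading "from" meter B using the same network key.
forged_msg, forged_tag = sign("meter-B", b"usage=0kWh", NETWORK_KEY)

# The head-end verifies with the only key it has: the forgery passes.
expected = hmac.new(NETWORK_KEY, forged_msg, hashlib.sha256).hexdigest()
print(hmac.compare_digest(forged_tag, expected))  # True
```

With per-node keys, the forged tag would fail verification; with a network-wide key, there is no cryptographic distinction between nodes at all.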
Just some personal favorite hilarity:
- "GMT Offset: The GMT Offset in 15 minute increments. Signed. Valid values are -128 (0x80), corresponding to GMT-32 hours, to +127 (0x7F), corresponding to GMT+31 hours" - my god, must save some bytes!?! It seems like they used some junk 1980s hardware.
- It's an FCC Part 15 Class B device and *must not* cause any harmful interference and *must accept* any harmful interference. Most forms of jamming are illegal in this country, but I wouldn't be surprised if an early-model 900 MHz cordless phone left on continuously might disrupt the system.
- RF Baud Rate: 9.6 kbps Min, 115.2 kbps Max, Programmable
In my area, none of the data is encrypted and is broadcast plain-text over the air. The configuration table in the FCC information is sufficient to pattern match and reduce that part of the data frame/packet to usable information without any reverse engineering.
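As a sketch of what that pattern matching looks like in practice: once a configuration table gives you fixed field offsets, decoding a plaintext frame is one struct unpack away. The layout below (6-byte MAC, 4-byte timestamp, 4-byte usage counter) is a placeholder of my own invention, not the actual Gridstream frame format.

```python
import struct

# Hypothetical frame layout: 6-byte MAC, 4-byte timestamp,
# 4-byte usage counter, big-endian. Real offsets would come from
# the configuration table in the FCC filing.
FRAME = struct.Struct(">6sII")

def parse_frame(raw):
    """Decode one fixed-layout plaintext frame into named fields."""
    mac, timestamp, usage = FRAME.unpack(raw[:FRAME.size])
    return {"mac": mac.hex(), "timestamp": timestamp, "usage": usage}

sample = bytes.fromhex("aabbccddeeff") + struct.pack(">II", 1466121600, 42)
print(parse_frame(sample))
```

The point is the low bar: no reverse engineering, no cryptanalysis, just reading a public document and slicing bytes.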
Don't forget that the clocks are only accurate to +/- 15 minutes; that is when they "check in". Also, part of the device is a permanent MAC, visible on the physical device. Given the clock skew, I can only theorize what might happen if an unofficial signal were sent just before the official one. The specifications seem to indicate that the first signal would win *and* clock synchronization would then deem that node the official source.
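That theory can be stated as a toy model - entirely my reading of the spec, not verified against real hardware: if a node accepts the first sync beacon it hears whose claimed time falls within its skew window, an attacker who transmits just ahead of the legitimate beacon becomes the node's time source.

```python
def accept_sync(beacons, skew_minutes=15):
    """Model of first-signal-wins sync: accept the first beacon (by
    arrival time) whose claimed time is within the node's skew window.
    Beacons are (arrival_time, source, claimed_time) tuples."""
    node_clock = 1000.0  # node's current drifting clock, in minutes
    for arrival, source, claimed_time in sorted(beacons):
        if abs(claimed_time - node_clock) <= skew_minutes:
            return source  # this sender becomes the time authority
    return None

beacons = [
    (10.0, "official", 1002.0),
    (9.9, "spoofed", 1010.0),  # arrives just before the real beacon
]
print(accept_sync(beacons))  # "spoofed" wins under this model
```

If the real protocol behaves this way, the generous +/- 15 minute window gives an attacker an enormous margin to slip a plausible claimed time past the node.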
I have in no way attempted to splice into the hardware or otherwise intercept data communication over the AC lines. I would think there would be no reason to broadcast things as critical as billing usage data in plain text over the air if they were using networking over AC instead. Of course, assuming competence or intent from a large corporation is a fool's errand.
I welcome any additional information in this blog or via email, and am happy to supply GPG keys if they are not readily found by interested respondents.
Happy hardware hacking!