2017-01-04

Vetting information [at all]

First, the usual disclaimers:
  • Nothing posted here is my work at any employer, past or present.
  • Nothing here should be construed as any statement of fact, speculation, or opinion that is in line with any employer, past or present.
  • My opinions are my own and you are free to mock me or offer good feedback and dialogue.  I am fine with mocking, but appreciate constructive comments more.
  • Nothing discussed here was derived from anything but public resources.
  • You should not base anything solely on this post.
  • I distrust all information, especially from a large entity like a government, until sufficient evidence is brought forth and subject to public scrutiny. 
  • Sorry for the longer than usual disclaimers, but it is sadly more necessary and prudent.
And the unusual:
  • I am a US citizen.
  • I vote in all major elections.
  • I voted in the presidential race.
  • I did not vote for Hillary Clinton or Donald Trump.
  • I live in Texas and the electoral college rather makes my vote moot.
  • We all have personal bias - follow the facts and educate yourself.

As part of the evidence showing that Russian nation-state actors interfered with the US election by hacking and leaking information from:
  • The Democratic National Committee (DNC)
    • "hacked"
  • Hillary Clinton's private email server
    • "hacked"
  • John Podesta's email
    • "spearphishing"
the United States Federal Government issued public information through a joint FBI and DHS release about GRIZZLY STEPPE.

It should be noted this information was released as TLP:WHITE, which means it is thoroughly public and cleared for all dissemination.  Something like TLP:AMBER would have allowed organizations to vet this information, report back, and then make a well-vetted public release.  Sadly, this is one of many things that did not happen.  It is also noteworthy that the opening prelude notes this Joint Analysis Report (JAR) is the first ever to "attribute malicious cyber activity to specific countries or threat actors."  It is one thing to attribute this type of activity to a threat actor; it is quite another that the first time they attribute it to anyone at all, much less a specific country, they are so sure of their findings that it can be pinned on Russia.

Alright, let's ignore their mostly useless PDF remarks and look at their indicators.  We have a list of mostly IPs, some domains, and no context other than that they are related to malicious Russian campaigns.  This must be some timely and juicy data!  NO!  For a public release of this magnitude to not disable the C&C (C2 if you prefer) machines in advance of, or in coordination with, the release is completely irresponsible.  A giant chunk of these IPs are based in the United States, so the FBI should easily be able to obtain a warrant and take over or shut down the machines.  Could they not obtain a warrant?  Were they too lazy to?  Does even the FBI distrust this information?  We are just getting started on how poorly this was executed, even if we take the US government at face value with no questions asked.

Wait, surely the USFG couldn't be so bad at vetting the list that a laptop at a Vermont electric utility checking its Yahoo! mail would be flagged as a Russian attack?  Sorry, but at least someone posted the correction.

"At least 30 percent of the IP addresses listed were commonly used sites such as public proxy servers used to mask a user’s location, and servers run by Amazon.com and Yahoo."  Wow, impressive vetting for such a short list of IPs.  So great was the vetting of this important release that newspapers and tech types alike had to report corrections after *actually* reviewing the data.  I guess the DHS and FBI, working together at great taxpayer cost over many hours, cannot even do the most simplistic checks on an IP that in some cases hasn't changed ownership in 10+ years.  Even if they are unable to run whois on their Windows machines, Google offers plenty of useful whois and nslookup utilities.

Oh, it gets worse: “No one should be making any attribution conclusions purely from the indicators in the [government] report,” tweeted Dmitri Alperovitch, chief technology officer of CrowdStrike, which investigated the DNC hack and attributed it to the Russian government.  “It was all a jumbled mess.”  Yes, the company hired *by* the DNC, which concluded it *was* the Russian government, agrees that this report is garbage.  Their methods, motivations, and conclusions are all questionable, but apparently even a collection of Obama, the Clintons, the DNC, etc. cannot buy approval of this mess.

I am still awaiting the promised public release of the proof of the so-called hacking by a nation-state.  However, I expect the proof to be increasingly vague and nothing but a desperate attempt to prove a point that lacks evidence.  I don't care what your political disposition is: at this point either we get the evidence promised, or we have to assume the subterfuge alleged by foreign parties is as believable as what our own government tells us.

2016-07-11

Honeytokens to detect, prevent, and remediate insider threats

I expect my audience to know what a honeypot is, but the term honeytoken was new to me, and likely to others, even if the concept is not.

Corporations have been great at securing the perimeter with layers of routers, firewalls, default denies, intrusion detection, etc.  However, many have done so well that the threat no longer even bothers with such front-line security and is looking to insider attacks.  These can take many forms: the profit-motivated insider, the blackmailed insider, spearphishing for a relatively low-privilege inside account, and plenty of other attacks too numerous to enumerate.  Finding even the most trivial inside account and leveraging it for pivots or further attacks is just another way to create dwell time; by the time an organization detects the attack, it cannot possibly trace 200+ days of unauthorized activity in full.

The idea of categorizing threat actors into organized groups, nation states, etc. is a quaint one, but seems more like a wistful attempt to bring order to a truth most are unwilling to accept: a single dedicated attacker with little to no financing, from anywhere in the world, can create the most damaging attacks ever seen.  I don't dismiss the value of threat intelligence groups - good TI gives us analytics types better positions to work from - but they act only as a supplement in a world that grows, splits, and diversifies faster than most security can respond.

I further focus this post not on direct assault, but simply upon methods of removing confidential company data to a third party.  I do not attempt to explore the various motives in such an insider or outsider or mixed attack, but only look to explore the damage of the leak of proprietary company data.

In general most humans are fairly trusting, and we expect a certain level of trust and honesty from our fellow man.  This is the fundamental model that makes all forms of commerce possible.  However, this trust model breaks down with a single bad actor, and instead of being stupefied when we are taken advantage of, we have to be proactive.  I can enumerate examples such as locking your house, locking your car, having an alarm, etc. that show most of us fall more into the trust-but-verify model.

In the ideal model we decrease dwell time to seconds or minutes, or at worst hours (assuming we can't detect and prevent 100% of attacks - an ideal so far from our current capabilities as to warrant ridicule).  Until we reach the ideal model, there are a number of stop-gaps worth exploring, and that is exactly where I will focus the rest of this post - whether the insider account is operated by the intended user or someone else.

This post is mostly inspired by a publication in ACM Transactions on Information and System Security: "Behavioral Study of Users When Interacting with Active Honeytokens", https://dx.doi.org/10.1145/2854152.  I apologize to those readers without some form of access to the publication, but I've maintained my professional ACM membership at 100% personal cost, and if your company cannot be convinced of the benefit of access, then I highly recommend you consider a personal membership.  As such I can at most paraphrase, but not directly quote, per copyright.

The basic scenario sets up the study group to maximize their profit; some are aware of counter-measures and others are not.  Participants act as bank loan officers who earn a commission and can potentially make more money by violating rules, regulations, and law.  There is a slight incentive for getting away with it, and no incentive to getting caught.  Out of 173 participants only *4* were not in some way detected violating policy, law, etc.  Injecting honeytokens at a 20% rate resulted in essentially 100% detection; a 10% rate had fairly good success rates, but nowhere near as good.  Of course there is cost, maintenance, and expense to injecting false records to detect malicious behavior.  While a limited study, it should serve as a giant warning to organizations that base pay on commission.  The payoff for cheating is great, as long as you don't get caught.

Potential specific lessons?  Strongly distrust anyone who profits from commission and inject honeytokens to check their behavior.  As with nearly everything in this world: follow the money.
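As a concrete sketch of the mechanism (the record layout, ID scheme, and names here are my own invention, not from the paper): seed the customer database with a fraction of fake records that no legitimate workflow should ever touch, then treat any access to one as a detection event.

```python
import random

def seed_honeytokens(records, rate, rng=None):
    """Inject fake customer records at the given rate.
    Returns (seeded_records, set_of_honeytoken_ids)."""
    rng = rng or random.Random(42)      # fixed seed for reproducibility
    n_fake = max(1, int(len(records) * rate))
    tokens = set()
    out = list(records)
    for i in range(n_fake):
        fake_id = f"HT-{i:04d}"         # hypothetical id scheme
        out.append({"id": fake_id, "name": f"Fake Customer {i}"})
        tokens.add(fake_id)
    rng.shuffle(out)                    # hide the fakes among real rows
    return out, tokens

def audit_access(accessed_ids, tokens):
    """Any touch of a honeytoken record is a detection event."""
    return sorted(tokens & set(accessed_ids))

records = [{"id": f"C-{i}", "name": f"Customer {i}"} for i in range(100)]
db, tokens = seed_honeytokens(records, rate=0.20)   # the 20% rate from the study
hits = audit_access(["C-5", "HT-0003", "C-17"], tokens)
print(hits)  # a loan officer trawling records trips the token
```

The real cost the study points at is not this code; it is generating believable fakes and keeping downstream systems from choking on them.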

Does this study or my impression indicate that commission-based pay is guaranteed to create impropriety?  No.  Does it have a strong potential to create it?  Yes.  The application of honeytokens is not free, and even small studies like this can help focus on where the risk outweighs the cost of the countermeasures.

2016-07-02

Protection [or the lack thereof] of cryptographic primitives in a public cloud environment

As I watch the market mature and see organizations attempting to deploy their own private cloud solutions, generally citing security and not cost, I find an opening to a potentially receptive audience on the risks of public cloud security.  All solutions have trade-offs ranging across cost, security, risk acceptance, risk mitigation, etc.

Unfortunately, technological innovation almost always out-paces appropriate security measures.  This all circles back to the never ending problem of forging ahead clueless of security and then having to patch security instead of baking it into the development cycle from the start.

Most of us know that the end result is an insecure product that can never hope to get the security aspect correct, and we shake our heads and hope that in the next iteration sysadmins and developers might actually include security from the start and "bake it in".  The day when that becomes the norm is on our doorstep, and the secure-after-you-deliver-to-market model is rapidly going to become extinct.  This last remark is not delusional euphoria or hopeless optimism.  Companies are slowly learning that the failure to bake in security from the beginning may actually cost them more in the long term.  Consumers are actually paying attention and demanding better security than we are all used to (especially true for banking in the US compared to Europe).

The cloud panacea is starting to show increased fractures such as https://eprint.iacr.org/2016/596.  This paper is another iteration in how to attack multi-tenant issues inside a single physical host.  The game changer is that we're talking about consistent 2048-bit RSA key recovery via nothing more than another guest VM, once you determine it co-locates with your target host/CPU, which is easy to establish.  The attack described works *against a fully patched OpenSSL that has protections against these sorts of attacks*.

You cannot bolt on security after the fact and expect good results.  However, even in cloud computing that is exactly the model currently being used.  If we cannot secure cryptographic keys in the public cloud environment, then there can be no guarantees *PERIOD*.  Well, that's unfair: there is one guarantee - that the security will fail and be completely undermined.

2016-06-17

Offensive security against no hanging fruit

In general I find offensive security to be a bit of a pre-determined successful outcome and not much of a challenge.  Given enough time, you only need a single successful attack to win the game.  As amusing as it is to see remote exploits for Windows rolling out faster than the patches, it's a fairly dull path to find an infinite number of ways to reach that goal.

I find spending my time in defensive security to be a much more challenging and worthwhile pursuit.  How can I defend against all known and unknown attacks, or at least decrease dwell time so remediation happens within days instead of months to years?

All that said, there is an area where the offensive and defensive sides are not entirely glamorous, high publicity, or even paid attention to at all.  I am calling this no hanging fruit: it is so bad that anyone spending any time on it would find ways to compromise the entire system.  Why is this an under-researched category?  Hardware.  We can all emulate the heck out of an OS, but fewer can decipher FCC ID specs and the resulting hardware, much less build and use the equipment to intercept the faults.  Please keep in mind I don't find hardware hacking any harder than software hacking, but it does place a knowledge and cost burden higher than our day-to-day exploits.

As an amateur radio (ham radio) operator for the past 21 years, I have learned to appreciate the documentation that companies must publicly provide to the FCC in the US to gain insight into the hardware.  In this post we will take a look at FCC ID R7PEG1R1S2 for the electrical smart meter used by Oncor across almost all of Texas.  I would post a direct link, but unfortunately we are stuck in the land of temporal post documents with no direct links.  To follow along, visit https://www.fcc.gov/general/fcc-id-search-page, Grantee Code: R7P, Product Code: EG1R1S2.  At the time of this writing I found 10 results, 9 in the 900 MHz spectrum and 1 in the 2.4 GHz area.  You can browse these documents to your heart's content: it's all public from the US government despite the internal company confidential labels.  As much as I love frequency counters, it is a lot easier to just grab the frequencies from a public document.

"The Gridstream RF network currently supports use of one encryption key per network. If you enable the FOCUS AX with encryption, the host must have a matching encryption key."  No, you didn't mis-read that: the public specs state that in the event they bother to encrypt on the 900 MHz side, a single symmetric key is used for all nodes in that network.

Just some personal favorite hilarity:
  • "GMT Offset: The GMT Offset in 15 minute increments. Signed.
    Valid values are -128 (0x80), corresponding to GMT-32hours to +127 (0x7F), corresponding to GMT+ 31hours" - my god must save some bytes!?!  It seems like they used some junk 1980s hardware.
  • It's an FCC Part 15 Class B device and *must not* cause any harmful interference and must *accept* any interference received.  Most forms of jamming are illegal in this country, but I wouldn't be surprised if an early model 900 MHz cordless phone left on continuously might disrupt the system.
  • RF Frequency Range: 902.2 MHz Min, 927.9 MHz Max
  • RF Baud Rate: 9.6 kbps Min, 115.2 kbps Max, Programmable
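The arithmetic behind that quoted GMT range is just a signed byte of 15-minute ticks; a quick decode to check it (my own sketch, not vendor code):

```python
import struct

def decode_gmt_offset(raw_byte):
    """Interpret one byte as a signed count of 15-minute increments,
    per the quoted spec, and return the offset in hours."""
    (ticks,) = struct.unpack("b", bytes([raw_byte]))  # signed 8-bit
    return ticks * 15 / 60

print(decode_gmt_offset(0x80))  # -128 ticks -> -32.0 hours
print(decode_gmt_offset(0x7F))  # +127 ticks -> +31.75 hours (the doc rounds to +31)
```

So a whole byte buys them a range of offsets no timezone on Earth will ever use, all to save three bytes over a plain minutes field.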

In my area, none of the data is encrypted and is broadcast plain-text over the air.  The configuration table in the FCC information is sufficient to pattern match and reduce that part of the data frame/packet to usable information without any reverse engineering.

Don't forget that the clocks are only accurate to +/- 15 minutes.  That is when they "check-in".  Oh, and part of the device is a permanent MAC, visible on the physical device.  Given the clock skew, I can only theorize what might happen if an unofficial signal is sent just before the official one.  The specifications seem to indicate that the first signal would win *and* clock synchronization would then deem that node the official source.

I have in no way attempted to splice into the hardware or otherwise intercept data communication over the AC lines.  I would think there would be no reason to broadcast something as critical as billing usage data plain-text over the air if they were using networking over the AC lines instead.  Of course, assuming competence or intent from a large corporation is a fool's errand.

I welcome any additional information in this blog or via email and am happy to supply GPG keys, if they are not readily found by interested respondents.

Happy hardware hacking!

2012-12-10

Data breaches and US law regarding the spoils

First, I would like to note that this post diverges from my original goals for this blog, but this is unfortunately an item that is necessary to explore, discuss, and understand.  I am not a lawyer, do not have legal training, and am giving my thoughts and opinions regarding US law on these matters.  This post entirely stems from https://twitter.com/jspilman/status/278284483990519808 and the linked article.

It is clear that reality and the law do not keep pace.  I am going to cover the items listed in the article as part of the indictment against Brown.  Please note that I am reading and referencing the linked indictment, but this is a scribd.com link and I have not verified the validity or accuracy of it.  In any case my intent is not to make direct comments on this case but to use this as sufficient material to discuss how US law applies.

The first important aspect is one of jurisdiction.  The indictment starts by establishing the jurisdiction on the basis of interstate commerce.  Discard all common sense when it comes to US law and interstate commerce.  There is case law for which the possibility of interstate commerce was sufficient to invoke federal jurisdiction, without the requirement that any intent or interstate commerce was shown.  Right or wrong, this barrier to start a case has such a low bar that it is nothing but a formality.

For the first count, it is unclear if his offense is literally copying a URL from one IRC channel to another.  We could theorize that the defendant downloaded it, torrented it, and so on, but for the law simply passing the URL around fits the requirement of making it available.  The law makes no remarks here on whether the defendant had a copy, sent copies, etc.

The second count addresses "intent to defraud", and intent is almost as laughable as establishing jurisdiction today.  Under some laws, establishing intent is a high and difficult bar, but in cases of unauthorized access devices, possession itself establishes intent to defraud under current case law.  Yes, this means that possession and intent are synonymous in this case.

Counts two through twelve simply allege the defendant "knowingly transferred and possessed without lawful authority..." some specific data.  Here the standard of intent is even more of an afterthought.  If the prosecution can prove that the defendant possessed the data, then this is essentially game over.  The defense would have to prove the defendant did not have the data, or did not know he had it, or did not know it was unlawfully obtained, etc.

My primary intent in this post is to put everyone in a guarded and vigilant position when it comes to data leaks and US law.  Most of these laws have no affirmative defense at all and ignorance is certainly not part of them.  Jurisdiction and intent are quaint ideas that are codified but irrelevant to successful defense efforts.

The question Jeremy Spilman originally asked, or at least alluded to, is whether these or similar laws can be used to go after people who have password lists.  The answer is absolutely yes.  Therefore, when it comes to free speech and research in the US, you had better be a full-time university student or professor, or you should expect zero legal protection.  The discussion of the first amendment, free speech, and research is well out of the scope of this post.

What practical advice should you take from this?  It is not a question of if password lists are next but when the indictments start, assuming they have not already.

2011-10-28

All your RSA token seeds are safe ;)

As many of you are aware, RSA, the company not the public key cipher, had some security issues.  In particular it related to an APT (Advanced Persistent Threat) - really it's called a Trojan and they are not new - and compromises of SecurID tokens.  Well, it seems that their initial statements that no token seeds were compromised, and all the various PR double-speak, were outright lies.  Many are already aware of this fact, but I was quite surprised to finally see a response from a company that I have such a token from.

For one of my banks I could purchase a SecurID token for somewhere between $20 - 30 USD for the device and then pay $5 USD/month for the privilege of using it.  I should note that this token is actually not capable of being part of my actual bank sign-on.  It is only part of vSafe, which stores things like all your statement history since you opened your account and any documents up to 1 GB that you wish to put there.  They claim it is encrypted - no details available - but that they will turn over all your data to law enforcement if necessary, so likely by encrypted they mean the SSL *to them* and not the actual data.

After the RSA SecurID breach I started sending emails, calling, and so on to demand they replace my now compromised token.  Most people that I encountered did not even know they offered this, and the general response was that it might be more secure to terminate my token and use the SMS 24-hour codes instead.  That's right: a code sent to you via SMS that is valid for 24 hours *or* your SecurID token that changes a bit more regularly than that.  Others helpfully offered to disconnect my token and then charge me $20 - 30 USD to have a new one shipped.  Of course maybe I would just get part of the bad batch again.

Much to my surprise I received a FedEx package with this letter and a new token.  I would like to highlight some things like "ongoing security process."  My old token has an expiration of 2013-12-31 and was issued 2010-05.  My new token has an expiration of 2016-12-31.  Really, their ongoing security process includes issuing new tokens less than 1.5 years later and some 2+ years in advance of the expiration?

I also like the fact that the document ends with SSA_L_AllTokenReplacement which seems like some sort of internal document name.  I wonder why there would be a need for AllTokenReplacement?

For those not yet aware of devices such as the Yubikey, I suggest you look into them.  Between cheaper token costs, the ability to use their auth server or run your own with Kerberos, etc., they are much more cost effective than the RSA tokens, and all details are publicly available.
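That public availability is the point: hardware tokens can speak open standards such as HOTP (RFC 4226), which the Yubikey supports in its OATH mode, and the whole algorithm is small enough to audit yourself rather than trusting a vendor's secret sauce.  A minimal implementation against the RFC's published test vectors:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP per RFC 4226: HMAC-SHA1 over the 8-byte counter,
    then dynamic truncation down to a short decimal code."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # low nibble picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vectors: ASCII secret "12345678901234567890"
secret = b"12345678901234567890"
print(hotp(secret, 0))  # "755224"
print(hotp(secret, 1))  # "287082"
```

Fifteen lines you can verify yourself versus a seed file a vendor swears was never copied: that is the trade on offer.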

2011-06-14

GRC haystack 2 - rainbow tables

I did not intend to spend so much time on this topic, but @itinsecurity had the time to listen to the 37-minute podcast, and the claim is that this padding scheme is immune to rainbow tables.  I intentionally did not do rainbow table analysis because I already had a long post.  Also, to even enter this discussion we have to assume the application in question does not store passwords plaintext and instead hashes them.  Additionally, we assume the attacker already has all the hashes.

Let's start with
password: Summer
[A-Za-z][a-z]{5}
52^1 * 26^5 which is roughly 2^29.203

Let's pick some potential padding characters: [., ].  How about these character sets:
[A-Za-z][a-z]{5}[a-z., ][., ]{1,4}
52^1 * 26^5 * 29^1 * 120 which is roughly 2^40.968
length 8 to 11

[A-Za-z][a-z]{5}[a-z., ][., ]{1,8}
52^1 * 26^5 * 29^1 * 9840 which is roughly 2^47.325
length 8 to 15


Perhaps users will cease to append years to the end of the password and instead add a padding character every password change?  Alright, let's go for larger character sets:
[A-Za-z][a-z]{5}[a-z.,<->\*\[\] ][.,<->\*\[\] ]{1,4}
52^1 * 26^5 * 35^1 * 7380 which is roughly 2^47.181
length 8 to 11

Does this mean it is rainbow table proof?  Hardly.  It just means traditional tables may not be useful, but plenty of options already exist to defeat them: length, unicode, etc.  Sure, some users may put padding at the beginning, at the end, make it really long, etc., but this just gets silly.  Using 4 randomly selected words from diceware's list gives you 7776^4 or 2^51.699, and that's purely lower case, no padding, no symbols, etc.  The average length of a word is 4.2 characters.
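The exponents above are easy to sanity-check in a few lines (this just reproduces the arithmetic, nothing more):

```python
from math import log2

def bits(keyspace: int) -> float:
    """Keyspace size expressed as bits of entropy, rounded to 3 places."""
    return round(log2(keyspace), 3)

# [A-Za-z][a-z]{5}, e.g. "Summer"
print(bits(52 * 26**5))                    # 29.203

# ...with [a-z., ] then 1-4 padding chars drawn from [., ]
pad = sum(3**k for k in range(1, 5))       # 3 + 9 + 27 + 81 = 120 combinations
print(bits(52 * 26**5 * 29 * pad))         # 40.968

# four random Diceware words, lower case only, no padding
print(bits(7776**4))                       # 51.699
```

Twelve extra bits from padding versus twenty-two from just picking four random words: the arithmetic makes the comparison for you.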