When previously undisclosed vulnerabilities were announced in Drupal, the web content management system many large companies use to run their sites, hackers were busy exploiting those weaknesses within hours. This incident highlights the bind that security people and system administrators increasingly find themselves in – we need to patch critical vulnerabilities quickly to protect our systems from compromise, but rolling patches out without proper testing can also lead to downtime (witness Microsoft’s recent run of faulty security patches). Knowing how to mitigate vulnerabilities while patches are tested and rolled out is a skill we need to cultivate as security pros.
SANS recently published the latest edition of its “OUCH!” security newsletter for end users – this month’s topic is “Yes – You Actually ARE a Target!” – something we usually have to remind users about on a regular basis, despite the constant coverage of hacks, data breaches and other cyber shenanigans afoot these days.
OUCH is a good (and free) resource to augment your organization’s Security Awareness efforts.
Another attack on the iPhone 5s Touch ID sensor… a German security firm claims to be able to use an iPhone 4s camera to capture a fingerprint image and then turn that image into a fake finger mold. It still takes a bit of effort, but one barrier to entry (a high-resolution camera) has been removed.
In addition, the same company claims to have defeated the Activation Lock feature which cripples lost/stolen phones by:
Getting a good photo of the target’s fingerprint
Making a fake finger mold
Putting the device into airplane mode
Going to another computer and requesting a password reset on the target’s Apple ID
Unlocking the phone with the fake fingerprint
Turning airplane mode off just long enough to receive the password reset email and resetting the password on the account.
Once this is done, the attacker can unlock the phone. The key to this attack is getting the phone into airplane mode, which can be done from the Lock Screen if Siri and/or Control Center are enabled there. I would again recommend that 5s users turn off access to Siri and Control Center from the Lock Screen.
The same webpage includes a video showing the fake fingerprint technique used successfully on another phone as well as on a Lenovo laptop.
It is starting to look like fingerprint-based authentication on corporate/consumer devices is still a work in progress, and CISOs in organizations with BYOD policies need to do a risk analysis to determine whether the convenience of fingerprint authentication is outweighed by the potential risks. This is not a “one size fits all” calculation and really depends on the profile of your attackers. For some organizations, this is easy – I would hope that a defense contractor targeted by nation states would not use fingerprint authentication. For small businesses or consumers who are mostly concerned with device loss and non-targeted theft, fingerprints may be good enough (especially if devices were not protected with passcodes in the past). Unfortunately, most businesses fall somewhere in the middle of these two cases.
PS – One small positive item I left out from my previous posts on this topic… if you power off your 5s altogether or have not authenticated to the phone for 48 hours, you will be required to enter your passcode to access the phone.
A cautionary tale of cloud computing… apparently, a Google Groups group set up by the Japanese Ministry of the Environment to (internally) share documents and messages regarding negotiations over an international treaty was misconfigured, leaving the information therein world-readable. Cloud computing is here to stay, folks, and governments, companies and other organizations (and their security folks) need to figure out ways to keep confidential data either out of the cloud or, better yet, safe in the cloud. IMHO, we need cloud providers to come up with creative ways to let organizations encrypt particularly sensitive data with keys controlled by the data owner.
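To make the owner-controlled-key idea concrete: the data owner encrypts before uploading and keeps the key, so the cloud provider only ever stores ciphertext. The toy Python sketch below uses a random one-time pad purely to illustrate the pattern – a real deployment would use an authenticated cipher (e.g. AES-GCM) from a vetted crypto library, plus proper key management.

```python
import os

def encrypt(plaintext: bytes) -> tuple:
    """Encrypt with a freshly generated one-time pad.

    Returns (key, ciphertext). Only the ciphertext goes to the cloud;
    the key never leaves the data owner's hands.
    """
    key = os.urandom(len(plaintext))  # pad must be as long as the message
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """Recover the plaintext using the owner-held key."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

# The owner encrypts locally, uploads only the ciphertext, keeps the key.
key, stored_in_cloud = encrypt(b"draft treaty negotiation notes")
assert decrypt(key, stored_in_cloud) == b"draft treaty negotiation notes"
```

The point is architectural, not cryptographic: a misconfigured sharing setting on the provider’s side exposes only unreadable ciphertext if the keys stay with the data owner.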
Some spear phishing wisdom from Security BSides SFO today…
Rohyt Belani of PhishMe told an interesting story highlighting just how much research attackers do when choosing their targets and crafting spear phishing payloads. In an attack on an energy company, employees received an email appearing to be from the company’s HR department offering information on discounted health care premiums for employees with more than 3 children. The only employees to receive the message? The two people at the company with 4 or more children.
This raises two issues for InfoSec professionals…
First, the attackers are doing their homework, people. They are taking the time to craft their social engineering payloads to hit very specific individuals. This means (IMHO) that they are extremely motivated – most probably by money or ideology.
Second, our coworkers are helping the attackers with their targeting by sharing all sorts of personal information via social networking platforms. We need to educate them about:
+ The fact that their social media profiles are visible not only to friends and family, but also bad guys who will use that information to craft their attacks. The “familiarity cues” which we tend to use to determine whether a message or request is from a friend or a foe just don’t work anymore.
+ Their ability to control who sees their social networking information by using the privacy features offered by Facebook, LinkedIn, and to a lesser extent, Twitter. They need to think about what they are posting and who will see it – not only to protect the company, but to protect the privacy of themselves and their families.
While we put all sorts of technical solutions in place to protect our systems and information from malware, our users are the front line defense against the most serious threats we face. Educating them to be aware of how their actions both inside and outside the office affect the organization’s security is one of the most important tasks we face as InfoSec professionals.
It has been a pretty bad few weeks for Oracle’s Java language – zero day vulns, followed by an out of band patch, with another serving of zero days to top things off. “Uninstall Java – it is dangerous at any speed!” was the message from some security experts.
The things that make Java attractive to web app developers (its cross-platform compatibility and nearly ubiquitous distribution) are the same things that make it such an attractive target for malware authors. Add to that a seemingly endless supply of critical security vulnerabilities, and you have a recipe for big trouble.
I have pretty much had it up to here (my hand is at neck level) with Java as a web plugin and would love to just uninstall the whole bug-infested mess from my users’ computers at the office. (Of course, I could say the same thing about Flash.) However, some pretty critical parts of our business rely on Java web apps to bring in revenue (some of which goes to pay my salary – nuff said). So, I had to get a bit clever in coming up with a defensive strategy.
After looking at my web proxy logs, I determined that Java usage at my firm pretty much fell into two buckets: a small number of business related apps from trusted business partners and a whole bunch of totally non business related apps accessed during recreational surfing. This made my job pretty easy… I figured out where the business apps came from and created a whitelist. Then I set the web filter to block all .jar and .class file downloads from other locations. In the two or so weeks that this policy has been in place, I have gotten exactly one request to whitelist a new jar file. The result? A much reduced attack surface for the company. My users seem to be OK with the new policies, which I explained in an email blast.
Yes, we will continue to update our Java Runtime Environments – after all, there could be some locally installed software which needs the JRE and using the latest and greatest versions is just good practice. And we’ll continue to implement other good practices (getting rid of unused software, keeping an eye on our log files and network traffic, keeping patches and fixes up to date and the like).
While I can’t say that we are totally protected from Java based attacks, I do feel that we have struck a pretty good balance between security and the need to let the business do business on this one.
It seems that the National Labor Relations Board (NLRB) is continuing to extend its push into the regulation of social media in non-unionized workplaces. According to this Morgan Lewis LawFlash, two recent cases (which may end up in the appellate courts) continue the Board’s assault on workplace social media confidentiality policies.
In the first case, involving Costco, the NLRB found that a whole section of the firm’s social media policy prohibiting the posting of confidential information to social media platforms was rendered invalid because it included a ban on posting “payroll information,” which the NLRB felt pertains to activity protected under Section 8(a)(1) of the National Labor Relations Act.
The second case, involving an auto dealer named Knauz, struck down the employer’s social media policy based on the following language:
[c]ourtesy is the responsibility of every employee. Everyone is expected to be courteous, polite and friendly to our customers, vendors and suppliers, as well as to their fellow employees. No one should be disrespectful or use profanity or any other language which injures the image or reputation of the Dealership.
The Board felt that the language would discourage employees from using social media for activities protected under Section 7 of the National Labor Relations Act, such as organizing a union or discussing working conditions.
The lesson? Make sure that your company’s Social Media policy passes muster with your legal team – and make sure your legal team knows about what the NLRB has been up to in this area. Social media has the potential to be an exfiltration vector for your organization’s confidential information; you don’t want to end up with a policy which is thrown out when you need it most.
For the past few years, the Social Engineering Capture the Flag contest has been a highlight of the Defcon security conference. The report from the 2012 edition of the contest provides some interesting insight into the social engineering threat and what companies need to do to protect themselves.
The targets of this year’s contest were 10 firms in the retail, oil, freight, telecom and technology industries. The oil industry got the highest marks for keeping their information secret, which makes sense to me. Their employees probably have a lot less interaction with the public on a day to day basis, so unusual requests for information would probably stand out from the norm. Retail giants Walmart and Target brought up the rear, giving up the most information.
The theme of this year’s contest was “Battle of the SExes,” pitting male social engineers against their female counterparts. While the male contestants scored higher than the female ones, the small sample size (10 men and 10 women) and the fact that female participants were few and far between in prior years of the contest make me wonder whether these results are indicative of a trend.
The contest participants were given two weeks to perform “open source intelligence” (the gathering of information about their targets from public sources on the Internet). A number of the companies targeted provided attackers with lots of information during this phase. Some of the more noteworthy information leaks resulted from photos posted on social media, which yielded pictures of employee ID badges and layouts of facilities – either of which could help an attacker gain physical access to their targets. Other information gathered from social media included ESSIDs of wireless networks and location check-ins by employees.
The real fun began when contestants got on the phone. A number of pretexts were used to explain the callers’ requests for information. The trickiest pretext was that the caller was an employee of the targeted organization. Knowing the right jargon and using widely available caller ID spoofing services bolstered these callers in some cases, but maintaining a believable cover story here was difficult. Callers who purported to be “taking a survey” or calling from a vendor did not do too well, since many employees find these types of calls annoying and thus routinely terminate such calls quickly. One more successful pretext was that the caller was a student doing research on the targeted company for a school assignment.
The conclusions in the report were what you would expect:
- Employees need to be better educated against social engineering threats (true, in spite of the report writer’s business in performing such training and social engineering tests).
- Employers need to tighten their social media policies to control the leakage of confidential information to the Internet.
The second finding, while it sounds great, is potentially problematic for US companies. As I have noted in previous posts, US law does not allow companies to place many restrictions on employees’ personal social media accounts, even restrictions that make sense from a corporate security perspective. The regulations are aimed at preventing employers from quashing employees’ rights to discuss their work environment and organize unions, but they have the side effect of making it very difficult to write social media policies which both protect the organization and stand up to legal scrutiny. If you haven’t reviewed your social media policies in a while, now is a good time to do so – and include your legal counsel.
The restrictions on social media restrictions make the need for employee education all the more important. The social engineers are out there and they are gunning for your company’s crown jewels. Taking the time to strengthen your Human Firewall is a worthwhile investment.
OK – what are you more afraid of – sharks or cows? Well, according to the folks at Popular Mechanics (via the blog Boing Boing), it is the crazed bovine death machines that are the real threat:
Between 2003 and 2008, 108 people died from cattle-induced injuries across the United States, according to the Centers for Disease Control and Prevention. That’s 27 times the whopping four people killed in shark attacks in the United States during the same time period, according to the International Shark Attack File.
It seems to me that information security risks are a lot like sharks and cows. We infosec professionals love to talk about, hunt and defend against sharks, like zero-day vulnerabilities, state sponsored cyber-weapons, and other exotic threats. However, it is the cows of the infosec world, like unpatched software, misconfigured systems and devices, human errors, and users falling for malware laden links or emails, that are much more likely to result in a system compromise.
When making decisions about where to put our limited infosec funds and resources, we need to decide whether the threat we are defending against is a shark or a cow. Let’s take care of the cows first – before they take care of us. Then we can have some fun and hunt the sharks!
A while back, I wrote about how US organizations writing social media policies need to beware of the National Labor Relations Board’s requirements that these policies not interfere with the rights of employees to discuss their working conditions or organize unions. At the time of my original post, the NLRB had released a guidance document which raised more questions than it answered. Since then, they have released additional guidance which includes a number of examples of bad policies and explains the specific problems with each. More importantly, it includes a sample policy which is in compliance with NLRB rules and which can be used as a guide in writing (or updating) your company’s social media policy. It is really worth taking a look at this document – many things that any normal, reasonable infosec professional would expect to be acceptable (e.g., “don’t post confidential information to social media sites”) are not.