Some spear phishing wisdom from Security BSides SFO today…
Rohyt Belani of PhishMe told an interesting story highlighting just how much research attackers do when choosing their targets and crafting spear phishing payloads. In an attack on an energy company, employees received an email appearing to be from the company’s HR department offering information on discounted health care premiums for employees with more than 3 children. The only employees to receive the message? The two people at the company with 4 or more children.
This raises two issues for InfoSec professionals…
First, the attackers are doing their homework, people. They are taking the time to craft their social engineering payloads for very specific targets. This means (IMHO) that they are extremely motivated – most probably by money or ideology.
Second, our coworkers are helping the attackers with their targeting by sharing all sorts of personal information via social networking platforms. We need to educate them about:
+ The fact that their social media profiles are visible not only to friends and family, but also bad guys who will use that information to craft their attacks. The “familiarity cues” which we tend to use to determine whether a message or request is from a friend or a foe just don’t work anymore.
+ Their ability to control who sees their social networking information by using the privacy features offered by Facebook, LinkedIn, and to a lesser extent, Twitter. They need to think about what they are posting and who will see it – not only to protect the company, but to protect the privacy of themselves and their families.
While we put all sorts of technical solutions in place to protect our systems and information from malware, our users are the front line defense against the most serious threats we face. Educating them to be aware of how their actions both inside and outside the office affect the organization’s security is one of the most important tasks we face as InfoSec professionals.
It has been a pretty bad few weeks for Oracle’s Java language – zero day vulns, followed by an out of band patch, with another serving of zero days to top things off. “Uninstall Java – it is dangerous at any speed!” was the message from some security experts.
The things that make Java attractive to web app developers (its cross-platform compatibility and pretty ubiquitous distribution) are the same things that make it such an attractive target for malware authors. Add to that a seemingly endless supply of critical security vulnerabilities, and you have a recipe for big trouble.
I have pretty much had it up to here (my hand is at neck level) with Java as a web plugin and would love to just uninstall the whole bug infested mess from my users’ computers at the office. (Of course, I could say the same thing about Flash.) However, some pretty critical parts of our business rely on Java web apps to bring in revenue (some of which goes to pay my salary – nuff said). So, I had to get a bit clever in coming up with a defensive strategy.
After looking at my web proxy logs, I determined that Java usage at my firm pretty much fell into two buckets: a small number of business related apps from trusted business partners and a whole bunch of totally non business related apps accessed during recreational surfing. This made my job pretty easy… I figured out where the business apps came from and created a whitelist. Then I set the web filter to block all .jar and .class file downloads from other locations. In the two or so weeks that this policy has been in place, I have gotten exactly one request to whitelist a new jar file. The result? A much reduced attack surface for the company. My users seem to be OK with the new policies, which I explained in an email blast.
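The whitelist-plus-extension-block policy boils down to a simple decision: allow Java artifacts only from trusted hosts, allow everything else through. Here is a minimal sketch of that logic in Python – it illustrates the policy, not the configuration syntax of any particular proxy or web filter, and the hostnames are hypothetical:

```python
from urllib.parse import urlparse

# Hosts serving our business-critical Java apps (hypothetical examples).
JAVA_WHITELIST = {"apps.trustedpartner.example", "portal.vendor.example"}

# File extensions associated with Java applet downloads.
JAVA_EXTENSIONS = (".jar", ".class")

def allow_download(url: str) -> bool:
    """Return True if the web filter should allow this download."""
    parsed = urlparse(url)
    path = parsed.path.lower()
    # Only Java artifacts are subject to the whitelist check;
    # everything else passes through to the filter's normal rules.
    if not path.endswith(JAVA_EXTENSIONS):
        return True
    return parsed.hostname in JAVA_WHITELIST
```

The nice property of this approach is that the default is deny: a brand-new Java app from an unknown host is blocked until someone asks for it to be whitelisted.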
Yes, we will continue to update our Java Runtime Environments – after all, there could be some locally installed software which needs the JRE and using the latest and greatest versions is just good practice. And we’ll continue to implement other good practices (getting rid of unused software, keeping an eye on our log files and network traffic, keeping patches and fixes up to date and the like).
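If you want to verify that machines are actually running a current JRE, the version string printed by `java -version` can be checked programmatically. A minimal sketch, assuming the Java 6/7-era version string format – the minimum version shown is just an example, not a recommendation:

```python
import re

def parse_java_version(version_output: str) -> tuple:
    """Extract a comparable version tuple from `java -version` output,
    e.g. 'java version "1.7.0_11"' -> (1, 7, 0, 11)."""
    m = re.search(r'version "(\d+)\.(\d+)\.(\d+)(?:_(\d+))?"', version_output)
    if not m:
        raise ValueError("could not find a Java version string")
    # Missing update numbers (e.g. "1.6.0") are treated as update 0.
    return tuple(int(part or 0) for part in m.groups())

def is_current(version_output: str, minimum=(1, 7, 0, 11)) -> bool:
    """True if the installed JRE is at or above the required minimum."""
    return parse_java_version(version_output) >= minimum
```

Fed the captured output of `java -version` from each machine, this gives you a quick inventory of who is lagging behind on updates.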
While I can’t say that we are totally protected from Java based attacks, I do feel that we have struck a pretty good balance between security and the need to let the business do business on this one.
We don’t give too much thought to our VOIP phones – they look like regular old landline phones and seem pretty innocuous sitting on our desks. However, a presentation from the recent 29th Chaos Communications Congress held last week in Berlin should be a wakeup call for security professionals. Two Columbia University researchers demonstrated how they used vulnerabilities in the operating system of Cisco’s VOIP phones to take control of the handsets and turn them into eavesdropping devices capable of picking up conversations in their vicinity and relaying them to a remote attacker. As a bonus, they showed how to make their hack a permanent part of the phone, able to survive patches and upgrades. Definitely worth viewing for security professionals.
What to do about it? Well, when Cisco releases a working patch for this problem, I would definitely suggest upgrading all affected phones’ firmware. I would also give some thought to how your VOIP VLAN is protected and whether having unattended feature phones in public parts of your site is a good idea.
For the past few years, the Social Engineering Capture the Flag contest has been a highlight of the Defcon security conference. The report from the 2012 edition of the contest provides some interesting insight into the social engineering threat and what companies need to do to protect themselves.
The targets of this year’s contest were 10 firms in the retail, oil, freight, telecom and technology industries. The oil industry got the highest marks for keeping their information secret, which makes sense to me. Their employees probably have a lot less interaction with the public on a day to day basis, so unusual requests for information would probably stand out from the norm. Retail giants Walmart and Target brought up the rear, giving up the most information.
The theme of this year’s contest was “Battle of the SExes,” pitting male social engineers against their female counterparts. While the male contestants scored higher, the small sample size (10 men and 10 women) and the fact that female participants in prior years of the contest were few and far between make me wonder whether these results are indicative of a trend.
The contest participants were given two weeks to perform “open source intelligence” (the gathering of information about their targets from public sources on the Internet). A number of the companies targeted provided attackers with lots of information during this phase. Some of the more noteworthy information leaks resulted from photos posted on social media, which yielded pictures of employee ID badges and layouts of facilities – either of which could help an attacker gain physical access to their targets. Other information gathered from social media included ESSIDs of wireless networks and location check-ins by employees.
The real fun began when contestants got on the phone. A number of pretexts were used to explain the callers’ requests for information. The trickiest pretext was that the caller was an employee of the targeted organization. Knowing the right jargon and using widely available caller ID spoofing services bolstered these callers in some cases, but maintaining a believable cover story here was difficult. Callers who purported to be “taking a survey” or calling from a vendor did not do too well, since many employees find these types of calls annoying and thus routinely terminate such calls quickly. One more successful pretext was that the caller was a student doing research on the targeted company for a school assignment.
The conclusions in the report were what you would expect:
- Employees need to be better educated about social engineering threats (true, even allowing for the fact that the report’s authors are in the business of providing such training and social engineering tests).
- Employers need to tighten their social media policies to control the leakage of confidential information to the Internet.
The second finding, while it sounds great, is potentially problematic for US companies. As I have noted in previous posts, US law does not allow companies to place many restrictions on employees’ personal social media accounts, even restrictions which make sense from a corporate security perspective. The regulations are aimed at preventing employers from quashing employees’ rights to discuss their work environment and organize unions, but they have the side effect of making it very difficult to write social media policies which both protect the organization and stand up to legal scrutiny. If you haven’t reviewed your social media policies in a while, now is a good time to do so – and include your legal counsel.
The restrictions on social media restrictions make the need for employee education all the more important. The social engineers are out there and they are gunning for your company’s crown jewels. Taking the time to strengthen your Human Firewall is a worthwhile investment.
Earlier this week, an Australian firm providing billing and support services to web hosting firms found that their web site had been destroyed, their Twitter account hacked, and 1.7GB of data (including customer information, hashed passwords, and credit card numbers) posted to the Internets for the world to see.
You’d think that the hackers who went on this rampage must have been really clever and exploited some arcane vulnerability to gain access to all of this valuable data. Or maybe they used some uber-slick piece of malware to get the information. You’d be wrong.
What appears to have happened is that the attackers were able to figure out the answers to the “security questions” for the company’s lead developer and use this information to con the webhost running the company’s web site into providing them with the administration password. It appears that the admin password was also the corporate Twitter account password. Doh!
Lessons we can learn from this:
- Security questions suck as an authentication mechanism. Think about the last few times you had to establish security questions – how easy would it be to guess your answers by looking at your Facebook, LinkedIn, or Twitter accounts? If the information is not there, a quick browse through people search web sites may yield it.
- Using the same password for multiple sites is a bad idea. It appears that the same password was used for both the victim company’s server administration and corporate Twitter account.
What you can do to protect yourself and your company:
- Build yourself a legend. Come up with a set of (false) security question answers which cannot be guessed by attackers. For example, your first car could be a “1931 Bugatti Royale Kellner Coupe,” your first school could be “Harvard,” and the town you grew up in could be “Peoria” (or if you are really good, one of these places). Above all, don’t use answers that can be found on your social media profiles or by Googling yourself.
- Don’t use the same password for multiple sites. You don’t want the compromise of one password to lead to an attacker getting access to all of your stuff. Use a password manager like LastPass or Keepass to easily and securely save your (per site) passwords as well as the fake answers to your security questions.
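Both recommendations come down to the same thing: replace guessable, reused secrets with random ones and let the password manager do the remembering. A minimal sketch of the idea in Python – the word list is purely illustrative (a real one would be much larger), and the outputs would go straight into your password manager:

```python
import secrets

# A tiny word list for building memorable-but-random answers.
# Purely illustrative; in practice use a list of thousands of words.
WORDS = ["bugatti", "peoria", "harvard", "kellner", "royale",
         "coupe", "garnet", "falcon", "meadow", "quartz"]

def fake_answer(num_words: int = 3) -> str:
    """Generate a random security-question answer that has no
    connection to your real biography or social media profiles."""
    return "-".join(secrets.choice(WORDS) for _ in range(num_words))

def site_password(length: int = 20) -> str:
    """Generate a unique, high-entropy password for a single site."""
    return secrets.token_urlsafe(length)
```

The point is not the specific code – it is that neither output can be recovered by reading your Facebook profile or Googling your name.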
If you are an information security professional at a publicly traded company, I would strongly suggest reading a recent blog post by Richard Bejtlich about the SEC’s requirements for the disclosure of cybersecurity breaches. Bejtlich points out that the ramifications of these requirements go well past getting into hot water with the regulators – they also raise other risks, such as whistleblowing by employees or third parties as well as the potential for shareholder lawsuits when companies do not take the proper steps to secure information (or are perceived as not doing so). Having a conversation about this issue with your General Counsel before an incident occurs makes a lot of sense. All that being said, kudos to the SEC for recognizing the role of cybersecurity in good corporate governance.
This week, the Ninth Circuit US Court of Appeals ruled on a case which has an important impact on us information security types: US vs. Nosal.
Nosal was employed by recruiting firm Korn/Ferry. He left the firm to start his own, competing firm. After he left, he persuaded some of his Korn/Ferry colleagues to access confidential information owned by K/F and provide it to him. The K/F employees had access to the information as part of their work for the company, but were violating company policy by providing confidential information to a third party. When Korn/Ferry discovered the theft of information, they initiated legal proceedings against Nosal. In addition to suing him for civil damages, they filed a criminal complaint stating that he had “aided and abetted” the Korn/Ferry employees in violating the Computer Fraud and Abuse Act of 1986 by encouraging them to “exceed their authorized access to” Korn/Ferry computers.
Let’s stop here for a moment… what Nosal and the Korn/Ferry employees are alleged to have done was clearly wrong, and Korn/Ferry would be entitled to fire the employees and recover civil damages from the whole lot of them (IMHO). The question here is whether Nosal or the employees committed a federal crime which could lead them to jail time.
The Appeals Court did not agree with Korn/Ferry (and the federal prosecutors on the case). In its opinion, the court pointed out that the K/F employees were allowed to access the data in the course of their work, and thus did not “exceed their authorization”; when they passed on the information to Nosal, they were in breach of their (civil) responsibilities to their employer. The court went further and said that interpreting the CFAA in the broad way advocated by Korn/Ferry and the prosecutors would make many very common behaviors federal crimes.
In particular, the court felt that the wider interpretation would make violations of corporate computer use policies and terms of service for Internet services criminal acts. For example, an employee who spent time shopping, playing games, or reading the sports pages online at a company whose computer usage policy limits corporate systems to business use could find themselves in the “big house.” Now, even as a corporate security professional, I think that this goes too far; corporate policy violations should lead to disciplinary action and/or termination of employment, but prison time seems just a wee bit excessive to me.
The court also pointed out that criminalizing such a wide range of common behaviors would lead to a situation where the law would be applied inconsistently and arbitrarily.
There was a dissenting opinion, which contended that the ultimate use of the data (theft and providing it to a competitor) was in and of itself “exceeding authorized access.” The dissenting judge used the example of a bank teller’s access to their employer’s cash. The teller is authorized to access the cash in the course of doing their job, but would exceed that authorization by taking the cash for their own use. I am not convinced by this argument, as the taking of the cash is a separate act which is criminal in and of itself.
In any case, this court has said that federal criminal law is not meant to help companies enforce their computer usage policies and that violation of those policies is a civil matter between employer and employee. This seems like a reasonable decision to me.
The court’s decision is worth a read – it was refreshing to read a decision which shows awareness of how the Internet is used in real life.
So, remember a few weeks back, when the tech press got really silly, warning us that hackers could set our HP printers on fire remotely? Well, it turns out that there was a security story about HP printers, but the press really missed the boat on what was actually important. At the 28th Chaos Communications Congress (held in Berlin last week), the Columbia University researchers whose findings were totally misconstrued by the press presented their work. No, hackers cannot set your printer on fire – but they can install malware on hundreds of millions of HP printers shipped since 2005, either by connecting to the printer and replacing its normal firmware with evil firmware or by getting one of your users to print out a specially crafted document which carries their nefarious code. Once this hack is done, your printer will become a silent (but deadly) bridgehead into your network.
UPDATE: Here’s a list of all of the printers affected by this vulnerability.
The researchers had two demos. In the first, they caused the infected printer to silently send a copy of every document it printed to an attacker’s printer out on the Internet. Demo two had the infected printer scanning for internal systems vulnerable to a Windows XP exploit and then acting as a relay for the attacker to control them from outside the firewall. This was pretty scary stuff… let’s say I send a crafted document purporting to contain a 50% off coupon for a local restaurant to your users… how many times (and on how many printers) would this get printed?
This hack is made possible by the fact that some HP printers allow their firmware to be updated without any authentication or digital signature and that all of the code within the printer runs as a super user. It also points out the need for anti-malware protections for embedded devices like printers, routers and the like. The guys at Columbia are working on a project to do this.
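Since the infection path is a print job carrying a firmware update, one stopgap is to inspect spooled jobs for firmware-update commands before they reach the printer. A rough sketch of the idea – the PJL marker strings here are assumptions for illustration, so check HP’s PJL documentation for the commands your models actually honor:

```python
# Markers that may indicate a remote firmware update (RFU) embedded in
# a print job. These specific PJL strings are illustrative assumptions,
# not a verified list of HP's update commands.
SUSPICIOUS_MARKERS = (b"@PJL UPGRADE", b"@PJL RFU")

def looks_like_firmware_update(job_data: bytes) -> bool:
    """Flag print jobs that appear to carry a firmware update payload."""
    upper = job_data.upper()
    return any(marker in upper for marker in SUSPICIOUS_MARKERS)
```

A check like this could run on the print server, quarantining flagged jobs for review – crude, but better than letting any user-submitted document rewrite the printer’s firmware.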
As an aside, these same researchers scanned the Internet for accessible HP printers – they found over 75,000 of them, located at private companies, governments, educational institutions and in other places. Infecting just a small percentage of these systems would provide someone with a very stealthy botnet that would be extremely difficult to remove. The researchers feel that it may be possible for the attackers to install their code permanently, so that the only ways to get rid of the infection would be replacing (soldered-on, surface mount) hardware components or trashing the printer altogether.
So… what to do?
First, update your HP printers to the latest (December 2011 or later) firmware version, which can be found over on the HP support website. The new firmware requires that subsequent firmware updates be digitally signed by HP.
Next, make sure that your printers cannot be accessed from the Internet. For most of my readers, I don’t think this will be an issue, but you never know… scan your Internet facing IPs for port 9100, which is used to submit print jobs and firmware updates to HP printers.
Third, limit where your printers can send traffic… is there any good reason to allow a printer outbound access to the Internet? Not that I can think of. Putting printers on an isolated VLAN which can ONLY talk to the print server limits the damage that can be done using this attack. Of course, you really need to make sure that your print servers are patched and properly isolated as well – and when was the last time you took a look at your print servers?
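The VLAN isolation rule boils down to a very simple policy: traffic sourced from the printer VLAN may only go to the print server. A sketch of that check, with hypothetical addresses – your firewall or router ACLs would express the same thing in their own syntax:

```python
import ipaddress

# Hypothetical addressing: printers live on an isolated VLAN and may
# only talk to the print server.
PRINTER_VLAN = ipaddress.ip_network("10.20.30.0/24")
PRINT_SERVER = ipaddress.ip_address("10.0.0.5")

def flow_allowed(src: str, dst: str) -> bool:
    """Allow printer-originated traffic only toward the print server."""
    src_ip = ipaddress.ip_address(src)
    dst_ip = ipaddress.ip_address(dst)
    if src_ip in PRINTER_VLAN:
        return dst_ip == PRINT_SERVER
    return True  # non-printer traffic is governed by other rules
```

With a rule like this in place, even a fully compromised printer has exactly one place it can send stolen documents – a server you control and monitor.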
We’ve all got some work to do, people. More importantly, we need to look at embedded systems like printers, routers, access points, and the like in a new way – as potential malware targets with the computing power to take down our networks and no antivirus protection. I can just about guarantee that the bad guys will be researching this in 2012 – it is just too juicy a target to ignore.
If you are a security pro or are responsible for printers in your organization, I’d recommend spending an hour watching the video of this presentation to get the full story.
Happy New Year, all.
OK… this story is a bit older than that movie… but it is even cooler – hacking 1903 style for the lulz!