May 14

What's the worst that could happen?

Spear phishing has been in the news quite a bit lately – it seems like just about all of the recent high-profile hacks began with someone clicking on a link or opening a document.  Here’s a data point that illustrates the innate sense of trust that leads people to do really stupid things. According to an entry in Bruce Schneier’s blog, police in Istanbul, dressed up as doctors and knocking on doors unannounced, were able to persuade 86% of subjects to take a pill.  And this was after a rash of crimes in which impostors did the same thing, using powerful sedatives to disable victims and ransack their homes.  My belief that knowledge of human psychology is the most powerful hacking tool remains strong.  Or maybe there is something in the water in Istanbul…


They Might Be Giants – Istanbul (Not Constantinople) from They Might Be Giants on Vimeo.

Sep 06

Last week, an experiment conducted by Duke University and the RIPE Network Coordination Centre (RIPE NCC) got a little bit out of hand, interrupting Internet traffic in 60 countries worldwide.  In all, about one percent of Internet traffic was affected by the test gone awry.  One percent of Internet traffic does not sound like a lot – most of that traffic was probably illegal file sharing, lolcats and porn – but what if your Internet-based business was affected?  My employer (who shall remain nameless and whose opinions this post does not reflect) is an Internet-based business in which the value of each (time-sensitive) transaction is probably thousands of times the average for the rest of the net.  We were not affected by the testers’ little oopsie, but had we been, the potential losses would have been significant.  I am sure my company is not the only one in such a situation.

Yes, Cisco did fix the bug which caused this particular outage, but I think that this incident points out some questions that really need to be answered:

Should researchers be conducting experiments on the Internet with potential for widespread negative impact on a shared business resource? If someone ran this type of potentially disruptive testing on my company’s network during business hours, I’d be looking for them to be fired, sued, arrested and forced to listen to this album for the rest of their lives.  Researchers need to realize that the Internet is the planet’s “production network” with no “maintenance window” and that the same best practices we follow in the enterprise (separate test environment, for example) need to be followed when tinkering with its innards.

Had someone experienced significant financial losses due to this experiment, what would their recourse be? No one expects the Internet to be free of glitches and outages, but in this case, a conscious decision was made to do something which could reasonably be expected to cause problems.  Could there be lawsuits here?  Are the researchers exposing their organizations to potentially ginormous liability?  If the damaged party was in, say, Asia, who would have jurisdiction over the case, and where would it be tried?

In an era where cyberspace is increasingly recognized as a “battlespace,” could an experiment such as this (on a larger scale) be mistaken for a cyber attack and possibly lead to real world hostilities?

Researchers and governments should take this opportunity to stop and think about the “rules of the road” for the global Internet.  Long ago, we all recognized that the oceans are a common resource and that we need a Law of the Sea to allow us to agree on what is and is not acceptable on the bounding main.  It seems to me that the Internet is the sea of the 21st century and needs a similar set of supranational rules to ensure that it is accessible to all.  Are you listening, UN?

Jul 11

Is Microsoft a cyber-Benedict Arnold?

OK, call me a cold war relic, but I find the recent revelation that Microsoft has provided the source code for Windows, SQL Server, and Office to the Russian FSB (the spies formerly known as the KGB) as well as to the Chinese government quite disturbing. As recent events prove, Russia is still actively engaged in espionage against the US public and private sectors.  We know that the Chinese People’s Liberation Army is actively building an offensive cyber capability and that they use technology to suppress free expression in their country.  Microsoft’s disclosures have been going on since 2002, as part of a program under which Microsoft has supplied source code for its products to a number of countries as well as NATO.

It does not take too much imagination to conjure up visions of Russian or Chinese government security researchers finding zero-day exploits to allow their paymasters to craft undetectable malware which is then placed on US government and private sector computers.  Such an attack would be a cost effective, low risk way to gather more information in a day than the recently unmasked spy ring was able to collect over a decade.   It takes even less imagination to envision the Chinese government using their access to Windows source code to build more efficient tools to monitor and muzzle those who dare to speak out against the Communist Party.

This incident raises a number of interesting questions.

Is Microsoft (a company born in America, whose success was built on the US market, and which benefits from tax breaks funded by US taxpayers) right to provide access to source code of products which are the underpinnings of all sorts of critical infrastructure to nations which are actively engaged in espionage against the US and whom we may meet on the cyber battlefield of the future?  It seems to me that this is sort of like hiring a company to build a fort and then allowing them to sell the plans to your adversaries.

Should Microsoft’s products have some sort of special status which recognizes them as part of the US critical infrastructure?  After all, Microsoft has been allowed to gain what is basically a monopoly in the US market for operating systems and other key software.  Does this engender a responsibility on their part to act in accordance with US national interests?   I think it does.

Microsoft hasn’t done anything illegal here.  It would be nice if they felt a need to protect the critical infrastructure of their country, but as a private entity with no laws or regulations to prevent their actions, they made the logical business decision to share the source code in order to gain better access to the Russian and Chinese markets.   However, their choice is a bum deal for the rest of us, who will have to deal with the repercussions of this decision while Microsoft reaps the profits.  We need to tell our legislators that it is time to take a fresh look at what we ask of companies like Microsoft and Cisco, whom we have allowed to develop monopolies on key parts of the nation’s critical infrastructure.  In the conflicts yet to come, cyberspace will play a key role – and Microsoft has sold the plans for the fort to potential adversaries.

May 31

For most of us, Memorial Day is the unofficial start to summer, or a day off, or a shopping day.  But let’s take a moment to remember what the day is really about – the men and women who have given their lives to protect America and the freedoms it stands for.  While you are at the beach, barbecuing in the backyard or shopping at the mall, take a moment to reflect on their sacrifices as well as on the thousands of Americans who are in harm’s way in foreign lands.  Thank you all…

Apr 25

no fear?

By alberg | deep thoughts | Comments Off

Another tidbit from Josh Corman’s excellent talk on FUD (Fear, Uncertainty and Doubt) in the information security industry… the following comes from Frank Herbert’s Dune series of sci-fi novels:

LITANY AGAINST FEAR 

I must not fear.
Fear is the mind-killer.
Fear is the little-death that brings total obliteration.
I will face my fear.
I will permit it to pass over me and through me.
And when it has gone past I will turn the inner eye to see its path.
Where the fear has gone there will be nothing.
Only I will remain.

Josh asked an important question during his talk – is there any place for fear in information security?
 
My two cents:  Humans (and animals) fear for a good reason; responding to perceived threats in a timely fashion is very handy if your goal in life is to survive.  In the info sec world, I think that fear has some use, as an indicator and a call to action.  However, once the threat causing the fear reaction is identified and evaluated, we need to discard the fear and replace it with a heightened sense of awareness and a sense of the true nature and proportion of the threat.  The fears we face in info sec are not typically existential in nature; once we know and understand our enemy, we need to devote our mental and physical energy to meeting the challenge – fear just gets in the way.  
So, we must not fear (for more than a couple of minutes).

I think this is going on my wall…

 
Except… 

Apr 25

This weekend, I attended the Security B-Sides Boston conference (which, by the way, I heartily recommend to all info sec types).  My favorite session of the day was Josh Corman’s “Fsck the FUD” talk… this talk was chock-full of security thought leadership goodness and will probably result in a number of blog postings here at Paranoid Prose.

In his talk, Josh asked a really thought-provoking question:  When was the last time that the information security community retired a control?  If you take a look at lists of recommended security controls from 10 or even 20 years past, you will see many of the same measures that are found in the latest PCI, COBIT and other prescriptive documents.  Each year, a few new must-have controls are added, much to the chagrin of CSOs and security personnel (who then have to spend more of their limited time and resources implementing new controls as well as maintaining existing ones) and to the delight of auditors (who get job security and longer audit checklists to fill out, and thus more billable hours).  This approach of continuous “improvement” of security “standards” is just not scalable, given most organizations’ unwillingness to fund the corresponding infinite growth of security resources (how unreasonable!).

Why is this happening?  Josh’s theory (with which I agree) is that auditors and standards writers tend to be very conservative.  In their minds, once a control is written down, it becomes revealed truth, and having more controls must ensure a higher level of security, right?  As a result, many organizations (especially those in heavily regulated industries like Finance, Health Care and payment card processing) seem to fear their auditors more than the attackers whom the security folks are supposed to be fending off.   We have to make sure that we can check all of the boxes and get “good grades” on our audits and assessments, whether or not the controls being tested are relevant and provide real protection.

This model leads to a stifling of innovation in the info sec industry, according to Corman.  Since most info sec spending is concentrated around passing audits and fulfilling regulatory and compliance requirements, we continue to spend most of our time and money on legacy controls which may or may not be very effective at addressing evolving (and quite dangerous) threats.  We get that warm and fuzzy feeling from passing the audit, but that does not necessarily mean that we are well protected.  Security vendors respond to this pattern and concentrate their product offerings in spaces which address the tried-and-true controls they know their customers need to meet.  They are simply not incented to come up with new ideas and better products, and their marketing departments spend most of their time figuring out how to spread FUD and convince CSOs that their existing products somehow address the mind-numbingly scary threat du jour.

A couple of examples come to mind:

Anti-malware software - signature-based anti-malware software is having a harder and harder time keeping up with the threats we expect it to protect against.  More and more evil code is produced from toolkits which generate custom versions that differ from the AV vendors’ signatures just enough to slip by the defenses.  In a number of recent cases, totally customized, highly targeted code has been used to infect machines of interest and extract valuable information.  It seems to me that signatures are becoming less and less effective as controls against malware and that protections based on system behavior make much more sense.  Yet we still buy, deploy, maintain and update lots of signature-based AV software so that we can check the proper audit boxes, and vendors don’t have a real incentive to come up with new and more effective defensive products.
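As a toy illustration of why byte-signature matching is so brittle, here is a minimal sketch.  Everything in it is invented for the example – the “signature” bytes, the one-byte toolkit variant, and the list of “suspicious” actions are not from any real AV product – but it captures the basic asymmetry: change one byte and the signature misses, while the behavior stays the same.

```python
# Toy sketch: byte-signature matching vs. a (crude) behavioral check.
# The malware bytes, signature, and action names below are all invented.

SIGNATURE = b"\xde\xad\xbe\xef\x13\x37"

def signature_scan(sample: bytes) -> bool:
    """Flag the sample only if it contains the exact known byte signature."""
    return SIGNATURE in sample

original = b"padding" + SIGNATURE + b"payload"
# A toolkit-generated variant: one byte tweaked, behavior unchanged.
variant = original.replace(b"\x13\x37", b"\x13\x38")

print(signature_scan(original))  # True  -> detected
print(signature_scan(variant))   # False -> slips right past the signature

# A behavior-based check watches what the code *does* instead of what it
# looks like.  Here "behavior" is just a recorded list of hypothetical
# system actions; real products observe API calls, file writes, etc.
SUSPICIOUS = {"read_keystrokes", "open_socket", "send_data"}

def behavior_scan(actions: list[str]) -> bool:
    """Flag a sample whose observed actions include the suspicious set."""
    return SUSPICIOUS.issubset(actions)

# Both the original and the variant perform the same actions,
# so a behavioral check flags both.
print(behavior_scan(["read_keystrokes", "open_socket", "send_data"]))  # True
```

The point of the sketch is the maintenance economics: the defender must update `SIGNATURE` for every variant the toolkit emits, while a behavioral rule survives cosmetic changes to the binary.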

Passwords - One of the most frequent complaints I get from users at my company is that our password policies (long passwords with different types of characters that need to be changed pretty frequently) are a pain in the posterior.  I feel for them… complicated passwords that are changed frequently do provide protection against some threats, but it seems to me that the main threat to passwords today is malware which grabs the password as it is typed – and it doesn’t matter how long, complicated or frequently changed the password is.  Yet we still enforce our password policy.  Part of the reason is that the policy does provide a certain level of protection against some threats, but in reality, we have kept the policy mainly because our business partners (customers, regulators, etc.) expect us to have such a policy and would look askance at us if we didn’t – in spite of recent research suggesting that the negative economic effects of these policies may exceed their protective benefit.
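To make the complexity-vs-threat mismatch concrete, here is a toy sketch.  The length threshold, character classes, and entropy formula are illustrative inventions, not any specific standard or my company’s actual policy: a password can satisfy a strict complexity policy and look expensive to brute-force on paper, yet a keylogger captures it keystroke-for-keystroke regardless of either number.

```python
import math
import re

def meets_policy(pw: str, min_len: int = 12) -> bool:
    """A typical corporate complexity policy: minimum length plus at
    least three of four character classes.  Thresholds are illustrative."""
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
    return len(pw) >= min_len and sum(bool(re.search(c, pw)) for c in classes) >= 3

def naive_entropy_bits(pw: str) -> float:
    """Rough brute-force upper bound: length * log2(alphabet size),
    where the alphabet is inferred from the classes actually used."""
    pool = 0
    if re.search(r"[a-z]", pw): pool += 26
    if re.search(r"[A-Z]", pw): pool += 26
    if re.search(r"[0-9]", pw): pool += 10
    if re.search(r"[^a-zA-Z0-9]", pw): pool += 33  # printable symbols
    return len(pw) * math.log2(pool) if pool else 0.0

pw = "Tr0ub4dor&Xyz"          # made-up example password
print(meets_policy(pw))        # True -> passes the checkbox
print(naive_entropy_bits(pw))  # ~85 bits against offline guessing
# ...but a keylogger records the password as it is typed, so both the
# policy compliance and the entropy figure are irrelevant to that threat.
```

The design point: both functions measure resistance to *guessing* attacks, which is a different threat model than credential capture – and the policy checkbox says nothing about the latter.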

So… what do we need to do as an industry?  I think we need to start a dialog in which we take a long, hard look at the security controls we “require” and answer some key questions about them:

  • What is the threat that this control addresses?
  • Is the threat we are protecting against still a threat?  If so, has the nature of the threat changed significantly?
  • How can we update the control requirements to better address the threat using currently available technology or processes?
  • What new technology (if any) do we need from vendors in order to address the threat as it stands today?

The big question is how to get this discussion going… conferences like Security B-Sides, Defcon and the like are great places to start talking, but we need to find a way to get the mainstream security media and standards bodies to participate.  I’m going to be giving this a bit of thought and would love to hear from you with ideas!

Mar 30

the magnets made me do it!

By alberg | deep thoughts | Comments Off

Can human morality be manipulated with magnetism?   According to some scientists from some rather august institutions, yes.

It seems that when a particular region of the test subjects’ brains was exposed to powerful magnetic fields, their judgments of actions taken by a character in a story shifted from being focused on the morality of the act itself to being more focused on the outcome of the act.

Pretty strange stuff… I wonder how long it is going to take for someone to plead “not guilty by reason of magnetism” in court.  More importantly, does this finding point to a deterministic model of the mind?  Is that hunk of gefilte fish in our heads just a machine that operates using a yet to be discovered program?  Is all of man’s creativity just the “smoke” emitted by that machine?  Will my wife believe that I forgot to take out the garbage due to a fluctuation in the Earth’s magnetic field?  These are the questions which vex me… meanwhile, time for a new hat, just in case…
