Oct 04

elephant repellent

By alberg, in CSO, deep thoughts

An elephant, a mouse, or a ghost?

Sometimes I feel like I’m selling elephant repellent:

I identify a particular species of elephant (for example, compromise of our networks via a spearphishing email).

I find examples of this particular elephant showing up on the networks of similar organizations.

I try to calculate the damage said elephant would cause (which nearly always includes hard-to-quantify damage to things like “reputation” and “trust”).

I run some tests to show that, yes, some of our users would in fact happily open the gates of the village to this particular elephant by clicking on just about any link emailed to them.

I then look for some sort of elephant repellent – a policy, a procedure, education, some technology or a combination of the above to keep said pachyderm from rampaging through our village.

Of course, elephant repellent is not free… there is a cost in productivity, usability, share of user attention, or cold hard cash. If the risk-to-cost ratio seems right, I take action, spraying elephant repellent all around the village. Time passes. No elephants show up, and I proudly announce the success of this particular elephant repellent and start looking for the next elephant to repel. Of course, the question remains whether the lack of elephantine activity in the village is due to the repellent, well, repelling, or whether the elephants never would have shown up at the village gates in the first place (or whether the elephants will get clever and show up next week to trample the place in spite of my efforts).

Elephants come in a variety of sizes. Some of them can rampage through the village and leave a wide path of destruction. Other elephants sound scary but end up being more mouse-like in their impact. If you ring the elephant alarm every day, the villagers (in particular, the village elders) are going to pay less attention as time goes on. Elephants are also unpredictable – sometimes they show up; other times, they pass your village by and trample the village next door. You gotta pick your elephants. I guess that is part of the “art” side of infosec (anticipating howls of protest from the quantitative guys on this).

At least infosec people don’t usually have to deal with elephants which kill people – let’s say, a devastating earthquake. The stakes are, of course, very high in these cases, and the village elders can get very angry when these elephants make it through the village gates. In fact, six seismologists and a government official are currently on trial for manslaughter in Italy for failing to predict an earthquake which struck the L’Aquila region in April 2009. Yes, you read that right… While this episode may be an outlier, it does point out the rising expectations of all sorts of village elders (both corporate and governmental) as to the risk experts’ ability to make very accurate predictions of risks – expectations which may not be possible to achieve. Call it the “CSI effect” – we are used to seeing all sorts of cool technology providing definite answers to questions and we have come to expect that all questions can be answered in this way.

We as Infosec professionals have to strike a balance between the quantitative and qualitative approaches to choosing which elephants to worry about. To add to the problem, some of us (particularly in highly regulated industries like finance) are given a set of elephants which we must repel by regulators and other stakeholders. These “default elephants” may pose less risk to the village than other, less famous, elephants, but we have to divert resources (and repellent) to deal with them in order to stay in business.

So… the takeaway? We need to share best practices for spotting, measuring and evaluating risk from both a qualitative and quantitative point of view. Organizations like the FS-ISAC (and other industry ISACs), where we can share information in confidence with our peers, are a great place to do this. We need to up the level of information sharing in these fora – while it is great to get lists of bad IP addresses and URLs, I’d also like to see more (anonymous) sharing of stories about risks and repellents. The more people looking at the elephant and reporting on what it did when it visited their villages, the better picture we can put together.

Aug 15

telex to freedom / telex to chaos

By alberg, in deep thoughts, online security

The latest in anti-censorship tech

When I read about Telex, a research project aimed at making it easier to get past Internet censorship, my “split personality” – lover of freedom and justice versus corporate security guy – kicked in right away. You see, if widely implemented, Telex would make it much easier and safer for people living under repressive regimes to get past said regimes’ censorship of the Internets. Built on client software, some clever crypto in packet headers, and servers hosted by friendly ISPs, Telex would turn the idea of a proxy server inside out, effectively making the entire Internet (it’s a series of tubes, you know) one big proxy.
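The core trick – a tag hidden in bytes that are supposed to look random, recognizable only by a friendly ISP’s station holding the right key – can be sketched in a few lines. To be clear, this is a toy illustration, not Telex’s actual protocol: real Telex hides a public-key tag in TLS handshake nonces so that no shared secret ever has to be distributed, while this sketch cheats with a hypothetical symmetric key.

```python
import hmac
import hashlib
import os

KEY = b"station-secret"  # hypothetical shared key; real Telex uses public-key crypto


def make_nonce() -> bytes:
    """Client: build a 32-byte 'random' nonce whose second half is an HMAC tag."""
    seed = os.urandom(16)
    tag = hmac.new(KEY, seed, hashlib.sha256).digest()[:16]
    return seed + tag  # looks random to anyone without the key


def station_sees_tag(nonce: bytes) -> bool:
    """On-path station: recompute the tag from the first half and compare."""
    seed, tag = nonce[:16], nonce[16:]
    expected = hmac.new(KEY, seed, hashlib.sha256).digest()[:16]
    return hmac.compare_digest(tag, expected)


tagged = make_nonce()
plain = os.urandom(32)
print(station_sees_tag(tagged))  # True  -> divert this flow to the proxy
print(station_sees_tag(plain))   # False -> pass ordinary traffic through untouched
```

The censor sees only an ordinary-looking connection to an innocuous site; the station quietly diverts the tagged flow. That asymmetry is what makes the whole Internet act like one big proxy.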

This would be really great – I would love to see the US government as well as non-profit organizations host Telex servers to allow people in China, the Middle East, and other places where freedom of expression is curtailed to get past censorship… however, Telex would also make my job as a security professional that much more difficult. By installing a Telex client, the users on my corporate network might be able to bypass the web filtering we have put in place. While some of that filtering is aimed at keeping people away from “non work appropriate sites,” there are other reasons to filter Internet access in the workplace as well. For example, we block access to sites known to host malware. We block access to sites which would put us in violation of various legal and regulatory mandates. These are all legitimate things to do in a corporate environment, and our employees have unfettered access to the Internet outside of the office. Employees using a system like Telex would put our company at risk.

Telex is still in the proof of concept stage and there needs to be a lot more software and infrastructure development done before it can be a reality on a large scale. As I said, I am 1000% pro Telex as a tool for people to bypass repressive regimes’ Internet censorship. But I think that corporate Internet censorship (hate that word) is another kettle of fish altogether and we security professionals need to keep an eye on Telex and similar technologies. I feel like I should be dressing like these guys after writing this…


Aug 13

pulling the plug on 21st century mobs?

By alberg, in deep thoughts

Are Tweets, BBMs, or Facebook updates weapons of mass mayhem?

OK… I have no problem with police departments (such as those in New York City and London) setting up units to look at (public) social media sites for signs of impending lawbreaking, whether it be morons rioting, morons flash-robbing, or morons planning other mayhem. More power to them… I think that if you tweet or Facebook your nefarious plans for the world to see, you should have an additional count of felony stupidity added to your charge sheet. I also have no problem with the authorities turning off communications facilities when there is a credible and imminent threat to life and limb, such as the possibility of a cell phone triggered improvised explosive device. But when I first read about the Bay Area Rapid Transit (BART) police’s move this past Thursday, during the evening rush hour, when they disabled cellphone communications on the underground portions of the BART system, I felt very uncomfortable. This sounds like something that repressive regimes like Egypt, Syria, or Libya would do to their people, not something which could happen in the US. Then I read BART’s statement about the cellular interruption and got to thinking:

Organizers planning to disrupt BART service on August 11, 2011 stated they would use mobile devices to coordinate their disruptive activities and communicate about the location and number of BART Police. A civil disturbance during commute times at busy downtown San Francisco stations could lead to platform overcrowding and unsafe conditions for BART customers, employees and demonstrators. BART temporarily interrupted service at select BART stations as one of many tactics to ensure the safety of everyone on the platform.

You can find the full statement here.

First of all, BART probably broke the law by doing this. It is against federal law to interfere with licensed wireless communications. Even prisons (which, in my opinion, should be able to operate cellphone jammers) have been prevented from doing so in the past. (Yes, I know that BART did not jam the signals; they simply shut down existing cell sites – the result was the same, though.)

Now, depending on what kind of information BART had, there may have been a (morally) acceptable reason for taking action.  For example, if the information was very clear in stating that the types and methods of protests were aimed at inducing overcrowding on platforms (a situation dangerous to life and limb) and there was reason to believe that the threat was credible and imminent, I might have been tempted to make the same decision.  But there are some other factors to consider (apart from the legal issue).

First and foremost, what about people already on the BART system who might need access to 9-1-1? Well, the NYC subway system has no cell service on its underground portion (thankfully) and manages to have a mechanism (call boxes) for getting help in an emergency. I assume BART is similarly equipped, so the cell service shutdown did not totally isolate riders from help. Yes, had someone been on the phone with 9-1-1, their call would have been interrupted, but they could then resort to the call box – not ideal, but workable.

Second… if BART management felt the threat to be credible and that mobile devices were an integral part of the threat, they really only had two choices – shut down cellular service, or shut down stations where they felt the threat was greatest.  The latter option is not a perfect solution (the protestors would just regroup via Twitter) and would inconvenience thousands of innocent commuters.

We are just not yet equipped to make decisions like this and we need to be.

My takeaways from this:

Mobile devices and social media pose new challenges to law enforcement and new potential dangers to the public (as last week’s riots in London seem to have demonstrated).  Getting a mob together and coordinating their actions is a lot easier than it used to be and law enforcement needs tools to deal with this problem in a way which preserves public order but which also respects the rights of the people to peacefully assemble and protest.  This is not an issue to be left to local police departments – we need to do this at the federal level as it is a constitutional issue.

If we decide to allow law enforcement to disrupt communications to preserve public order, we need to have strict standards as to what constitutes a serious and imminent threat to public order and there must be a process to publicly review any such decision after the fact – and consequences for those who make the wrong decision.  The body that makes the (very quick) decision to pull the plug needs to have both law enforcement and civilian members (maybe a constitutional lawyer?).

Other risks need to be considered – for example, using a bunch of social media bots, a miscreant could create a denial of service attack on communications by creating a “virtual” flash mob that exists only in cyberspace, but looks big and scary.  In addition to inconveniencing the public, such an attack could be used as an aid to committing other types of crime.  If these fake flash mobs were to become a regular event, public support for anti flash mob measures could dwindle, leaving us where we are today.

Hopefully our elected officials will take some time out from serving their special interest masters, playing party politics, destroying the economy, and all of the important work that they love so much to take a look at an important issue in a rational way.  Oh, wait…


Jul 01

my brain made me do it!

By alberg, in deep thoughts

I just got done reading an extremely interesting book recently… Incognito: The Secret Lives of the Brain by Baylor University neuroscientist David Eagleman. Eagleman’s hypothesis is that most of the activity going on in our brains is happening below the level of our consciousness, down in “burned in” subroutines which do most of the heavy cognitive lifting. Our consciousness is the brain’s “summary” of what is going on both out in the world and inside our heads – the metaphor he uses is that of a newspaper. While it is impossible to know all of the things going on in the world around us, a newspaper gathers up a summary of information we need to know (at least according to the newspaper editors and their corporate masters). Eagleman theorizes that consciousness is our own newspaper, constructed on a moment-to-moment basis by the incredible piece of gefilte fish in our heads, and that without such a mechanism, we would be overwhelmed by information and sensation and unable to react to the world around us.

The interesting part of the book from a security point of view is Eagleman’s contention that free will is really an illusion and that the decisions we make are determined by organic processes and those “burned in” routines we are not even conscious of.  Neuroscientists have been making great strides in tying brain function to behavior in measurable ways, he says, and as the science gets better, we will be able to better see the connections between antisocial behavior and neural malfunctions.

Of course, this has large ramifications for crime and punishment – if there is an organic basis for criminal behavior, we need a new approach to dealing with criminals, one that protects society by isolating them, but which also focuses on whether future criminal behavior can be prevented through medical intervention.  Eagleman is very clear to say that he does not feel that criminal behavior can be excused by his theory, just that how we deal with criminals needs to change.

This was a fascinating and thought provoking book and is well worth your time.  If you want to get a taste of what Eagleman has to say, The Atlantic has an excerpt from the book on their web site.

Now I am going to go eat a pint of butter pecan ice cream and it isn’t my fault…



May 14

here… have a pill… what’s the worst that could happen?

By alberg, in deep thoughts, hacks, malware, Paranoid Peeps

What's the worst that could happen?

Spear phishing has been in the news quite a bit lately – it seems like just about all of the recent high profile hacks began with someone clicking on a link or opening a document. Here’s a data point which seems to corroborate the innate sense of trust that leads people to do really stupid things. According to an entry in Bruce Schneier’s blog… in Istanbul, police dressed up as doctors, knocking on doors unannounced, were able to persuade 86% of subjects to take a pill. And this was after a rash of crimes in which people who were not police did the same thing, using powerful sedatives to disable victims and ransack their homes. My belief in knowledge of human psychology as the most powerful hacking tool remains strong. Or maybe there is something in the water in Istanbul…



They Might Be Giants – Istanbul (Not Constantinople) from They Might Be Giants on Vimeo.

Sep 06

testing, 1, 2, 3, oopsie!

By alberg, in deep thoughts, online security, systemic risk, worst practices

Last week, an experiment conducted by Duke University and the RIPE Network Coordination Centre (RIPE NCC) got a little bit out of hand, interrupting Internet traffic in 60 countries worldwide. In all, about one percent of Internet traffic was affected by the test gone awry. One percent of Internet traffic does not sound like a lot – most of that traffic was probably illegal file sharing, lolcats and porn, but what if your Internet based business was affected? My employer (who shall remain nameless and whose opinions this post does not reflect) is an Internet based business in which the value of each (time sensitive) transaction is probably thousands of times the average for the rest of the net. We were not affected by the testers’ little oopsie, but had we been, the potential losses would have been significant. I am sure my company is not the only one in such a situation.

Yes, Cisco did fix the bug which caused this particular outage, but I think that this incident points out some questions that really need to be answered:

Should researchers be conducting experiments on the Internet with potential for widespread negative impact on a shared business resource? If someone ran this type of potentially disruptive testing on my company’s network during business hours, I’d be looking for them to be fired, sued, arrested and forced to listen to this album for the rest of their lives.  Researchers need to realize that the Internet is the planet’s “production network” with no “maintenance window” and that the same best practices we follow in the enterprise (separate test environment, for example) need to be followed when tinkering with its innards.

Had someone experienced significant financial losses due to this experiment, what would their recourse be? No one expects the Internet to be free of glitches and outages, but in this case, a conscious decision was made to do something which could reasonably be expected to cause problems. Could there be lawsuits here? Are the researchers exposing their organizations to potentially ginormous liability? If the damaged party was in, say, Asia, who would have jurisdiction over the case and where would it be tried?

In an era where cyberspace is increasingly recognized as a “battlespace,” could an experiment such as this (on a larger scale) be mistaken for a cyber attack and possibly lead to real world hostilities?

Researchers and governments should take this opportunity to stop and think about the “rules of the road” for the global Internet. Long ago, we all recognized that the oceans are a common resource and that we need a Law of the Sea to allow us to agree on what is and is not acceptable on the bounding main. It seems to me that the Internet is the sea of the 21st century and needs a similar set of supranational rules to ensure that it is accessible to all. Are you listening, UN?

Jul 11

Is Microsoft a cyber-Benedict Arnold?

OK, call me a cold war relic, but I find the recent revelation that Microsoft has provided the source code for Windows, SQL Server, and Office to the Russian FSB (the spies formerly known as the KGB) as well as to the Chinese government quite disturbing. As recent events prove, Russia is still actively engaged in espionage against the US public and private sectors.  We know that the Chinese People’s Liberation Army is actively building an offensive cyber capability and that they use technology to suppress free expression in their country.  Microsoft’s disclosures have been going on since 2002, as part of a program under which Microsoft has supplied source code for its products to a number of countries as well as NATO.

It does not take too much imagination to conjure up visions of Russian or Chinese  government security researchers finding zero-day exploits to allow their paymasters to craft undetectable malware which is then placed on US government and private sector computers.  Such an attack would be a cost effective, low risk way to gather more information in a day than the recently unmasked spy ring was able to collect over a decade.   It takes even less imagination to envision the Chinese government using their access to Windows source code to build more efficient tools to monitor and muzzle those who dare to speak out against the Communist Party.

This incident raises a number of  interesting questions.

Is Microsoft (a company born in America, whose success was built on the US market, and which benefits from tax breaks funded by US taxpayers) right to provide access to source code of products which are the underpinnings of all sorts of critical infrastructure to nations which are actively engaged in espionage against the US and whom we may meet on the cyber battlefield of the future? It seems to me that this is sort of like hiring a company to build a fort and then allowing them to sell the plans to your adversaries.

Should Microsoft’s products have some sort of special status which recognizes them as part of the US critical infrastructure?  After all, Microsoft has been allowed to gain what is basically a monopoly in the US market for operating systems and other key software.  Does this engender a responsibility on their part to act in accordance with US national interests?   I think it does.

Microsoft hasn’t done anything illegal here.  It would be nice if they felt a need to protect the critical infrastructure of their country, but as a private entity with no laws or regulations to prevent their actions, they made the logical business decision to share the source code in order to gain better access to the Russian and Chinese markets.   However, their choice is a bum deal for the rest of us, who will have to deal with the repercussions of this decision while Microsoft reaps the profits.  We need to tell our legislators that it is time to take a fresh look at what we ask of companies like Microsoft and Cisco, whom we have allowed to develop monopolies on key parts of the nation’s critical infrastructure.  In the conflicts yet to come, cyberspace will play a key role – and Microsoft has sold the plans for the fort to potential adversaries.

May 31

some thoughts on Memorial Day

By alberg, in deep thoughts

For most of us, Memorial Day is the unofficial start to summer, or a day off, or a shopping day.  But let’s take a moment to remember what the day is really about – the men and women who have given their lives to protect America and the freedoms it stands for.  While you are at the beach, barbecuing in the backyard or shopping at the mall, take a moment to reflect on their sacrifices as well as on the thousands of Americans who are in harm’s way in foreign lands.  Thank you all…

Apr 25

no fear?

By alberg, in deep thoughts

Another tidbit from Josh Corman’s excellent talk on FUD (Fear, Uncertainty and Doubt) in the information security industry… the following comes from Frank Herbert’s Dune series of sci-fi novels:


I must not fear.
Fear is the mind-killer.
Fear is the little-death that brings total obliteration.
I will face my fear.
I will permit it to pass over me and through me.
And when it has gone past I will turn the inner eye to see its path.
Where the fear has gone there will be nothing.
Only I will remain.

Josh asked an important question during his talk – is there any place for fear in information security?
My two cents:  Humans (and animals) fear for a good reason; responding to perceived threats in a timely fashion is very handy if your goal in life is to survive.  In the info sec world, I think that fear has some use, as an indicator and a call to action.  However, once the threat causing the fear reaction is identified and evaluated, we need to discard the fear and replace it with a heightened sense of awareness and a sense of the true nature and proportion of the threat.  The fears we face in info sec are not typically existential in nature; once we know and understand our enemy, we need to devote our mental and physical energy to meeting the challenge – fear just gets in the way.  
So, we must not fear (for more than a couple of minutes).

I think this is going on my wall…


Apr 25

when was the last time we retired a security control?

By alberg, in CSO, deep thoughts

This weekend, I attended the Security B-Sides Boston conference  (which, by the way, I heartily recommend to all info sec types).  My favorite session of the day was Josh Corman‘s “Fsck the FUD” talk… this talk was chock full of security thought leadership goodness and will probably result in a number of blog postings here at Paranoid Prose.

In his talk, Josh asked a really thought provoking question: When was the last time that the information security community retired a control? If you take a look at lists of recommended security controls from 10 or even 20 years past, you will see many of the same measures that are found in the latest PCI, COBIT and other prescriptive documents. Each year, a few new must-have controls are added, much to the chagrin of CSOs and security personnel (who then have to spend more of their limited time and resources implementing new controls as well as maintaining existing ones) and to the delight of auditors (who get job security and longer audit checklists to fill out, and thus more billable hours). This approach of continuous “improvement” of security “standards” is just not scalable, given most organizations’ unwillingness to fund the corresponding infinite growth of security resources (how unreasonable!).

Why is this happening? Josh’s theory (with which I agree) is that auditors and standards writers tend to be very conservative. In their minds, once a control is written down, it becomes revealed truth, and having more controls must ensure a higher level of security, right? As a result, many organizations (especially those in heavily regulated industries like finance, health care and payment card processing) seem to fear their auditors more than the attackers whom the security folks are supposed to be fending off. We have to make sure that we can check all of the boxes and get “good grades” on our audits and assessments, whether or not the controls being tested are relevant and provide real protection.

This model leads to a stifling of innovation in the info sec industry, according to Corman.  Since most info sec spending is concentrated around passing audits and fulfilling regulatory and compliance requirements, we continue to spend most of our time and money on legacy controls which may or may not be very effective at addressing evolving (and quite dangerous) threats.  We get that warm and fuzzy feeling from passing the audit, but that does not necessarily mean that we are well protected.  Security vendors respond to this pattern and concentrate their product offerings in spaces which address the tried and true controls they know that their customers need to meet.  They are simply not incented to come up with new ideas and better products and their marketing departments spend most of their time figuring out how to spread FUD and convince CSOs that their existing products somehow address the mind numbingly scary threat du jour.

A couple of examples come to mind:

Anti malware software – signature based anti malware software is having a harder and harder time keeping up with the threats we expect it to protect against.  More and more evil code is produced from toolkits which generate custom versions that differ from the AV vendors’ signatures just enough to slip by the defenses.  In a number of recent cases, totally customized, highly targeted code has been used to infect machines of interest and extract valuable information.  It seems to me that signatures are becoming less and less effective as controls against malware and that protections based on system behavior make much more sense.  Yet we still buy, deploy, maintain and update lots of signature based AV software, so that we can check the proper audit boxes and vendors don’t have real incentive to come up with new and more effective defensive products.
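The evasion dynamic described above is easy to see in miniature. The sketch below assumes a hypothetical signature database of exact content hashes (real AV engines also use byte patterns and heuristics, but the arms race works the same way): a toolkit-style “repack” that changes a single harmless detail produces a sample the signature no longer matches, even though its behavior is identical.

```python
import hashlib

# Hypothetical one-entry "signature database": hashes of known-bad payloads.
malware = b"evil_payload_v1:steal_passwords"
sig_db = {hashlib.sha256(malware).hexdigest()}


def signature_match(sample: bytes) -> bool:
    """Flag a sample only if its hash exactly matches a known signature."""
    return hashlib.sha256(sample).hexdigest() in sig_db


# Toolkit-style variant: flip a version marker and the hash changes completely,
# while the payload's behavior would be unchanged.
variant = malware.replace(b"v1", b"v2")

print(signature_match(malware))  # True  -> caught by the signature
print(signature_match(variant))  # False -> slips past unchanged defenses
```

This is why behavior-based protections are attractive: the variant’s hash is new, but what it *does* on the system is not.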

Passwords – One of the most frequent complaints I get from users at my company is that our password policies (long passwords with different types of characters that need to be changed pretty frequently) are a pain in the posterior. I feel for them… complicated passwords that are changed frequently do provide protection against some threats, but it seems to me that the main threat to passwords today is malware which grabs the password as it is typed – and it doesn’t matter how long, complicated and frequently changed the password is. Yet, we still enforce our password policy. Part of the reason is that the policy does provide a certain level of protection against some threats, but in reality, we have kept the policy mainly because our business partners (customers, regulators, etc.) expect us to have such a policy and would look askance at us if we didn’t (in spite of recent research suggesting that the negative economic effects of these policies may exceed their protective benefit).
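The arithmetic behind the policy is worth making explicit: length and character variety buy bits of search space against offline guessing, but zero protection against capture. A quick sketch (the character-set sizes here are illustrative, not a statement about any particular policy):

```python
import math


def brute_force_bits(charset_size: int, length: int) -> float:
    """Bits of search space an offline guesser must cover: length * log2(charset)."""
    return length * math.log2(charset_size)


# Policy-style password: 12 characters drawn from ~94 printable ASCII symbols.
print(round(brute_force_bits(94, 12), 1))  # ~78.7 bits: strong against guessing

# Simple password: 8 lowercase letters.
print(round(brute_force_bits(26, 8), 1))   # ~37.6 bits: weak against guessing

# Against a keylogger, both cost the attacker the same: zero guesses.
```

The policy buys roughly 41 extra bits against one threat model and nothing at all against the other, which is exactly the mismatch the paragraph above complains about.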

So… what do we need to do as an industry?  I think we need to start a dialog in which we take a long, hard look at the security controls we “require” and answer some key questions about them:

  • What is the threat that this control addresses?
  • Is the threat we are protecting against still a threat?  If so, has the nature of the threat changed significantly?
  • How can we update the control requirements to better address the threat using currently available technology or processes?
  • What new technology (if any) do we need from vendors in order to address the threat as it stands today?

The big question is how to get this discussion going… conferences like Security B-Sides, Defcon and the like are great places to start talking, but we need to find a way to get the mainstream security media and standards bodies to participate… going to be giving this a bit of thought and would love to hear from you with ideas!
