Comments

Spaceman Spiff January 29, 2013 1:32 PM

Network Hacker w/ good social engineering skills to Barracuda Networks support engineer:

Help! I messed up my network switch and can’t get in to fix it!

BN to Hacker:

Let me just reset the password to the factory default of 123456. Then you can get in to fix it – just reset the password when you are done.

Joshua Dustin January 29, 2013 1:39 PM

Actually, there’s no need to get Barracuda to give you the password. You have physical control of the device: replace the ssh binary with one that grabs the password for you, then call Barracuda for support. Done.

Adam January 29, 2013 1:41 PM

That was a long time ago (a decade, perhaps more). Back then they didn’t know any better.

But BN said these backdoors are “needed,” so I guess they’ve not yet learned their lesson. (Source: the Register article linked above.)

Steve January 29, 2013 1:42 PM

When will we stop calling them Best Practices and start calling them what they really are, Best Unimplemented Theories?

Figureitout January 29, 2013 1:52 PM

Developers need them to test/unlock if necessary, and they’re likely on project deadlines; so maybe give them more time. Weren’t there articles about these being put in with a TLA gun barrel to the head, I mean courteous request?

@Spaceman Spiff
–If customer service becomes an attack vector and gets eliminated/neutered, then you won’t be able to get any help easily when you need it.

Siegmeyer January 29, 2013 1:53 PM

@Steve: nothing “best” about these practices. If it were accidental, it would be negligence. Since they did it intentionally, it borders on the malicious.

not again January 29, 2013 3:04 PM

Yet another reason to use open-source firewalls built on OpenBSD or FreeBSD.

In other news, the entire Visa network in my city was out all day yesterday, and the banks wouldn’t say why. Probably because of this.

gloran January 29, 2013 5:45 PM

@Figureitout

For developer purposes? Sure.

In a shipped product? Irresponsible enough that I would not purchase their products.

Michael Kohne January 29, 2013 7:14 PM

Anyone find any interesting owners of those public IP ranges? Barracuda doesn’t have them all – any fun owners?

andrew January 29, 2013 9:03 PM

HP (our support provider) insists we continue to use the Nortel VPN, which of course comes from a long-bankrupt company that was apparently hacked for years before it found out. Security by stupidity is not very secure.

nobodyspecial January 29, 2013 9:08 PM

@andrew – it may have been a cunning and subtle plan worthy of the great Baldrick himself.

Hacker: hey, these guys use Nortel gear for their network. Probably some boring local govt/small-time outfit – not worth attacking.

It’s the same reason that most governments appear to be run by idiots.

SomewhereInKy January 29, 2013 10:01 PM

Someone in physical control of the device (the client) has to OPEN the SUPPORT tunnel. Not really any different than granting another user remote access privileges to any device in your network.

Steevo January 30, 2013 1:46 AM

There’s a sticker on the top of the unit with password reset and factory reset instructions. You don’t need a backdoor to access the unit if you have physical access. No Barracuda support engineer would fall for that.

Barracuda is mostly a service; it’s a managed solution. It’s there for those that lack the skills to provide their own solution.

The hardware is there so they can sell you the service.

They need remote access so they can provide that service.

It’s too bad for them the credentials may now be leaked, but the reason they are there is well known: So your subscription can be fulfilled.

Nick P January 30, 2013 2:12 AM

@ Steevo

“It’s too bad for them the credentials may now be leaked, but the reason they are there is well known: So your subscription can be fulfilled.” (Steevo)

Original article:

“functionality is entirely undocumented”

“only disabled via hidden ‘expert options'”

“username ‘product’ with a ‘very weak’ password”

“if support access is not required”

So, support requires hidden functionality that’s hard to disable and gives control over the machine if an attacker cracks a weak password? I don’t think so. It’s too easy to inexpensively do exactly what their support department wants to do without the first three quotes being true. I posted a solution to a similar problem for alarm companies in the past. The worst part is that much of the enhanced security can be automated using free software and cheap equipment on their end. And these are security appliances!
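
To illustrate how cheaply that can be done, here is a minimal Python sketch of customer-initiated, time-limited support access. This is an assumed design for illustration only, not Barracuda’s mechanism and not the specific solution Nick P posted:

```python
# Sketch of an assumed design: the customer mints a short-lived,
# per-device support token, instead of the vendor shipping a hidden
# account with a fixed weak password.
import hashlib
import hmac
import os
import time

def enable_support_window(device_secret: bytes, minutes: int = 60):
    """Customer action: mint a one-time token valid for a short window."""
    expiry = int(time.time()) + minutes * 60
    nonce = os.urandom(16)
    mac = hmac.new(device_secret, nonce + str(expiry).encode(),
                   hashlib.sha256).hexdigest()
    return nonce, expiry, mac  # handed to the support engineer

def verify_support_token(device_secret: bytes, nonce: bytes,
                         expiry: int, mac: str) -> bool:
    """Device side: accept only unexpired tokens minted with its secret."""
    if time.time() > expiry:
        return False
    expected = hmac.new(device_secret, nonce + str(expiry).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)

secret = os.urandom(32)                      # provisioned per device
token = enable_support_window(secret)        # customer enables support
print(verify_support_token(secret, *token))  # True, until it expires
```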

There’s no excuse for this. They just weren’t doing their job right. They have competitors that aren’t making basic mistakes like this. Potential customers are better off choosing a security appliance vendor that applies basic security precautions. If that sounds like common sense, then people should use it. 😉

examplesPlease January 30, 2013 3:01 AM

@Nick P “And these are security appliances! There’s no excuse for this. They just weren’t doing their job right.”

This and other failures ( http://www.schneier.com/blog/archives/2010/01/fips_140-2_leve.html ) are why you cannot know for sure which is the “security appliance vendor that applies basic security precautions”.

Hence, I was surprised that your article http://www.schneier.com/blog/archives/2013/01/essay_on_fbi-ma.html#c1105156 mentioned closed-source products.

Clive Robinson January 30, 2013 6:41 AM

@ examplesPlease,

Hence, I was surprised that your article… mentioned closed-source products

It’s a sad but true fact that you have no choice but to use closed source products when it comes to using computers.

If you look at the computing stack as starting at quantum physics and working up through transistor design, logic gates, and the gate-cell macros that implement functions and make up memory, through to CPUs and the very complex interfaces to the real world that make up System-on-a-Chip products, which then get a BIOS/OS put on them, which then get applications put on those, which in turn act like OSs and run code downloaded from web sites, etc., you quickly realise you only have a small bit in the middle (OS and apps) that you have any real control over and can decide to make open source if you wish.

That is, from the CPU down it’s almost entirely closed source, and much web content uses closed source from the likes of Adobe.

Inherently there is nothing wrong with closed source; it’s quite agnostic in methodology and tools. It’s the humans that use them that have the intent. Thus the problem is actually one of trust, in that you cannot verify what the maker “claims on the tin” unless they choose to let you.

And in turn those closed-source products are based on other closed-source products. Think, for instance, of a System on a Chip for, say, a mobile phone. The chip designers use macros provided by other manufacturers for the CPU, memory, voice, and RF circuitry. In turn those macros are almost certainly based on other macros, etc., all the way down to and including the transistors. To get the chips manufactured they need to include other closed-source proprietary parts so the basic chip functionality can be tested. These may be added by the chip manufacturer without the designer ever becoming aware of what was added.

So currently chips are closed source and potentially full of bugs, which might be backdoors, either accidental or intentional.

But what about at the CPU level and up? Well, likewise, it’s not any better. When you write your open-source application in a high-level language, do you know what is in the library calls and wrappers to OS calls?

You might for a GNU toolchain, but what if your app has to run on MS or another closed-source platform?

How about on Android etc?

Although we choose not to think about it, computing is currently based entirely on trusting closed source at some point in the computing stack, and this is finally starting to get people worried.

Think back to below the CPU layer: even if you could see the layout on the chip in as much detail as you wanted, would you be able to work it out?

The simple answer is no, not even for the US Dept of Defense, which is why they are sponsoring research into how to knock some trust out of the chain and get some verification back in…

But even that won’t work… because things are way beyond the point where a single human mind can sufficiently comprehend all the knowledge required to do a full security audit of a system across the entire computing stack, from quantum physics to level 8 and beyond. Thus you have to trust what is in other people’s heads, which is not tenable as a solution, for various reasons.

So what to do…

Well, actually you don’t have to trust or verify the components, just what they do, in ways they cannot hide from you. And remember, it’s got to be a continuous process of testing, because past behaviour is not a guarantee of future behaviour, and in autonomous systems not even indicative (as things always fail).

If you want to know how, you need to start with a riddle about two guards on two doors, where you know that one guard always lies and one always tells the truth. You then formulate a question that goes through both guards in a way that negates the effect of the lying guard. Such a riddle is thousands of years old, and the question is still the same: “If I ask the other guard which is the wrong door, which will he point to?”

It shows that you can negate a person who always gives a false answer, but what about when they sometimes will and sometimes won’t? Well, this is when you get into the fun world of probability. It’s an important problem, especially with highly complex, low-reliability systems that might fail in a spectacular and extremely newsworthy way. It was a problem facing NASA during the 1960s, and the solution was “voting protocols”.

That is, you produce a single specification that you give to three entirely unrelated development teams who do not collude; they produce three entirely independent systems that meet the specification. You run these three systems in parallel, where each receives an identical input. You then compare the three outputs and either go with the majority vote or halt the system if there is disagreement.
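
A minimal Python sketch of such a voting protocol; the three one-line “implementations” are hypothetical stand-ins for the independently developed systems:

```python
# Sketch of a 2-out-of-3 voting protocol. The three lambdas are
# hypothetical stand-ins for three independently developed systems
# built from the same specification.
from collections import Counter

class DisagreementHalt(Exception):
    """No majority among the outputs: halt rather than guess."""

def vote(implementations, input_value):
    # Each independent system receives the identical input.
    outputs = [impl(input_value) for impl in implementations]
    winner, count = Counter(outputs).most_common(1)[0]
    if count >= 2:  # go with the majority vote
        return winner
    raise DisagreementHalt(f"outputs disagree: {outputs}")

impls = [lambda x: x * 2, lambda x: x + x, lambda x: x << 1]
print(vote(impls, 21))  # 42 -- all three agree
```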

From these beginnings you move forward into what I call “Probabilistic Security”.

You can test an individual part of a system by giving it a question to which you know the answer. If you ask this question of the system and the answer is not correct, then you know something is wrong with the system. If you intersperse this and other known-answer test questions among the non-test questions, it gives you a degree of confidence that the system is functioning correctly. Obviously, if you only ask test questions, then you will know exactly when a system has gone wrong. If, however, you ask a test question every other question, then you will know within two question periods. Obviously the ratio of test questions to other questions gives a window of uncertainty, but it is better than having a very large number of parallel systems.

The problem is that when a system goes intermittently wrong, there is a probability that it will go wrong for a period shorter than the window of uncertainty. Thus the test questions produce only a probabilistic detection of error.
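
In code, the interspersing might look like the following sketch; the adder and its known-answer tests are hypothetical, and a real deployment could schedule tests deterministically (e.g. every other question) rather than randomly:

```python
# Sketch of probabilistic checking via interspersed known-answer tests.
# The adder and its test questions are hypothetical examples.
import random

KNOWN_ANSWER_TESTS = [((2, 3), 5), ((10, 4), 14), ((0, 0), 0)]

class SystemFault(Exception):
    """A known-answer test failed: the system is misbehaving."""

def checked_query(system, real_question, test_ratio=0.5):
    # With a 1:1 ratio of tests to real questions, a persistent fault
    # is caught within about two question periods; an intermittent
    # fault can still hide inside the window of uncertainty.
    if random.random() < test_ratio:
        question, expected = random.choice(KNOWN_ANSWER_TESTS)
        if system(question) != expected:
            raise SystemFault("known-answer test failed, halting")
    return system(real_question)

adder = lambda pair: pair[0] + pair[1]  # the workhorse under test
print(checked_query(adder, (7, 8)))     # 15, checked probabilistically
```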

There are a lot more things that you can do when you trade the impossible idea of absolute security for the idea of probabilistic security. One important realisation is that you don’t get the system to question itself; you get a second system. That is, you have, say, a two-CPU system: one CPU is the workhorse that does the actual work, and the second is the one scheduling in the test questions and examining the results. You quickly realise that this second CPU can actually be just a quite simple state machine that is not only fully deterministic but, unlike a Turing engine, has every state it can be in known and hard coded. If it detects an error, it raises an exception and halts.

This can, just as with the voting protocols, be handled in a number of ways. In one simple case it also raises an alarm to attract a human operator. In another, the human can be replaced with another CPU that acts as a security hypervisor, which deals with the issue in any number of pre-decided ways. Whilst the hypervisor is another separate CPU, it can handle many pairs of workhorse and check CPUs, thus making its cost shared across the system.
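
A sketch of that division of labour, with the check “CPU” reduced to a fixed loop and a hypervisor shared across several workhorse/checker pairs; all names and workloads here are hypothetical:

```python
# Sketch: a deterministic checker plus a shared security hypervisor.
# The workhorses and their test questions are hypothetical examples.

class CheckerHalt(Exception):
    """The checker caught a wrong answer and halted its workhorse."""

def run_checker(workhorse, tests):
    # The checker is a fixed, fully enumerable loop: ask, compare,
    # repeat -- or raise and halt. Nothing Turing-complete here.
    for question, expected in tests:
        if workhorse(question) != expected:
            raise CheckerHalt(f"failed on {question!r}")

def hypervisor(pairs, on_fault):
    # One hypervisor services many workhorse/checker pairs, so its
    # cost is shared across the whole system.
    for workhorse, tests in pairs:
        try:
            run_checker(workhorse, tests)
        except CheckerHalt as fault:
            on_fault(workhorse, fault)  # pre-decided response, e.g. alarm

good = lambda q: q[0] + q[1]
bad = lambda q: q[0] + q[1] + 1  # simulated faulty workhorse
tests = [((2, 3), 5), ((10, 4), 14)]
hypervisor([(good, tests), (bad, tests)],
           on_fault=lambda w, f: print("ALARM:", f))
```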

Once you accept the idea of multiple workhorse systems, other possibilities arise. One such is that, if there is redundancy in the system, you can actually halt a workhorse CPU without its knowledge, and the check CPU can scan its memory and registers looking for signs that the memory has been tampered with by, say, the likes of malware being added.

If you want, you can build such a system today with multiple motherboards that have a fast DMA access port on them, such as FireWire. If built into a cluster, the hypervisor occasionally halts a motherboard, scans the memory looking for rogue code, etc., and then, if OK, lets it run again.
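
The scan step reduces to comparing the halted node’s code regions against known-good hashes. A sketch, with plain byte strings standing in for a FireWire/DMA memory read and hypothetical region names:

```python
# Sketch of scan-while-halted: byte strings stand in for a memory read
# over a DMA port; region names and contents are hypothetical.
import hashlib

def baseline(regions):
    """Record known-good hashes of each code region after a clean boot."""
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in regions.items()}

def scan(regions, known_good):
    """While the node is halted, re-hash its code regions and report
    anything that no longer matches the baseline."""
    return [name for name, data in regions.items()
            if hashlib.sha256(data).hexdigest() != known_good[name]]

clean = {"kernel_text": b"\x90\x90\xc3", "driver": b"\x55\x48\x89\xe5"}
known_good = baseline(clean)
tampered = dict(clean, driver=b"\x55\x48\x89\xe5\xcc")  # injected byte
print(scan(tampered, known_good))  # ['driver'] -- keep halted, raise alarm
```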

I built an experimental prototype of this a little while ago using a modified Linux cluster, and the idea works. At some point I intend to develop it to take advantage of other ideas I’ve had. However, the motherboard level is insufficiently granular to get the best out of the ideas, but resources dictate what you can do in your own “lab”, where getting custom chips is a fairly expensive business. I have, however, also built a system using PIC24 chips as the workhorse CPUs and PIC18 chips as the checkers, and another to make an MMU under the control of the main hypervisor, to test other ideas.

I’ve discussed some of these ideas with Nick P in the past on this blog across many threads (and I guess this is another thread to add to the list).

Jonadab January 30, 2013 8:17 AM

It is in fact absolutely necessary to have a guaranteed way to get in when the only dude who knew the password is unavailable.

From a security perspective, the only reasonable way to implement that is with a dedicated physical mechanism (such as a serial port console or a switch on the device that has to be held, and the backdoor stops working any time the switch is released). Switches and routers and firewalls and whatnot are only as secure as their physical location anyway (because someone with unobserved physical access can just replace the device with one the attacker controls), so a “backdoor” that requires physical control of the device has very little impact on security.
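
A minimal sketch of that gating, with a simulated flag standing in for the board-specific switch read; everything here is hypothetical:

```python
# Sketch: recovery access only works while a physical switch is held.
# _switch_held simulates a board-specific GPIO read of the switch.
import time

_switch_held = True  # stand-in for the real switch state

def read_service_pin() -> bool:
    return _switch_held

def recovery_console(shell_step):
    """Run recovery steps only while the switch is physically held;
    releasing the switch kills the session immediately."""
    while read_service_pin():
        if shell_step():  # e.g. reset the admin password; True = done
            break
        time.sleep(0.1)   # re-sample the switch before the next step

# One-shot usage: reset the password, then exit the console.
recovery_console(lambda: print("resetting admin password") or True)
```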

The problem is, that adds to the per-unit manufacturing cost, and most customers don’t realize its importance, so the manufacturer who doesn’t do it has a small but tangible advantage. A software-only backdoor doesn’t add to the unit cost (only to the development cost), so even though it’s less secure it’s what usually happens.

phred14 January 30, 2013 8:19 AM

I’ve said this before, and this is a place to say it again…

I’m in the silicon design business, and I’ve put some sort of back-door on every design I’ve ever worked on. This is design at the transistor level, not the ASIC level, which means that things are much more subtle. When first silicon comes in, there’s practically always something wrong: in the design, in the test fixtures, or in the test program (or all three).

You always have to jump through some hoops to spank the design and bring it to life. The back-doors are part of that process, and quite frequently life on the design is first found through the back-door rather than through the front.

In addition, some of our back-doors are for characterization and testing. We artificially degrade the design, so we can test it harder than real use conditions. That gives it design margin and improves reliability.

So those back-doors for testing have to remain there forever, since they’re part of the product test flow. The back-doors for characterization also remain there forever, because there is never a time to remove them. We only ever hope we know what our “final design” is, but we never know it is the final design until after characterization and testing.

This is a “designer mindset”, not a “security mindset”. There is also not much that can be done about it. Since mask sets cost millions of dollars, there will never be another design rev just to remove the back-doors. The ones for test need to stay, and even the ones for characterization may be needed in the future, in case some field problem ever pops up. Nor would you want to kill the design because you thought you were only removing the back-door and something else accidentally changed.

Perhaps the best thing that could be done is some sort of way to kill the back-doors after testing with a fuse or some such. The downside of that is losing the ability to diagnose field fails.

Wim L January 31, 2013 1:35 AM

phred14: Some devices do that with test pins that aren’t connected to anything in the final product, or pads that aren’t even bonded out in the packages customers use. That way you get your test points AND security without having to change the silicon.

Alex January 31, 2013 6:57 PM

@Not again: OpenBSD FTW! I’ve had one where even 3,000+ college kids hanging off of the box didn’t manage to break it.

For those uncomfortable with command lines and conf files, M0n0wall and pfSense are very easy to install (insert CD into computer, hit yes, come back in 20 minutes) and are based on OpenBSD. Plenty of enterprise-level features. Country IP-block blocking is a good place for most people to start and easily implemented with pfSense.

and yes, this sort of thing is shameful for a “security appliance”

Nick P January 31, 2013 8:05 PM

@ Alex

M0n0wall and pfSense are based on FreeBSD. That’s a different security profile than OpenBSD’s. They do both have nice features.

Alex February 4, 2013 9:15 AM

@Nick P: Running OpenBSD on my business routers. I recommend PFSense for those who don’t care to learn BSD and the intricacies of PF.

Nick P February 4, 2013 10:38 AM

@ Alex

Both are good choices. With basic firewalls/routers, you will at some point cross a line that divides those that handle things well from those that don’t. Once you figure out which do well, the specific one doesn’t matter so much from a security standpoint, because only very sophisticated attackers are going to breach it directly. A bypass like spearphishing becomes more likely.

So, I tell people not to worry too much about the firewalls. Look at one with a good base OS and/or low CVE record. Make sure it has the right features. Make sure it passed independent review with flying colors. If possible, test it yourself. For best results, use a different (top-notch) firewall to guard the internal network and have a monitoring device watching what happens in the middle.
