Entries Tagged "security policies"


Did Hezbollah Crack Israeli Secure Radio?

According to Newsday:

Hezbollah guerrillas were able to hack into Israeli radio communications during last month’s battles in south Lebanon, an intelligence breakthrough that helped them thwart Israeli tank assaults, according to Hezbollah and Lebanese officials.

Using technology most likely supplied by Iran, special Hezbollah teams monitored the constantly changing radio frequencies of Israeli troops on the ground. That gave guerrillas a picture of Israeli movements, casualty reports and supply routes. It also allowed Hezbollah anti-tank units to more effectively target advancing Israeli armor, according to the officials.

Read the article. Basically, the problem is operational error:

With frequency-hopping and encryption, most radio communications become very difficult to hack. But troops in the battlefield sometimes make mistakes in following secure radio procedures and can give an enemy a way to break into the frequency-hopping patterns. That might have happened during some battles between Israel and Hezbollah, according to the Lebanese official. Hezbollah teams likely also had sophisticated reconnaissance devices that could intercept radio signals even while they were frequency-hopping.

I agree with this comment from The Register:

Claims that Hezbollah fighters were able to use this intelligence to get some intelligence on troop movement and supply routes are plausible, at least to the layman, but ought to be treated with an appropriate degree of caution as they are substantially corroborated by anonymous sources.

But I’m even more skeptical. If Hezbollah was indeed able to do this, the last thing they would want is for it to appear in the press. And if they can’t do it, then a few good disinformation stories work in their favor.

Posted on September 20, 2006 at 2:35 PM

University Networks and Data Security

In general, the problems of securing a university network are no different than those of securing any other large corporate network. But when it comes to data security, universities have their own unique problems. It’s easy to point fingers at students—a large number of potentially adversarial transient insiders. Yet that’s really no different from a corporation dealing with an assortment of employees and contractors—the difference is the culture.

Universities are edge-focused; central policies tend to be weak, by design, with maximum autonomy for the edges. This means they have natural tendencies against centralization of services. Departments and individual professors are used to being semiautonomous. And because these institutions were established long before the advent of computers, networking, when it did begin to infuse universities, developed within existing administrative divisions. Some universities have academic departments with separate IT departments, budgets, and staff, with a central IT group providing bandwidth but little or no oversight. Unfortunately, these smaller IT groups don’t generally count policy development and enforcement as part of their core competencies.

The lack of central authority makes enforcing uniform standards challenging, to say the least. Most university CIOs have much less power than their corporate counterparts; securing a university-wide mandate for any security policy can be a major hurdle. This leads to an uneven security landscape.

There’s also a cultural tendency for faculty and staff to resist restrictions, especially in the area of research. Because most research is now done online—or, at least, involves online access—restricting the use of or deciding on appropriate uses for information technologies can be difficult. This resistance also leads to a lack of centralization and an absence of IT operational procedures such as change control, change management, patch management, and configuration control.

The result is that there’s rarely a uniform security policy. The centralized servers—the core where the database servers live—are generally more secure, whereas the periphery is a hodgepodge of security levels.

So, what to do? Unfortunately, solutions are easier to describe than implement. First, universities should take a top-down approach to securing their infrastructure. Rather than fighting an established culture, they should concentrate on the core infrastructure.

Then they should move personal, financial, and other comparable data into that core. Leave information important to departments and research groups to them, and centrally store information that’s important to the university as a whole. This can be done under the auspices of the CIO. Laws and regulations can help drive consolidation and standardization.

Next, enforce policies for departments that need to connect to the sensitive data in the core. This can be difficult with older legacy systems, but establishing a standard for best practices is better than giving up. All legacy technology is upgraded eventually.

Finally, create distinct segregated networks within the campus. Treat networks that aren’t under the IT department’s direct control as untrusted. Student networks, for example, should be firewalled to protect the internal core from them. The university can then establish levels of trust commensurate with the segregated networks’ adherence to policies. If a research network claims it can’t have any controls, then let the university create a separate virtual network for it, outside the university’s firewalls, and let it live there. Note, though, that if something or someone on that network wants to connect to sensitive data within the core, it’s going to have to agree to whatever security policies that level of data access requires.
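To make the tiered-trust idea concrete, here is a minimal sketch in Python; the zone names, attributes, and access rule are hypothetical illustrations of the approach described above, not a prescription for any particular campus.

    # Illustrative sketch only: models the tiered-trust scheme described above.
    # Zone names, attributes, and the access rule are hypothetical examples.
    from dataclasses import dataclass

    @dataclass
    class NetworkZone:
        name: str
        managed_by_central_it: bool   # under the IT department's direct control?
        agrees_to_core_policy: bool   # has it accepted the core's security policies?

    def may_reach_sensitive_core(zone: NetworkZone) -> bool:
        # A zone reaches the core only if it is centrally managed or has
        # explicitly agreed to the policies that core access requires.
        return zone.managed_by_central_it or zone.agrees_to_core_policy

    zones = [
        NetworkZone("core-servers", True, True),
        NetworkZone("student-dorms", False, False),
        NetworkZone("unmanaged-research-lab", False, False),
        NetworkZone("registrar-office", False, True),
    ]

    for z in zones:
        verdict = "allow" if may_reach_sensitive_core(z) else "deny (treat as untrusted)"
        print(f"{z.name:24s} -> core: {verdict}")

In practice the same decision would be expressed in firewall rules and VLAN assignments rather than application code; the point is simply that access to the core follows from a network's agreement to the core's policies, not from its location on campus.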

Securing university networks is an excellent example of the social problems surrounding network security being harder than the technical ones. But harder doesn’t mean impossible, and there is a lot that can be done to improve security.

This essay originally appeared in the September/October issue of IEEE Security & Privacy.

Posted on September 20, 2006 at 7:37 AM

Software Failure Causes Airport Evacuation

Last month I wrote about airport passenger screening, and mentioned that the X-ray equipment inserts “test” bags into the stream in order to keep screeners more alert. That system failed pretty badly earlier this week at Atlanta’s Hartsfield-Jackson Airport, when a false alarm resulted in a two-hour evacuation of the entire airport.

The screening system injects test images onto the screen. Normally the software flashes the words “This is a test” on the screen after a brief delay, but this time it failed to do so. The screener noticed the image (of a “suspicious device,” according to CNN) and, per procedure, screeners manually checked the bags on the conveyor belt for it. They couldn’t find it, of course, so they evacuated the airport and spent two hours vainly searching for it.

Hartsfield-Jackson is the country’s busiest passenger airport and Delta’s main hub. The delays were felt across the country for the rest of the day.

Okay, so what went wrong here? Clearly the software failed. Just as clearly the screener procedures didn’t fail—everyone did what they were supposed to do.

What is less obvious is that the system failed. It failed because it was not designed to fail well. A small failure—in this case, a software glitch in a single X-ray machine—cascaded in such a way as to shut down the entire airport. This kind of failure magnification is common in poorly designed security systems. A better design would have individual X-ray machines at the gates—I’ve seen this design at several European airports—so that when there’s a problem, the effects are restricted to that gate.
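As a back-of-the-envelope way to see the "failing well" argument, consider how far the same unresolved alarm propagates under each design; the function below is a hypothetical sketch, not a description of the actual screening software.

    # Hypothetical sketch of failure containment. The same glitch -- a test
    # image whose "this is a test" banner never appears -- disrupts the whole
    # airport under a centralized checkpoint, but only one gate under a
    # distributed, per-gate design.
    def blast_radius(design: str, test_banner_shown: bool) -> str:
        if test_banner_shown:
            return "screening continues normally"
        if design == "centralized checkpoint":
            return "unresolved alarm -> evacuate the entire airport"
        return "unresolved alarm -> hold and sweep a single gate"

    for design in ("centralized checkpoint", "per-gate machines"):
        print(f"{design:24s}: {blast_radius(design, test_banner_shown=False)}")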

Of course, this distributed security solution would be more expensive. But I’m willing to bet it would be cheaper overall, taking into account the cost of occasionally clearing out an airport.

Posted on April 21, 2006 at 12:49 PM

Proof that Employees Don't Care About Security

Does anyone think that this experiment would turn out any differently?

An experiment carried out within London’s square mile has revealed that employees in some of the City’s best known financial services companies don’t care about basic security policy.

CDs were handed out to commuters as they entered the City by employees of IT skills specialist The Training Camp and recipients were told the disks contained a special Valentine’s Day promotion.

However, the CDs contained nothing more than code which informed The Training Camp how many of the recipients had tried to open the CD. Among those who were duped were employees of a major retail bank and two global insurers.

The CD packaging even contained a clear warning about installing third-party software and acting in breach of company acceptable-use policies—but that didn’t deter many individuals who showed little regard for the security of their PC and their company.

This was a benign stunt, but it could have been much more serious. A CD-ROM carried into the office and run on a computer bypasses the company’s network security systems. You could easily imagine a criminal ring using this technique to deliver a malicious program into a corporate network—and it would work.

But concluding that employees don’t care about security is a bit naive. Employees care about security; they just don’t understand it. Computer and network security is complicated and confusing, and unless you’re technologically inclined, you’re just not going to have an intuitive feel for what’s appropriate and what’s a security risk. Even worse, technology changes quickly, and any security intuition an employee has is likely to be out of date within a short time.

Education is one way to deal with this, but education has its limitations. I’m sure these banks had security awareness campaigns; they just didn’t stick. Punishment is another form of education, and my guess is that it would be more effective. If the banks fired everyone who fell for the CD-ROM-on-the-street trick, you can be sure that no one would ever do that again. (At least, until everyone forgot.) That won’t ever happen, though, because the morale effects would be huge.

Rather than blaming this kind of behavior on the users, we would be better served by focusing on the technology. Why does the average computer user at a bank need the ability to install software from a CD-ROM? Why doesn’t the computer block that action, or at least inform the IT department? Computers need to be secure regardless of who’s sitting in front of them and regardless of what that person does.
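As one concrete example of pushing the control into the machine (offered only as an illustration of the kind of measure an IT department could deploy on Windows desktops of that era, not as what these banks actually did), AutoRun can be switched off for every drive type through the NoDriveTypeAutoRun policy value; here is a sketch using Python's standard winreg module.

    # Illustrative sketch: disable Windows AutoRun for all drive types by
    # setting the NoDriveTypeAutoRun policy value to 0xFF. Requires a Windows
    # host and administrative rights; in practice this would normally be
    # pushed out via Group Policy rather than a script.
    import winreg

    POLICY_KEY = r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer"

    def disable_autorun_for_all_drives() -> None:
        key = winreg.CreateKeyEx(
            winreg.HKEY_LOCAL_MACHINE, POLICY_KEY, 0, winreg.KEY_SET_VALUE
        )
        try:
            # 0xFF sets the "no AutoRun" bit for every drive-type category.
            winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD, 0xFF)
        finally:
            winreg.CloseKey(key)

    if __name__ == "__main__":
        disable_autorun_for_all_drives()
        print("AutoRun disabled for all drive types; takes effect at next logon.")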

If I go downstairs and try to repair the heating system in my home, I’m likely to break all sorts of safety rules—and probably the system and myself in the process. I have no experience in that sort of thing, and honestly, there’s no point trying to educate me. But my home heating system works fine without my having to learn anything about it. I know how to set my thermostat, and to call a professional if something goes wrong.

Computers need to work more like that.

Posted on February 20, 2006 at 8:11 AM

Company Continues Bad Information Security Practices

Stories about thefts of personal data are a dime a dozen these days, and are generally not worth writing about.

This one has an interesting coda, though.

An employee hoping to get extra work done over the weekend printed out 2004 payroll information for hundreds of SafeNet’s U.S. employees, snapped it into a briefcase and placed the briefcase in a car.

The car was broken into over the weekend and the briefcase stolen—along with the employees’ names, bank account numbers and Social Security numbers that were on the printouts, a company spokeswoman confirmed yesterday.

My guess is that most readers can point out the bad security practices here. One, the Social Security numbers and bank account numbers should not be kept with the bulk of the payroll data. Ideally, they should use employee numbers and keep sensitive (but irrelevant for most of the payroll process) information separate from the bulk of the commonly processed payroll data. And two, hard copies of that sensitive information should never go home with employees.
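A minimal sketch of that separation (the record layouts and identifiers below are made up for illustration): routine payroll processing, and any printout drawn from it, sees only an opaque employee number, while Social Security and bank account numbers sit in a separate, more tightly controlled store.

    # Illustrative sketch of splitting sensitive identifiers away from the
    # routinely handled payroll data. All names and values are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class PayrollRecord:          # what day-to-day payroll work (and printouts) needs
        employee_number: str      # opaque ID, not an SSN
        name: str
        gross_pay_2004: float

    @dataclass
    class SensitiveRecord:        # lives in a separate, access-controlled store
        employee_number: str
        ssn: str
        bank_account: str

    payroll = [PayrollRecord("E1041", "A. Example", 58000.00)]
    vault = {"E1041": SensitiveRecord("E1041", "xxx-xx-xxxx", "xxxxxxxx")}

    # A weekend printout drawn only from `payroll` exposes no SSNs or account
    # numbers; direct-deposit runs join against `vault` under stricter controls.
    for rec in payroll:
        print(rec.employee_number, rec.name, rec.gross_pay_2004)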

But SafeNet won’t learn from its mistake:

The company said no policies were violated, and that no new policies are being written as a result of this incident.

The irony here is that this is a security company.

Posted on May 10, 2005 at 3:00 PM

Bank Mandates Insecure Browser

The Australian bank Suncorp has just updated its terms and conditions for Internet banking. They impose a maximum withdrawal limit, hint at a physical access token, and require customers to use the most vulnerability-laden browser:

“suitable software” means Internet Explorer 5.5 Service Pack 2 or above or Netscape Navigator 6.1 or above running on Windows 98/ME/NT/2000/XP with anti-virus software or other software approved by us.

Posted on February 7, 2005 at 8:00 AM

