Entries Tagged "schools"


The Problem with "Hiring Hackers"

The Communications Director for Montana’s Congressman Denny Rehberg solicited “hackers” to break into the computer system at Texas Christian University and change his grades (so they would look better when he eventually ran for office, I presume). The hackers posted the email exchange instead. Very funny:

First, let’s be clear. You are soliciting me to break the law and hack into a computer across state lines. That is a federal offense and multiple felonies. Obviously I can’t trust anyone and everyone that mails such a request, you might be an FBI agent, right?

So, I need three things to make this happen:

1. A picture of a squirrel or pigeon on your campus. One close-up, one with background that shows buildings, a sign, or something to indicate you are standing on the campus.

2. The information I mentioned so I can find the records once I get into the database.

3. Some idea of what I get for all my trouble.

Posted on December 27, 2006 at 1:40 PM

Major Privacy Breach at UCLA

Hackers have gained access to a database containing personal information on 800,000 current and former UCLA students.

This is barely worth writing about: yet another database attack exposing personal information. My guess is that everyone in the U.S. has been the victim of at least one of these already. But there was a particular section of the article that caught my eye:

Jim Davis, UCLA’s associate vice chancellor for information technology, described the attack as sophisticated, saying it used a program designed to exploit a flaw in a single software application among the many hundreds used throughout the Westwood campus.

“An attacker found one small vulnerability and was able to exploit it, and then cover their tracks,” Davis said.

It worries me that the associate vice chancellor for information technology doesn’t understand that all attacks work like that.

Posted on December 13, 2006 at 6:43 AM

Bulletproof Textbooks

You can’t make this stuff up:

A retired veteran and candidate for Oklahoma State School Superintendent says he wants to make schools safer by creating bulletproof textbooks.

Bill Crozier says the books could give students and teachers a fighting chance if there’s a shooting at their school.

Can you just imagine the movie-plot scenarios going through his head? Does he really think this is a smart way to spend security dollars?

I just shake my head in wonder….

Posted on November 3, 2006 at 12:11 PM

University Networks and Data Security

In general, the problems of securing a university network are no different than those of securing any other large corporate network. But when it comes to data security, universities have their own unique problems. It’s easy to point fingers at students — a large number of potentially adversarial transient insiders. Yet that’s really no different from a corporation dealing with an assortment of employees and contractors — the difference is the culture.

Universities are edge-focused; central policies tend to be weak, by design, with maximum autonomy for the edges. This means they have natural tendencies against centralization of services. Departments and individual professors are used to being semiautonomous. Because these institutions were established long before the advent of computers, when networking did begin to infuse universities, it developed within existing administrative divisions. Some universities have academic departments with separate IT departments, budgets, and staff, with a central IT group providing bandwidth but little or no oversight. Unfortunately, these smaller IT groups don’t generally count policy development and enforcement as part of their core competencies.

The lack of central authority makes enforcing uniform standards challenging, to say the least. Most university CIOs have much less power than their corporate counterparts; university mandates can be a major obstacle in enforcing any security policy. This leads to an uneven security landscape.

There’s also a cultural tendency for faculty and staff to resist restrictions, especially in the area of research. Because most research is now done online — or, at least, involves online access — restricting the use of or deciding on appropriate uses for information technologies can be difficult. This resistance also leads to a lack of centralization and an absence of IT operational procedures such as change control, change management, patch management, and configuration control.

The result is that there’s rarely a uniform security policy. The centralized servers — the core where the database servers live — are generally more secure, whereas the periphery is a hodgepodge of security levels.

So, what to do? Unfortunately, solutions are easier to describe than implement. First, universities should take a top-down approach to securing their infrastructure. Rather than fighting an established culture, they should concentrate on the core infrastructure.

Then they should move personal, financial, and other comparable data into that core. Leave information important to departments and research groups to them, and centrally store information that’s important to the university as a whole. This can be done under the auspices of the CIO. Laws and regulations can help drive consolidation and standardization.

Next, enforce policies for departments that need to connect to the sensitive data in the core. This can be difficult with older legacy systems, but establishing a standard for best practices is better than giving up. All legacy technology is upgraded eventually.

Finally, create distinct segregated networks within the campus. Treat networks that aren’t under the IT department’s direct control as untrusted. Student networks, for example, should be firewalled to protect the internal core from them. The university can then establish levels of trust commensurate with the segregated networks’ adherence to policies. If a research network claims it can’t have any controls, then let the university create a separate virtual network for it, outside the university’s firewalls, and let it live there. Note, though, that if something or someone on that network wants to connect to sensitive data within the core, it’s going to have to agree to whatever security policies that level of data access requires.
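The segmentation approach described above can be sketched as a firewall policy. The following is a minimal, hypothetical nftables fragment — the network names, address ranges, and the single permitted database port are illustrative assumptions, not a prescription for any particular campus:

```nft
# Hypothetical sketch: the core data-center segment is 10.0.0.0/24,
# the student network is 10.1.0.0/16, and 10.2.0.7 is the one
# policy-compliant host on an otherwise untrusted research network.
# All ranges and ports are illustrative only.

table inet campus {
    chain forward {
        type filter hook forward priority 0; policy drop;

        # Allow reply traffic for connections already permitted.
        ct state established,related accept

        # Core servers may initiate connections anywhere.
        ip saddr 10.0.0.0/24 accept

        # Student network: outbound access is fine, but nothing
        # from it may reach the core.
        ip saddr 10.1.0.0/16 ip daddr != 10.0.0.0/24 accept

        # Research network lives outside the firewall by default;
        # only an explicitly approved host reaches the core, and
        # only on the database port it has agreed to policies for.
        ip saddr 10.2.0.7 ip daddr 10.0.0.0/24 tcp dport 5432 accept
    }
}
```

The default-drop forward policy is what makes the untrusted-by-default stance concrete: a segment earns connectivity to the core only by an explicit rule, which maps directly to its agreeing to the security policies that level of data access requires.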

Securing university networks is an excellent example of the social problems surrounding network security being harder than the technical ones. But harder doesn’t mean impossible, and there is a lot that can be done to improve security.

This essay originally appeared in the September/October issue of IEEE Security & Privacy.

Posted on September 20, 2006 at 7:37 AM

What is a Hacker?

A hacker is someone who thinks outside the box. It’s someone who discards conventional wisdom, and does something else instead. It’s someone who looks at the edge and wonders what’s beyond. It’s someone who sees a set of rules and wonders what happens if you don’t follow them. A hacker is someone who experiments with the limitations of systems for intellectual curiosity.

I wrote that last sentence in the year 2000, in my book Secrets and Lies. And I’m sticking to that definition.

This is what else I wrote in Secrets and Lies (pages 43-44):

Hackers are as old as curiosity, although the term itself is modern. Galileo was a hacker. Mme. Curie was one, too. Aristotle wasn’t. (Aristotle had some theoretical proof that women had fewer teeth than men. A hacker would have simply counted his wife’s teeth. A good hacker would have counted his wife’s teeth without her knowing about it, while she was asleep. A good bad hacker might remove some of them, just to prove a point.)

When I was in college, I knew a group similar to hackers: the key freaks. They wanted access, and their goal was to have a key to every lock on campus. They would study lockpicking and learn new techniques, trade maps of the steam tunnels and where they led, and exchange copies of keys with each other. A locked door was a challenge, a personal affront to their ability. These people weren’t out to do damage — stealing stuff wasn’t their objective — although they certainly could have. Their hobby was the power to go anywhere they wanted to.

Remember the phone phreaks of yesteryear, the ones who could whistle into payphones and make free phone calls. Sure, they stole phone service. But it wasn’t like they needed to make eight-hour calls to Manila or McMurdo. And their real work was secret knowledge: The phone network was a vast maze of information. They wanted to know the system better than the designers, and they wanted the ability to modify it to their will. Understanding how the phone system worked — that was the true prize. Other early hackers were ham-radio hobbyists and model-train enthusiasts.

Richard Feynman was a hacker; read any of his books.

Computer hackers follow these evolutionary lines. Or, they are the same genus operating on a new system. Computers, and networks in particular, are the new landscape to be explored. Networks provide the ultimate maze of steam tunnels, where a new hacking technique becomes a key that can open computer after computer. And inside is knowledge, understanding. Access. How things work. Why things work. It’s all out there, waiting to be discovered.

Computers are the perfect playground for hackers. Computers, and computer networks, are vast treasure troves of secret knowledge. The Internet is an immense landscape of undiscovered information. The more you know, the more you can do.

And it should be no surprise that many hackers have focused their skills on computer security. Not only is it often the obstacle between the hacker and knowledge, and therefore something to be defeated, but also the very mindset necessary to be good at security is exactly the same mindset that hackers have: thinking outside the box, breaking the rules, exploring the limitations of a system. The easiest way to break a security system is to figure out what the system’s designers hadn’t thought of: that’s security hacking.

Hackers cheat. And breaking security regularly involves cheating. It’s figuring out a smart card’s RSA key by looking at the power fluctuations, because the designers of the card never realized anyone could do that. It’s self-signing a piece of code, because the signature-verification system didn’t think someone might try that. It’s using a piece of a protocol to break a completely different protocol, because all previous security analysis only looked at protocols individually and not in pairs.

That’s security hacking: breaking a system by thinking differently.

It all sounds criminal: recovering encrypted text, fooling signature algorithms, breaking protocols. But honestly, that’s just the way we security people talk. Hacking isn’t criminal. All the examples two paragraphs above were performed by respected security professionals, and all were presented at security conferences.

I remember one conversation I had at a Crypto conference, early in my career. It was outside amongst the jumbo shrimp, chocolate-covered strawberries, and other delectables. A bunch of us were talking about some cryptographic system, including Brian Snow of the NSA. Someone described an unconventional attack, one that didn’t follow the normal rules of cryptanalysis. I don’t remember any of the details, but I remember my response after hearing the description of the attack.

“That’s cheating,” I said.

Because it was.

I also remember Brian turning to look at me. He didn’t say anything, but his look conveyed everything. “There’s no such thing as cheating in this business.”

Because there isn’t.

Hacking is cheating, and it’s how we get better at security. It’s only after someone invents a new attack that the rest of us can figure out how to defend against it.

For years I have refused to play the semantic “hacker” vs. “cracker” game. There are good hackers and bad hackers, just as there are good electricians and bad electricians. “Hacker” is a mindset and a skill set; what you do with it is a different issue.

And I believe the best computer security experts have the hacker mindset. When I look to hire people, I look for someone who can’t walk into a store without figuring out how to shoplift. I look for someone who can’t test a computer security program without trying to get around it. I look for someone who, when told that things work in a particular way, immediately asks how things stop working if you do something else.

We need these people in security, and we need them on our side. Criminals are always trying to figure out how to break security systems. Field a new system — an ATM, an online banking system, a gambling machine — and criminals will try to make an illegal profit off it. They’ll figure it out eventually, because some hackers are also criminals. But if we have hackers working for us, they’ll figure it out first — and then we can defend ourselves.

It’s our only hope for security in this fast-moving technological world of ours.

This essay appeared in the Summer 2006 issue of 2600.

Posted on September 14, 2006 at 7:13 AM

Cheating on Tests

“How to Cheat Good.”

Edit > Paste Special > Unformatted Text

This is my Number 1 piece of advice, even if it is numbered eight. When you copy things from the web into Word, ignoring #3 above, don’t just “Edit > Paste” it into your document. When I am reading a document in black, Times New Roman, 12pt, and it suddenly changes to blue, Helvetica, 10pt (yes, really), I’m going to guess that something odd may be going on. This seems to happen in about 1% of student work turned in, and periodically makes me feel like becoming a hermit.

Posted on May 25, 2006 at 12:26 PM

Al Qaeda Hacker Captured

Irhabi 007 has been captured.

For almost two years, intelligence services around the world tried to uncover the identity of an Internet hacker who had become a key conduit for al-Qaeda. The savvy, English-speaking, presumably young webmaster taunted his pursuers, calling himself Irhabi — Terrorist — 007. He hacked into American university computers, propagandized for the Iraq insurgents led by Abu Musab al-Zarqawi and taught other online jihadists how to wield their computers for the cause.

Assuming the British authorities are to be believed, he definitely was a terrorist:

Suddenly last fall, Irhabi 007 disappeared from the message boards. The postings ended after Scotland Yard arrested a 22-year-old West Londoner, Younis Tsouli, suspected of participating in an alleged bomb plot. In November, British authorities brought a range of charges against him related to that plot. Only later, according to our sources familiar with the British probe, was Tsouli’s other suspected identity revealed. British investigators eventually confirmed to us that they believe he is Irhabi 007.

[…]

Tsouli has been charged with eight offenses including conspiracy to murder, conspiracy to cause an explosion, conspiracy to cause a public nuisance, conspiracy to obtain money by deception and offences relating to the possession of articles for terrorist purposes and fundraising.

Okay. So he was a terrorist. And he used the Internet, both as a communication tool and to break into networks. But this does not make him a cyberterrorist.

Interesting article, though.

Here’s the Slashdot thread on the topic.

Posted on March 28, 2006 at 7:27 AM

School Bus Drivers to Foil Terrorist Plots

This is a great example of a movie-plot threat:

Already mindful of motorists with road rage and kids with weapons, bus drivers are being warned of far more grisly scenarios. Like this one: Terrorists monitor a punctual driver for weeks, then hijack a bus and load the friendly yellow vehicle with enough explosives to take down a building.

It’s so bizarre it’s comical.

But don’t worry:

An alert school bus driver could foil that plan, security expert Jeffrey Beatty recently told a class of 250 drivers in Norfolk, Va.

So we’re funding counterterrorism training for school bus drivers:

Financed by the Homeland Security Department, school bus drivers are being trained to watch for potential terrorists, people who may be casing their routes or plotting to blow up their buses.

[…]

The new effort is part of Highway Watch, an industry safety program run by the American Trucking Associations and financed since 2003 with $50 million in homeland security money.

So far, tens of thousands of bus operators have been trained in places large and small, from Dallas and New York City to Kure Beach, N.C., Hopewell, Va., and Mount Pleasant, Texas.

The commentary borders on the surreal:

Kenneth Trump, a school safety consultant who tracks security trends, said being prepared is not being alarmist. “Denying and downplaying schools and school buses as potential terror targets here in the U.S.,” Trump said, “would be foolish.”

This is certainly a complete waste of money. Possibly it’s even bad for security, as bus drivers have to divide their attention between real threats — automobile accidents involving children — and movie-plot terrorist threats. And there’s the ever-creeping surveillance society:

“Today it’s bus drivers, tomorrow it could be postal officials, and the next day, it could be, ‘Why don’t we have this program in place for the people who deliver the newspaper to the door?’ ” Rollins said. “We could quickly get into a society where we’re all spying on each other. It may be well intentioned, but there is a concern of going a bit too far.”

What should we do with this money instead? We should fund things that actually help defend against terrorism: intelligence, investigation, emergency response. Trying to correctly guess what the terrorists are planning is generally a waste of resources; investing in security countermeasures that will help regardless of what the terrorists are planning is much smarter.

Posted on February 21, 2006 at 9:07 AM
