University Networks and Data Security

In general, the problems of securing a university network are no different than those of securing any other large corporate network. But when it comes to data security, universities have their own unique problems. It’s easy to point fingers at students—a large number of potentially adversarial transient insiders. Yet that’s really no different from a corporation dealing with an assortment of employees and contractors—the difference is the culture.

Universities are edge-focused; central policies tend to be weak, by design, with maximum autonomy for the edges. This means they have natural tendencies against centralization of services. Departments and individual professors are used to being semiautonomous. Because these institutions were established long before the advent of computers, when networking did begin to infuse universities, it developed within existing administrative divisions. Some universities have academic departments with separate IT departments, budgets, and staff, with a central IT group providing bandwidth but little or no oversight. Unfortunately, these smaller IT groups don’t generally count policy development and enforcement as part of their core competencies.

The lack of central authority makes enforcing uniform standards challenging, to say the least. Most university CIOs have much less power than their corporate counterparts; university mandates can be a major obstacle in enforcing any security policy. This leads to an uneven security landscape.

There’s also a cultural tendency for faculty and staff to resist restrictions, especially in the area of research. Because most research is now done online—or, at least, involves online access—restricting the use of or deciding on appropriate uses for information technologies can be difficult. This resistance also leads to a lack of centralization and an absence of IT operational procedures such as change control, change management, patch management, and configuration control.

The result is that there’s rarely a uniform security policy. The centralized servers—the core where the database servers live—are generally more secure, whereas the periphery is a hodgepodge of security levels.

So, what to do? Unfortunately, solutions are easier to describe than implement. First, universities should take a top-down approach to securing their infrastructure. Rather than fighting an established culture, they should concentrate on the core infrastructure.

Then they should move personal, financial, and other comparable data into that core. Leave information important to departments and research groups to them, and centrally store information that’s important to the university as a whole. This can be done under the auspices of the CIO. Laws and regulations can help drive consolidation and standardization.

Next, enforce policies for departments that need to connect to the sensitive data in the core. This can be difficult with older legacy systems, but establishing a standard for best practices is better than giving up. All legacy technology is upgraded eventually.

Finally, create distinct segregated networks within the campus. Treat networks that aren’t under the IT department’s direct control as untrusted. Student networks, for example, should be firewalled to protect the internal core from them. The university can then establish levels of trust commensurate with the segregated networks’ adherence to policies. If a research network claims it can’t have any controls, then let the university create a separate virtual network for it, outside the university’s firewalls, and let it live there. Note, though, that if something or someone on that network wants to connect to sensitive data within the core, it’s going to have to agree to whatever security policies that level of data access requires.
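To make the segmentation concrete, here is a minimal firewall sketch, assuming a Linux router with a hypothetical eth0 interface facing the core and eth1 facing a student network; the interface names, addresses, and ports are invented for illustration, and a real campus would use dedicated firewalls and far richer rule sets.

    # Sketch only: default-deny forwarding from an untrusted student network (eth1)
    # into the core (eth0), allowing a single service through.
    iptables -P FORWARD DROP                                             # drop by default
    iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT   # allow return traffic
    # Students may reach a (hypothetical) registration web front end over HTTPS only.
    iptables -A FORWARD -i eth1 -o eth0 -p tcp -d 10.1.1.10 --dport 443 -j ACCEPT
    # Everything else from the student network toward the core falls through
    # to the default DROP policy above.

The same pattern extends to a research network that opts out of controls: it simply gets no ACCEPT rules into the core until it agrees to the relevant policies.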

Securing university networks is an excellent example of the social problems surrounding network security being harder than the technical ones. But harder doesn’t mean impossible, and there is a lot that can be done to improve security.

This essay originally appeared in the September/October 2006 issue of IEEE Security & Privacy.

Posted on September 20, 2006 at 7:37 AM

Comments

raschi September 20, 2006 8:15 AM

Actually, decentralization isn't the only problem; there's also the freedom of teaching and research. At least in Europe, but also at the U.S. university I went to, professors don't want to be limited in how they can use IT resources in their research. This applies especially to computer science departments, but business departments, for example, are not much different anymore, and I would assume that other fields have caught up as well.
Academic freedom is also a deeply held cultural and political value; in some countries, such as Germany, this freedom is even laid down in the constitution (if I'm not terribly mistaken). And of course, limitations via policies and rules and such have a tendency to encroach on that freedom in practice.

Greg September 20, 2006 8:19 AM

I think you are partly right. There is still the PHB (Pointy-Haired Boss, aka Dilbert) syndrome at unis. We had some major issues when the new VC wanted MS express as the main mail server. Well, let's just say that didn't work out, and departments that had used the centralized mail system since ARPANET days got their own systems after that fiasco.

In fact, it's often politics, or bosses who know nothing about technology, calling the shots.

But you really are spot on about departments needing to look after their own important stuff.

Dimitris Andrakakis September 20, 2006 8:28 AM

@raschi :

In Greece too. For example, the police are forbidden to enter any academic campus without authorization from the university itself.

gal_sec September 20, 2006 8:50 AM

Bruce, I’m with you on this.

Divide and conquer: if you cannot secure everything, secure the most important parts.

difference September 20, 2006 9:22 AM

There is one significant difference: in a corporate environment, much if not all of the technology is owned by some part of the corporation. At universities, students and faculty often own their own equipment, and along with ownership comes control. It's easy to say, "you can't use our network unless you secure x, y, and z on your computer," but it is a lot harder to implement. Universities should, by design, be untrusted networks, with small trusted segments (if any) and completely secured hosts.

John Davies September 20, 2006 9:25 AM

"Student networks, for example, should be firewalled to protect the internal core from them."

I’d recommend a wide moat and boiling oil as well 🙂

Beth September 20, 2006 9:33 AM

As a former university DBA, I can say your article resonates. In U.S. universities we are required to meet a slew of privacy regulations while still enabling open access to information where needed.

I do NOT mean to imply that faculty/researchers/administrators don’t care about securing data. They’re well-meaning folks. It’s been my experience that they often lack knowledge of the risks.

Compounding the issue, their departments are often poorly-funded; they typically rely on CS students to manage departmental servers. Many students are bright and capable, and many are good at giving that impression. (The same can be said of us non-student workers.) Regardless, they need guidance from professionals versed in the regulations and risks.

Carlo Graziani September 20, 2006 9:34 AM

I don’t disagree with any specific recommendation here, so much as with the corporate CIO-ish outlook that informs the article.

The fact of the matter is that Universities are not just weird corporations. Their core mission is research, education, and communication, which makes their institutional character fundamentally different from and incommensurable with for-profit corporations.

Our IT services are the way they are, and frighten the bejeesus out of corporate CIOs, because they are designed to enable the core mission of the University, rather than being organized around a fund and source of proprietary information. To the extent that IT security “reform” hinders that core mission, it is simply not acceptable.

IT Security is not an absolute value, but must rather be balanced against other competing values. Increased security always comes at the expense of openness, and openness is a crucial value in academia. The compromise between security and openness that is appropriate to corporate IT environments is totally inappropriate to Universities. I would have appreciated this article more if it had placed a greater emphasis on the central importance of openness in academic IT.

AcademiaITguy September 20, 2006 10:02 AM

The greatest IT security threat I've witnessed is the egos of faculty members who think they are infallible while serving students' SSNs from their own web servers, unencrypted and unprotected, to the Internet (a true story involving MIS faculty). Thank goodness a student found it and let us know before something bad happened.

E. Abbey September 20, 2006 10:12 AM

Agreed. College networks are Swiss cheese.

What I especially enjoy is, once I have installed my rootkits on a professor's PC, doing a file search for "FinalDraft*.Doc".

Usually I find one or two…sometimes more on a given machine. I can then open these files and transpose a few numbers that look “key”. Or just add a “not” (or delete a “not”) in the text.

It’s great fun.

another_bruce September 20, 2006 10:35 AM

What are the real issues here? Keeping confidential information confidential works the same way at colleges as it does in businesses: central, secure location, firewalls/airgaps, thoughtful policies enforced.
During any article on academic computer security, I wait for mention of the other major concern: stopping students from trading copyrighted music files! It's a good thing the internet didn't exist when I was in college or I might never have graduated.

Ale September 20, 2006 10:41 AM

I would emphasize not only openness, but the heterogeneity of requirements that researchers have. In many companies, a simple setup with common office applications might suffice for almost everybody. Within academia, the hardware and software requirements of chemistry, CS, or biology researchers diverge widely. Trying to impose some sort of centralized standard on any research group is hopeless. Most of the time, these groups will need esoteric hardware and in-house, bleeding edge code – usually running with high privileges.

I agree with Bruce: A guarded core for university-wide data that has strong legal implications, and multiple, disjoint networks with varying levels of trust (and clear security perimeters) for the research groups.

Mark Rose September 20, 2006 11:06 AM

One potential problem that hasn't been mentioned is the privacy of research subjects' personal data. Having a secure "core" won't help someone who participates in a study, gets paid, and then has his or her SSN and other data stored on some insecure departmental server. It seems that some policies, such as privacy policies, must be centrally imposed, even if they get in the way of academic openness. Perhaps one needs both a strong privacy policy and a trusted server in each department, managed by IT, dedicated to that purpose.

emplID withheld September 20, 2006 11:09 AM

I must note that there is a big mistake in likening a University (or in our case here, a University System made up of several Universities and other state-related parties) to a standard corporation. We are technically an ISP without ISP status on top of all of our other enterprise-related duties. This makes the argument that ‘students are like employees on the corporate network’ fall flat on its face. They aren’t, and the law makes that quite clear. I can analyze traffic between an employee’s machine and a server I admin ’till the cows come home if I want, but if I do that to a student’s machine I could be facing jail time (minor hyperbole, but the point should be clear). Both are on the “same network”–but they are not the same.
Universities operate primarily around things being open, and that philosophy seems to have worked even for the non-enterprise computing architecture. The core stuff all lives on a private LAN for a reason. As for the infrastructure, our network people do their best to make sure that people don't do dumb things like running wired Ethernet outdoors, but it drives me nuts to hear the networking "chief" claim that he can regulate the use of 802.11b/g technology and 2.4 GHz phones on campus (the FCC says no on that, BTW). Either we are an ISP or we aren't, but wait, we're a University too. We don't fit in the same box as a regular "atomic" corporation.

derf September 20, 2006 11:39 AM

Mention a firewall within five miles of a university network and mobs of angry faculty with torches and nearly naked students carrying pitchforks will mercilessly hunt you down. You'll be tarred and feathered, then drawn and quartered, while these heathens dance around your carcass crying "culture" and chanting "tradition" and "openness".

Trust me – it isn’t a pretty sight.

Carlo Graziani September 20, 2006 11:52 AM

@derf:

That’s not really accurate. Usually the faculty have the students carry their torches for them.

sconzey September 20, 2006 12:05 PM

A mate of mine is just about to start a course in Maths at St Peter's College, Oxford.

He's bringing a desktop computer with no wireless capabilities, so I was very surprised when he told me they had asked him for the MAC address of his NIC…

Reasonable security measure? Or paranoid over-reaction?

Lee September 20, 2006 12:15 PM

@Sconzey – They did this at my uni too, both for wireless and Ethernet. I couldn't log on to anything until I had submitted my MAC address. I think it's a pretty standard thing for academic institutions to do.

MCP September 20, 2006 12:30 PM

In the US, don’t forget you have to deal with FERPA. The federal government requires that certain data MUST be protected, and some of that data resides in Professor’s accounts– grades, and SSN’s in particular.

And then, if you have a CS program with a security class… oy.

Pat Cahalan September 20, 2006 12:57 PM

@ emplID

"Both are on the 'same network'–but they are not the same."

That's part of the point of the article. You don't necessarily need to run portscans of the student machines: move them onto their own network, and for the purposes of the security policy in the "core", assume that they are untrusted. If you can't legally scan them, don't scan them… but don't allow them to talk to your financial servers 🙂

@ Carlo

"To the extent that IT security 'reform' hinders that core mission, it is simply not acceptable."

This is an expectation problem, not a technical problem, and one I deal with daily.

"Increased security always comes at the expense of openness, and openness is a crucial value in academia."

I agree that openness is a crucial value in academia, but I dispute that increased security always comes at the expense of openness – I think that’s flat out wrong. Increased security comes at the expense of authorization, which may or may not have an effect on openness.

The problem is that increased security, which decreases authorization, necessarily changes information sharing processes. This affects openness, but does not have to remove openness. In a very real way, it can enable openness, because hacked machines need to be removed from the network (removing services and information), whereas more secure machines can continue to function.

Simply put, a great many security steps are regarded as unacceptable because people don’t want to change process. That’s understandable, but hardly realistic in the long run. It’s not even (really) a security problem – people change processes all the time, they just change them without taking security into account.

In fact, I would argue that decreased security has a much greater effect in the long run on openness than increased security.

@ Ale

"Most of the time, these groups will need esoteric hardware and in-house, bleeding edge code – usually running with high privileges."

All of which can be allowed and segregated from production systems and protected from the general internet. Running a logic analyzer with an embedded OS is fine. Running a logic analyzer on a network with a workstation to capture data is fine. Running it on an untrusted network is insane. Sooner or later it will be hacked, and then you can’t use it at all.

David September 20, 2006 1:09 PM

Hmm, some of the items mentioned, including the "hear, hear" from the commenters, don't resonate with me.

Many universities block their students off unless they are "approved" via MAC address filtering (several do this). Others split PCs that don't meet certain standards off into a separate VLAN where they can't do anything except update their PCs (U.F.). There are others (U.T.?) that do very fancy firewall blocking to keep unpatched Windows NT machines on the internet and yet bug-free.

I'm sure it's not as good as it could be, but there are universities that handle security quite well.

Brian September 20, 2006 1:13 PM

The comments about professors with students' grades and SSNs on their workstations struck a chord with me. Here is a case where security and openness really are in conflict. Convincing the registrar to lock down their network should be relatively easy, from a political perspective. But the research workstations are another problem entirely.

Do you lock down the workstation, thus interfering with the professor’s research? (Cue mob with torches.)

Or do you tell them not to keep grades on there? (Good luck with that.)

It seems like a good solution would be to make sure that professors don't need access to things like SSNs. Make sure students' ID numbers are not their SSNs and, even better, that they do not need to be kept secret.

Sure, maybe the professor’s machine gets hacked, and some information gets stolen. But there is no reason that the workstation needs to have all of the information about the student, just their grade for one class.

Mark J. September 20, 2006 2:53 PM

Some observations from a university insider. Many smaller departments (like mine) have one "IT guy." This was fine in days of yore when you had one server for email, files, logon, etc. When I started my current job, we had 3 servers – web, PDC/file, and email. Now we have 14. We've also gone from a Cat 3 10 Mbps network to Cat 6 100/1000 Mbps. Remote access and distance learning are the new "toys." Everyone wants them. New apps like SharePoint require an SQL server. Much larger files require several file servers. Business continuity initiatives mean you need one or more off-site servers. Wireless is all the rage, so we have that now, too.

I’m still the only “IT guy.” How do you get around the idea that “we’ve always gotten by with one IT guy” without looking like you can’t follow in the footsteps of your predecessor? (The guy with 3 servers).

Another difference between academia and business: if our systems go down, it's a hassle, but no one is losing money. In business, if your primary systems go down or get hacked, big bucks are lost. And in most cases, so is your job.

Mark J. September 20, 2006 2:59 PM

I should note here that I'm not complaining. Just making observations. Our university is incredible when it comes to security and robust service. The wizards at campus level have really gone the extra mile to ensure our system is safe and highly available. Plug an infected PC or laptop into our network and you'll be blocked by MAC address in seconds. Sure, there will be PCs that get hacked. But campus IT and campus security are very quick to isolate them, and it's a public humiliation for the net admin whose network got hacked. That keeps most net admins here on their toes.

jny September 20, 2006 3:21 PM

"And then, if you have a CS program with a security class… oy."

Actually, it can be quite amusing when the “network police” show up during a “Hack the network” class and bust one of the students for doing something dumb.

We had that situation years ago, where someone was bridging between the wireless and wired networks in a lecture hall, in the years before we were able to upgrade to switches with a sane port security policy. This generally caused problems on wireless and overloaded the DHCP servers with hosts on "the wrong network", so the SOP was to remotely shut down the wired port that was participating. If the remote port was on a hub, the whole hub would often lock up and be useless, so a tech would have to go "divide and conquer."

So after determining the offending patch to the lecture hall, a tech walked over while class was running, and interrupted to point out which seat in the hall was causing the network problem. It certainly made an impression, and from then on, we had no further network problems stemming from that class. It demonstrated that a disruptive attack would bring a response, for sure!
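For readers wondering what "a sane port security policy" might look like, here is only an illustrative Cisco IOS sketch (the interface name is invented) that limits an access port to a single learned MAC address and shuts the port down if a bridging device shows up:

    ! Sketch, not a tested config: lock a lecture-hall access port to one MAC
    interface FastEthernet0/12
     switchport mode access
     switchport port-security
     switchport port-security maximum 1
     switchport port-security mac-address sticky
     switchport port-security violation shutdown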

Jamie Riden September 20, 2006 3:45 PM

Spot on, as usual.

Funnily enough, I cited one of Bruce's papers when writing the case for the first IDS machine I put into our campus network. In this case it was Snort, and I would probably have had a nervous breakdown without it.

Christoph Zurnieden September 20, 2006 4:10 PM

Excuse my ignorance, but how does knowledge of a MAC address increase security?
It's not needed for machines controlled by the administrators (except for proper use), and it can be changed to nearly arbitrary values on uncontrolled machines. Anyone in network contact with the machine can read the MAC and put it to any unholy use, e.g. a DoS (if a colleague uses your login+MAC for some "fun" and your MAC gets blocked just hours before the deadline…). The MAC is just a publicly known password in the cases described in other posts above.
No, sorry, I can't see any advantage in using the MAC for security purposes, but I might be wrong.

CZ
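To make the point concrete: on a typical Linux machine the MAC address can be changed in seconds with standard tools, so it is an identifier, not a secret. The interface name and address below are made up.

    # Sketch: spoofing a MAC address with iproute2 on Linux; values are examples only
    ip link set dev eth0 down
    ip link set dev eth0 address 00:11:22:33:44:55   # impersonate a registered machine
    ip link set dev eth0 up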

Stefan Wagner September 20, 2006 6:09 PM

When I was at university, I attended 5 different institutions for my major and minor fields of study.

My experience is only from a student-consumer point of view.

In contrast to corporate networks, where you have your office and coworkers, at a university nobody knows anyone.
Students come and go at short intervals, and profs do too.
You need an account immediately and will never officially give it up; you just stop using it.
In most departments you can attend a tutorial without showing any credentials and nobody will care; only to take a test will you need to show your papers.

Computing (EDV) courses often start with a login and password for the whole course written on the board (to keep administration easy and to keep a lot of students with lost passwords from annoying the admins).
sem96 and sem9697 were really popular – I guess sem0607 would work today. 🙂

Wi-Fi, laptops, and music were not an issue in those days; viruses, pictures, and software were.
The almost completely insecure student PCs were separated from the serious parts of the LAN and were often out of order due to viruses.

I guess a seriously secured network for the students would just be too expensive.

The MAC addresses mentioned above fit into this picture: put them into dhcpd.conf and easily tie them to a certain subnet.
Easy administration – not security.
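The registration described above often amounts to little more than an ISC dhcpd.conf entry like this sketch (the subnet, hostname, MAC, and addresses are invented):

    # Sketch: only registered MACs get a lease on the student subnet
    subnet 10.20.0.0 netmask 255.255.0.0 {
      pool {
        deny unknown-clients;                 # unregistered machines get nothing
        range 10.20.100.10 10.20.254.250;
      }
    }
    host student-laptop-4711 {
      hardware ethernet 00:16:3e:12:34:56;    # the MAC the student submitted
      fixed-address 10.20.1.42;
    }

As Christoph points out above, this is an administrative convenience rather than a security control, since the MAC is trivially spoofable.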

Ralph September 20, 2006 7:21 PM

This is a good example of risk priority in action.

Even without the added complexity of university culture, the process of deciding what to spend and where is very similar.

Unless IT has high political standing, there are always barriers inside any organisation of size that are hard to overcome.

The interesting story here would be the attempt to get the above plan implemented!

Mke J September 20, 2006 8:08 PM

“The interesting story here would be the attempt to get the above plan implemented!”

We have been implementing the above plan for the last 4 or so years. We are far enough along to be able to say that it can be done.

We have 200,000 students and 17,000 staff at 80 sites. All networks are firewalled (since 2002) with a default deny policy. Core databases and applications with student data are firewalled separately from all other networks.

We are in the process of classifying all campus network segments, moving all sensitive data to secured networks, and controlling access from insecure to secure networks with policy, firewall rules, authentication, and encryption.

It is a lot of work, but it is doable.

Davi Ottenheimer September 21, 2006 2:49 AM

Interesting points, but unless I missed something you did not mention academic freedom from a centrally managed authority (also raised by raschi above). Professors work under the aegis and affiliation of a university or college, but they also probably want the management of information to be as unrestricted as possible to allow the free flow of ideas. That, to me, has been one of the most powerful counter-arguments when trying to provide a meaningful security service to the various faculty who collaborate around the world.

I feel I should also point out that the openness of the academic environment is precisely what led to the interconnectedness we both enjoy and curse today. The double-edged sword of their (and MCI’s) resistance to a powerful central authority, or at least a notion that we could strive for a more pedestrian notion of accountability, gave us the Internet.

Much of what you suggest, therefore, is meaningless without details of a system of representation and governance. That seems increasingly to be the Achilles' heel of information security. When you tell people they will be better off giving control of their most sensitive data to someone else, do you or they know what rights and representation they will really have?

Your suggestion for "a top-down approach to securing their infrastructure" can mean just about anything, from an elected leader to a fascist dictator calling the shots. You are essentially advocating for governance, but at such a vague level that it is almost impossible to take as meaningful.

Managing the economics of security in a public educational institution can also be quite different and would likely impede your notion of central control. For one thing, funding often has rigid lines of use/guidance (from external governance). Another is that one person’s fringe is another person’s norm, as shown by boom/bust technology movements. I could give a dozen real-world examples but the bottom line is that non-central groups often have funding that holds them to a much higher standard than central authorities are capable or even allowed to support. The market/opportunity can thus be another powerful counter-argument.

In other words, "it depends" is often the best answer for what system will work best for university networks and data security. Centralize where practical for efficiency's sake, but diversify and localize where necessary, both without losing sight of the greater mission of higher education.

Davi Ottenheimer September 21, 2006 2:59 AM

“Increased security always comes at the expense of openness, and openness is a crucial value in academia.”

This is perception, not reality.

Communications can be far more open, for example, if the end-points know that they have strong confidentiality/trust established.

I would even say that academia is really only open because they have successfully established a reasonable level of security. It always bothers me when people equate security only with roadblocks, and not the brakes on their car (e.g. that enable them to drive faster).

antimedia September 21, 2006 5:43 PM

Security at a modern university is being driven, to some degree, by outside forces, primarily regulatory ones. That will change the culture, by force if necessary, without the IT department having to do anything but shrug its shoulders and say, "The government requires it."

To answer an implied question – using MAC address restrictions is much like door locks. They keep honest people honest. We have other ways of determining who you are when you break the rules.

Essentially, Bruce is correct. Security at universities will be (and already is) implemented in a top-down manner. The researchers won’t like it, but in the end they won’t have any choice.

The choice is simple – yesterday’s “freedom” and open access == closed doors due to too many serious compromises. “Restriction” and controlled access == you still have a university to teach at.

To echo a previous commenter, the concept that secure communications and "freedom" cannot coexist is a widely believed canard. "Doing" secure open access is simply a different way of approaching the same issues. Some trivial examples: you can still connect to your shell account, you just have to use ssh instead of telnet to do it. You can still read and send email from anywhere in the world, even on sabbatical. You just have to use IMAPS and SMTPS with SMTP AUTH.
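As a concrete (if simplified) illustration, with invented hostnames, the secure equivalents are a command or a client setting away:

    # Sketch with made-up hostnames: secure replacements for the old habits
    ssh alice@shell.example.edu                      # instead of telnet
    openssl s_client -connect mail.example.edu:993   # IMAPS (IMAP over TLS)
    openssl s_client -connect mail.example.edu:465   # SMTPS; the client then authenticates with SMTP AUTH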

It’s readily apparent that the internet is becoming more secure every day. When’s the last time you heard of a virus outbreak? The predominant attacks today are application attacks precisely because the edges and OSes have become more secure through active efforts to secure them.

A. B. Normal September 21, 2006 6:00 PM

Coming from a DoD environment to academia was a culture shock for me, especially since I’ve spent the last 20 years in Information Security. Top that off with being at a medical research university, and you have the recipe for a migraine. In other words, I’m an InfoSec professional.

One commenter mentioned FERPA (applicable to all universities), and we have to deal with HIPAA as well, which includes its own requirements for privacy and security. As mentioned above, we also do medical research, involving millions of dollars a year in research grants coming into the university. Much of the money coming in is from federal grants, requiring the university and the individual recipients to agree to and follow federal guidelines and laws. We periodically have to remind the research faculty of this fact when they want to make changes to systems and the network that put the entire university network at risk. We also remind them that, if their intellectual property goes public before it can be patented, all the money going into the research will have been wasted, since it won't then be patentable (is that a word?).

In the last several years, we have made large strides toward improving the security posture of the university, and we’re proud of the work we’ve done. A great deal of that success has been because of the emphasis my office puts on the education of our customers (faculty, staff, and students), which is low-cost, high-return in terms of results. Part of that education has gotten easier in the last year with all of the news reports of data breaches (see privacyrights.org as a good example). The long and short of it is we tell them that they do not want to be the headline on the morning paper or the lead-off story on the six o’clock news. Research grants, jobs, and entire careers have been ruined due to data breaches at universities, and those that say that information wants to be free and that decry security on campus need to remember that it could happen to them.

Anarchy? No. InfoSec fascism? No. There has to be a balance, and I feel that Bruce's approach is sound and must be tailored for each institution. There is no one-size-fits-all fix for security.

sutireme September 22, 2006 10:31 AM

“Increased security always comes at the expense of openness, and openness is a crucial value in academia.”

This is one of the great myths among some in higher-education circles, one that I deal with constantly. But in fact, the concepts of security and openness are not mutually exclusive. The truth is that increased security usually comes at the minor expense of inconvenience, much like adding deadbolts to the doors of your house, which increases security but doesn't prevent reasonable access. It just makes it a little less convenient.

The “openness” concept was never meant to imply access to everything, no matter how personal, private, confidential or damaging.

Great article, Bruce. Thanks.

Jim Dillon September 22, 2006 11:02 AM

Bruce,

You have the landscape down well, and the solution you suggest is largely what's being attempted here. There is one big, ugly complication, and that is the number of employees who truly need access to the central "sensitive" data!

I did tests here on one of our nameless but highly important central systems, and I tracked the data out through every interface to see where it went. I then took those secondary repositories and followed them one level farther. All the way through, I harvested the ACLs for the various data sets and created long lists, which I normalized against a similarly long list of ALL administrative employees. That group is roughly somewhere between 1/3 and 1/2 of all university employees, some 8k to 9k persons. Of these, over 83% were identified as having access to at least one of these data sets (the first 3 tiers of data), and I only followed the 3rd tier into 7 of over 3,000 organizational units! Ultimately, the need for academic records (for creating graduation lists, mailing lists for fellowships, and one obscure but "educational record"-tied task after another) extends to many if not most university administrative and educational positions.

There was one assumption: that all professors, by virtue of having classroom information, had class rosters (at the time the ID was the SSN), and that these were of course FERPA-protected pieces of sensitive information.

So your nice design concept for protection has a problem in that nearly everyone is used to getting at the sensitive data in the middle!

We’ve resolved a lot of the problem by eliminating sensitive data as an identifier (no more SSNs!) as a result, and that probably reduces this problem by 70 percent (just a guess), but there are still a vast number of uses for protected educational data going to a vast number of users. Trying to sort this down to some reasonable or minimal list of centrally controlled access to more secure network segments is going to be a whopper of a chore, as you have to carefully consider the needs of thousands of users with squishy job descriptions working with some of the most highly “sensitive” personal information around due to the many regulations that apply here.

The sum of all this is that I agree with your plan in concept, but the execution of it is a major interruption of the normal business flow of an institution, and the process redesign and maturation of job roles required to accomplish it are a tall mountain to scale. I sometimes wonder if it is truly possible to reach the goal, given the contradictory objectives and goals of the various parties involved.

Think about it: A student wants a credential to be public and certified in order to land a better job. They want and need their student information (which has to be identifiable) available to any potential employer. BUT the privacy advocates and even the same student group do not want the same private information being distributed to anyone else. So you have a bunch of shared secrets in a bunch of hands, and the result is obviously not that secret of a secret. This is the challenge facing Higher Ed – the contradiction in public policy for privacy and the great need to make public much of what is required to remain private.

Thanks for the article; I think you captured the environment well. I hope this adds a little flavor to understanding the core problem.

As to the "Academic Freedom" arguments against your take, they are bunk, an excuse not to be diligent in meeting personal responsibilities, made by people who seem to believe that their freedoms are more important than those of their paying constituents and stakeholders. There are numerous ways to establish open communications with peers outside a university perimeter while following the campus's security guidance; they just aren't always as convenient as logging on to AOL and using instant messenger. Those who make these arguments are entirely uninformed or simply in denial about the responsibilities that come with freedom of any sort, even the freedom of information exchange. Such a freedom cannot come at the cost of denying others their equally valued rights to privacy and protection from irresponsible handling of their information. The "Academic Freedom" banner used in this fashion is amazingly similar to the logic used by those around the world to support cultural genocide and class and cultural distinctions that represent most of the lowest points in human history. "The pursuit of my research is so important that nothing else should interfere…" is the attitude that I find so often accompanies this. Do anything to bound the excesses this produces and the person merely moves on to another, more susceptible or gullible school. Dishonesty, fraud, cheating, and any number of failures to follow policy or regulatory guidance soon follow this attitude. Bah, it's usually over the denial of FTP as a protocol and the "overwhelming" inconvenience of implementing SFTP or some such rot. As stakeholders we must not let those using "academic freedom" as a bludgeon do so at the expense of the safety and security of our family members, state/federal tax dollars, and the basic principles of privacy and due diligence that are reasonable in any community.

Best regards,

Jim Dillon, CISA, CISSP
IT Audit Manager
Large State School

Bob Gezelter October 17, 2006 10:04 PM

I read with interest your recent Counterpane article “University Networks and Data Security”. I found the essay extremely accurate as to the realities of the campus environment.

The corporate world presents an equally challenging security policy problem. Different organizations within a corporation, and different groups within each organization, have dramatically different needs and obligations with respect to security. Yet they still require access to a multitude of resources, each with different security obligations. Conceptually, this is no different than the challenge faced by a university.

In fact, your description of the need for multi-level security policies is the only solution for complex, hierarchical environments, as I noted in the Internet Security chapters of the Computer Security Handbook, 3rd Edition (1995) and 4th Edition (2002, outline at http://www.computersecurityhandbook.com).

Thank you for reminding people that it is not a matter of finding a single policy that fits all, but structuring a combination of policies and network topologies that satisfies the needs and obligations of all.

John Baines October 26, 2007 7:52 AM

Security should follow the sensitivity level of the data, as Bruce suggests. In the past we have become too concerned with protecting networks or servers or other hardware/software components. We need to spend more time evaluating and controlling what sensitive data rides on what parts of (segmented) network systems, down to laptops and USB drives, and then make sure protection is appropriate to the sensitivity level of the data. In this way we can give people incentives NOT to store sensitive data, by making them live up to more stringent security requirements if they do… And we can focus on the areas where the sensitive data is really needed and can be effectively protected.
