The Security Risks of Not Teaching Malware

Essay by George Ledin on the security risks of not teaching students malware.

Posted on August 25, 2011 at 6:22 AM • 17 Comments

Comments

Danny Moules • August 25, 2011 6:47 AM

Given a lot of courses can't even find time to teach hardware fundamentals or security essentials I'm surprised he thinks they should focus on one specialist area. I think he underestimates the challenges educators face in providing a broad but deep knowledge base to students. He seems to be basing his piece on fighting the (perceived) argument that malware education is bad without assessing the actual benefit of his approach compared to other gains that could be made.

It pretty much boils down to: When we can't teach computer science students basic security knowledge and methodologies, why should we have them focusing their attentions on a particularly specialist area of study? There are much more important fights to be had - and whilst it's nice that someone is considering the possible benefits of focusing on a narrow specialty, it shouldn't barge other considerations out of the way which, arguably, provide even more benefit.

filosofis • August 25, 2011 7:15 AM

I'm studying Computer Security @BTH, and one teacher discussed the pros and cons of the education being 100% theoretical, focusing on cryptology and general math without ever touching a computer. His point was that practical knowledge goes out of date very fast.

But what use will the world have of a security engineer with no practice at a computer?

Clive Robinson • August 25, 2011 7:19 AM

I'm with Danny on this one; teaching time is scarce and getting scarcer with the lack of finances.

Also, the students are going to forget most of it within a short period of time.

There are other factors (i.e. employers) which will make it even less likely to be used.

I'm increasingly of the view that for 99.99...% of code cutters, security is something they neither understand nor want to understand, because at the end of the day it does not feature in their current or future remuneration or promotion.

Thus there are two basic solutions:

1. Somehow make it of interest to code cutters.
2. Make the code they cut inherently safe.

I cannot in all honesty see the first happening, for many, many reasons.

So I'd look at the second option. That is, get them to use a scripting language that has been specifically written with security in mind.

This will have two benefits. Firstly, scripting languages tend to be very high level, so the number of lines of code required will be small (thus considerably lowering the number of bugs in a program). Secondly, it reduces the need for programmers to dig deep into the low levels where, historically, many of the attacked bugs have originated.

Although scripted languages are said to be slow, it really does not matter much, and the upside for management should be greater productivity and lower maintenance.
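A small sketch of the memory-safety half of that argument (my own illustration, not from the comment): in C, copying past the end of a buffer silently corrupts adjacent memory, which is exactly where classic exploits live; in a memory-safe high-level language the same mistake is trapped at runtime instead.

```python
# Hypothetical helper showing why high-level languages remove a whole
# bug class: an out-of-bounds write raises an exception rather than
# silently clobbering adjacent memory the way a C buffer overflow does.

def copy_into(buffer, data):
    """Copy data into a fixed-size buffer, byte by byte."""
    for i, byte in enumerate(data):
        buffer[i] = byte  # IndexError here, not memory corruption
    return buffer

buf = bytearray(8)
copy_into(buf, b"short")  # fits within the 8-byte buffer: fine

try:
    copy_into(buf, b"far too long for an 8-byte buffer")
except IndexError:
    print("overrun trapped at runtime")
```

The bug is identical in both worlds; the difference is that the safe runtime converts it from an exploit primitive into an ordinary, diagnosable failure.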

stevek • August 25, 2011 8:14 AM

@Clive

First, let me take this opportunity to thank you for all the insights you provide to threads on this forum.

Next, the best way I've found to manage is by manipulating incentives. People can be counted on to do what serves their interests, and if I can align them with my interest (the company interest) then I'll get far better compliance.

Right now, we reward programmers for timeliness and ease of use. Managers will scold for buggy code or complex workflow, but not for security breach. The solution lies in making security important to the coders. When a company has to spend time fixing a discovered vulnerability, track back to the coder and bring it up at annual review. Run a zero-defects program with rewards for coding groups that show trend improvement.

There's a lot of research showing that autonomy is a motivation (search for "Dan Pink motivation" and watch the TED or RSAnimate video you find). Reward good coding (saving time on fixing bugs) with something like 20% time.

There are other ways to motivate, too, but the message here is that security won't get built in until the coders and their managers want to build it in.

Scotty • August 25, 2011 9:21 AM

About 90% of the people in our IT security group can't write a "Hello, world!" program in any language, let alone understand anything more complex.

Harky • August 25, 2011 9:47 AM

@Clive:

"So I'd look at the second option. That is, get them to use a scripting language that has been specifically written with security in mind."

While I agree this could have largely positive effects, there are a couple of potential downsides that should be considered as well:
  1. Any bug/vulnerability found in the scripting language would have a much larger scope than a single product
  2. For applications where speed and efficiency are critical, the pool of skilled candidates for the job would be much smaller (although this may not be a bad thing, if they are also of better quality)

@filosofis and @Clive: I am still a big believer in teaching the theory and fundamentals. Unfortunately, as mentioned by both of you, employers care more about practical experience. In my opinion, a person with strong fundamentals and little practical experience will produce better results in the long term. Unfortunately, most employers seem to be more concerned with the short-term benefit of the person who knows how to get things done in language xyz on day 1, even if they are more limited in growth potential. Fortunately, there are still SOME employers who will invest in the long-term scenario.

Clive Robinson • August 25, 2011 11:17 AM

@ SteveK,

First off, thank you for making my ears go pink 8)

Secondly, I agree with you about incentives being the best way to get things achieved (the old saw about "A willing man is worth five pressed men" comes to mind).

However, like Quality, I fully expect Security to show dividends long term, provided it has buy-in from the very top all the way down, and that it's a process started before the product is even thought about and continued until the product is fully retired.

But like Quality, Security is not something that shows an immediate return. Nor does it have a simple ROI calculation, and it does not "keep up with the competition". These are all negatives for those with a short-term outlook on walnut corridor.

Even worse, if you ask lots of people in the industry you get the same sort of answer: security in software is like safety features in cars. Everybody says you should have them, but as customers they won't pay for them, and the reality is they will buy a car on power, performance, top speed, or just about any feature other than safety.

So security is a difficult sale to management at best, up against feature buzz and zip.

Hence part of the idea around the scripting is to get management buy-in on less production time and less maintenance, which are the biggies cost-wise, have a fairly easy ROI to calculate, show a quick return, and keep up with the competition on delivery times.

Also, some of us remember "Rapid Prototyping" and how the "script version" got end-user buy-in and delivered a working prototype very quickly. The "C++ version under MFC", however, usually looked nothing like what the end user had agreed to, was buggy as hell, usually late and incomplete, and often ran slower than the script version...

This issue, I found, was caused by the wrong incentives from management (i.e. lines of code per day and extra flashy features).

So it's kind of a catch-22: management won't go for security incentives unless you can prove the benefit immediately or in the very short term, and you can't show the benefits in the short term because it takes time for the process to work its way through the system and show its real value to the bottom line over the entire life cycle.

I'm sure it's something Nick P will comment on, because it's the same issue as with using formal methods and provable verification of design etc. Even the "clean room technique", which has been shown (counter-intuitively) to work, rarely gets a look in.

I used to work with a developer who used Z. He took twice as long as the gung-ho code cutters to produce his first-cut code, but he only took half as long again getting to final code, whereas the gung-ho mob would go round the wash-rinse cycle five or six times, taking maybe six times as long to debug as their initial code cut. Oh, and his code never came back from test, whereas the gung-ho code used to come back every time for at least another wash-and-rinse cycle.

@ Scotty,

"About 90% of the people in our IT security group can't write a "Hello, world!" program in any language, let alone understand anything more complex."

So what do they do all day?

@ Harky,

If I read your point 1 correctly, the same issue applies to bugs in the compiler or assembler of other program-writing methods.

It's just that the bugs will be higher up the stack, as it were. Now this might or might not actually increase security, depending on how the scripting language is designed to work (which is a chat I've had with Nick P in the past on this blog).

Your point 2 is correct, but it depends on how you view/do things. For instance, if those "best of breed" programmers are used to develop the components of the scripting language, then their skills get spread across many projects at no extra cost.

When I thought about it originally, my starting point was: "How do I leverage the skills of the best secure-code writers across all projects, even though there are only enough of them to do one project in a thousand?"

The result was to think about a scripting process with a hypervisor running in tandem, where the hypervisor holds a security-signature prototype that an ordinary coder effectively fills in. The hypervisor would "watch" the script and cause an exception if the signature bounds were broken.
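To make the "watcher" idea concrete, here is a toy sketch (all names and the signature format are my invention, not Clive's design): the "signature" is just an allowed set of function names plus an operation budget, and a trace hook aborts the script the moment either bound is broken.

```python
import sys

class SignatureViolation(Exception):
    """Raised when the watched script breaks its security signature."""

def run_watched(func, allowed_calls, max_ops):
    """Run func under a trace enforcing the signature bounds."""
    allowed = set(allowed_calls) | {func.__name__}  # the script itself is allowed
    ops = 0

    def watcher(frame, event, arg):
        nonlocal ops
        if event == "call" and frame.f_code.co_name not in allowed:
            raise SignatureViolation(
                "call to %r not in signature" % frame.f_code.co_name)
        if event == "line":
            ops += 1
            if ops > max_ops:
                raise SignatureViolation("operation budget exceeded")
        return watcher  # keep tracing lines and nested calls

    sys.settrace(watcher)
    try:
        return func()
    finally:
        sys.settrace(None)

def well_behaved():
    total = 0
    for i in range(5):
        total += i
    return total

def runaway():
    while True:
        pass

print(run_watched(well_behaved, allowed_calls=set(), max_ops=100))
try:
    run_watched(runaway, allowed_calls=set(), max_ops=100)
except SignatureViolation as exc:
    print("stopped:", exc)
```

A real hypervisor would of course sit outside the language runtime rather than inside it, but the division of labour is the same: a security specialist writes the signature and the watcher once, and ordinary coders write scripts that run inside those bounds.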

With regard to the fundamentals, I'm right alongside you in that battle, and if you look back on this blog you will see I've said so many times.

The problem starts with the education establishment. Few are truly independent of industry, and when a large employer in the area starts dropping hints and money about what they want to see in graduates, it's a brave organisation that ignores it. Partly because it's in their interests to go along with it, but also because it's sort of in the graduates' initial short-term interest (i.e. they get their first job).

However, "teaching the tools" rather than "teaching the fundamentals" is a very bad idea in the medium to long term. As you note, the tools go out of fashion, and if the student has no fundamentals to fall back on then they have to start again from the very bottom, which is not good for them, nor is it good for industry.

One thing that often annoys me is "teaching to program in X" rather than "teaching to program in general". Any programming language has certain features that it excels at, and teaching a student only how to get the best out of language X is doing nobody any favours, short term or long term.

A piece of advice I read somewhere is to "learn a new language every year", not because you have any intention of using it, but because, like travel, it broadens your perspective and understanding.

Or to put it another way: there are two parts to speaking any human language, the mechanics of speech and the peculiarities of a specific language.

If you learn only one language when young, you might, as with certain languages, not learn some of the fundamentals of speech. I'm painfully aware of this when many people who learnt to speak English after they were about ten try to say my name (the C of Clive, and the R of Robinson).

However, I have likewise lost out because, like most Westerners, I'm not "pitch perfect", whereas being "pitch perfect" is almost a requirement for some languages.

Mind you, fundamentals can be hard to learn: it takes about four years to learn to write neatly, about another three years to do technical drawing, and a lifetime to be able to draw artistically. All of that involves what most consider the most basic of tools, the pencil/pen.

Nick P • August 25, 2011 12:08 PM

There are two points that have come up that I totally agree with. The first is that IT education is barely happening by itself, much less security education. Many computer users still don't know much about how a computer operates. Sadly, the same can be said for many IT guys. It's hard enough to get IT managers and workers basic knowledge of security. And this guy wants us to dedicate the scarce resources to one subfield? I think it's better to teach them IT basics, then security principles, and then give them information about common threats, like malware.

The second point I'd like to hone in on is incentives. I agree with a previous poster that there are few incentives to produce secure software, and companies won't get it until there is a price attached to reduced vulnerability rate. Development methodologies like Cleanroom and Fagan's Software Inspection Process give reliable measures for defects, which may include vulnerabilities. Cleanroom is statistically certifiable. So methodologies like these make it easier for a company to attach a financial bonus to a low defect rate.

I also agree with Clive's idea of building a platform with security baked in that allows seamless construction of systems without certain types of vulnerabilities. Many have been promoting this kind of language- or platform-based security for years. JIF/SIF and the OCaml VM runtime are examples of such platforms. WinDev does something similar for RAD, but I haven't evaluated the security of generated apps. The newer OCaml-based Opa language claims to generate safe & secure web applications automatically from a high-level language.

So, these do exist. We just need to get more people using them to increase the number of libraries and development tools. Then, they will prove to be attractive platforms that let people get the job done with plenty of safety/security for free.

Brandioch Conner • August 25, 2011 1:50 PM

@filosofis
"His point was that practical knowledge goes out of date very fast.

But what use will the world have of a security engineer with no practice at a computer?"

The simple answer to that is that once you understand the theory, you ask the question "how do I do X in environment Y".

If you focus on the practical and you end up in a different environment the question is "what do I do in environment Y".

I can give you lots of anecdotes about people in IT who, after a penetration test, only fix the specific issue in the specific instances that allowed the tester to get access.

Richard Steven Hack • August 25, 2011 2:08 PM

I don't know where to start on this mess. :-)

First, "this guy" doesn't require curriculums to "concentrate on one specialty". He's arguing that the time you spend on AI courses in a general computer science program might be better spent on malware research, so that someone - anyone - in computer science graduates with at least a nodding acquaintance with the subject, as opposed to the utter ignorance they display now.

That said, I see a lot of hype in this article.

He starts out with hyperbole such as:

"Before attacking the U.S. on Sept. 11, 2001, terrorists rehearsed their assaults on a smaller scale at the World Trade Center and in several more distant venues. Since that infamous date, paralleling physical attacks, cyberstrikes of increasing severity have been carried out against many targets. A few small nations have been temporarily shut down. These attacks are proofs of concept waiting to be scaled up. I hope cybersecurity is on governments' front burners. We ought not wait to react until a devastating cyber-onslaught is unleashed upon us."

He's watched "Live Free or Die Hard" too many times...

Or: "Inadequately capable of defending ourselves from being burgled, we are easy targets for evil geniuses plotting fresh hostilities."

Thanks for the compliment... :-)

"We cannot protect ourselves from what we do not know."

That part he gets right.

"The reason we cannot solve the malware problem is simple: We don’t have a theory of malware."

I label this complete nonsense. It's based on the notion that if you study malware like you study physical objects, you'll eventually know everything there is to know about it and be able to deal with it as you please. While this may be technically true, I'd submit he doesn't believe - based on his hyper-alarmism earlier - that the world has another couple centuries in order to develop a "Grand Unified Theory of Malware".

And what happens when said "GRUNT" runs into my meme? :-) Which it will because "malware" is just human ingenuity directed to malicious ends.

I think he IS correct that there should be proactive research into the production and distribution of malware. This at least removes the ignorance on the part of the afflicted party, even while it improves the knowledge of the subject of the attacking party (which it will, of course.) The military does this and IT security researchers do this by examining code and attempting to produce exploits as part of "vulnerability analysis". So starting an actual, well-funded research program for this is a no-brainer.

He correctly notes that the problem is that funding the production of basically harmful artifacts is ruinous of reputations. But we seem to have no problem with that when it's the military-industrial complex producing new and ever more lethal nuclear weapons, or new fighter planes or other new - and very expensive - weapons which either 1) probably won't work right, or 2) will never be used, right?

Finally, I'd say he's being alarmist in terms of the progress of events. Some years back there probably were almost no courses on computer security. Now I suspect almost every college - even community colleges - has courses in computer security. Some of them, like San Francisco Community College, have multiple courses in computer security leading to an AS specialty.

Most of them don't teach malware analysis and design per se, however, and a separate course in that might be useful. Several universities already have "malware analysis" courses.

So over time as malware becomes even more pervasive, I would expect schools to develop more malware-related courses - assuming they can overcome the "you're training hackers" theme, as many have done with "ethical hacking" courses.

tommy • August 25, 2011 10:29 PM

Easy. Say it with me:

"Software liability. Software liability. Software liability."

(for damage demonstrably caused by improper coding allowing the exploit that caused the damage, not for some (l)user clicking unknown links in spam e-mails.)

When the vendor is liable, they will make darn sure that security is baked in, and one part of doing that is hiring security-aware programmers and reviewers, and/or providing remedial training if necessary. Plus, outsource to an independent pen-test firm and see if they can find an exploit *before* you release the product.

This is "incentive" in the Big Picture. Anything less doesn't address the root cause, "externalities". Bruce has blogged this latter factor many times, so I'm not claiming credit. It's just more powerful an argument when a cryptographer and an economist agree.

tommy • August 25, 2011 10:34 PM

@ Scotty:

"About 90% of the people in our IT security group can't write a "Hello, world!" program in any language,..."

@echo off
echo Hello, world!
pause
exit

When do I start work?

Moderator • August 26, 2011 5:03 PM

Nick, please keep your comments focused on security. In particular, please don't post videos that aren't about security.

CP • August 26, 2011 6:07 PM

Holy overthinking academic, Batman.

From the article: "The reason we cannot solve the malware problem is simple: We don’t have a theory of malware."

Here's your Unified Theory of Malware: Malware *is* software.

It's actually *GOOD* software, in that it is designed to operate with a small footprint/overhead, have a minimum of dependencies, and be resilient to failure or interference. Any CS student who isn't blown away by the elegance, economy and cleverness of a program like Stuxnet or Conficker should just hang it up.

So instead of the wailing and gnashing of teeth, how about you use malware as a model for how to write good software? Today's programmers are so abstracted away from the actual underlying code by IDEs and SDKs and frameworks and business logic and other nonsense that they don't actually know what's going on under the hood. They aren't taught the techniques which make malware possible, all of which go to the root of their selected discipline.

But the good Doctor asks, how do we do this without all that "moral hazard"?

Simple: Don't teach malware. Saying you're going to "teach malware" is overly provocative and sensational, and ultimately counter-productive. Instead, teach the programming techniques that malware uses to be effective software.

Start by removing the value judgement that comes from the notion that malware is somehow different because it does things to your computer you don't like, or that aren't advertised on the box. I have paid actual money for plenty of 'legitimate' software that has done the very same thing (DRM, hardware fingerprinting preventing license portability, phoning home with usage data, etc.)

Also, don't let people get hung up on the 'why' of malware. Software is written to accomplish an objective. Whether that objective is to erase every Office file on your hard drive or draw a pretty pink pony is irrelevant. All an upper-class CS major should be concerned with is how to use the techniques they have learned to accomplish the task. Ethics is for freshmen.

* Walk students through the CWE and look at the HUNDREDS of ways that poor exception handling allows malware to thrive.
* Show them how to program in a way that properly uses access control, permissions, least privilege and least functionality instead of running every-damn-thing as SYSTEM or root.
* Teach sandboxing and chrooting technologies that can be used as a means to combat and contain malware.
* Teach kernel hooking, file/registry hiding, source code obfuscation, run-time modification, covert channels, and other things malware uses to get the job done.

All of these are very useful advanced programming skills and all have legitimate uses.
And the more people know and understand these techniques, the better chance you have of some brilliant students actually confronting and beginning to solve the malware problem.
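As a concrete (and entirely toy) illustration of the least-privilege and containment items in the list above, here is a sketch of a capability-style file accessor; the `ScopedDir` name and design are my own, not from the comment. Instead of giving code ambient authority over the whole filesystem, you hand it an object scoped to one directory and refuse anything that resolves outside it.

```python
import os

class ScopedDir:
    """Grants read access only within a single directory tree."""

    def __init__(self, root):
        # Resolve symlinks up front so the containment check below
        # compares canonical paths.
        self.root = os.path.realpath(root)

    def read(self, relpath):
        full = os.path.realpath(os.path.join(self.root, relpath))
        # realpath collapses "../" tricks before the containment check
        if not full.startswith(self.root + os.sep):
            raise PermissionError("%r escapes the sandbox" % relpath)
        with open(full, "rb") as f:
            return f.read()
```

The same containment idea is what chroot jails and sandbox runtimes enforce at the OS level; doing it in application code is no substitute for those, but it makes the principle visible to students in a dozen lines.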

Dave F • August 29, 2011 10:59 AM

I am strongly in favor of setting up a strong program of malware production at the university level, with follow-on master's, PhD, and post-doc programs. This program will inevitably be followed by excellent projects such as the aforementioned Grand Unified Theory of Malware. With such underpinnings, can Six Sigma certification be far behind? ITIL for hackers? The prospects for all areas are limitless. Except one: productivity. We can expect the productivity of malware to suffer. Next step: Russia and China will be exporting the production of malware to India. When a worm instructs the user to call 1-800-KILL-BOT, we will have achieved success. The Amish virus will now be the gold standard worldwide, and our computer systems will finally be safe!

Otter • August 29, 2011 2:10 PM

You guys are Michelangelos ranting about the guys who somehow stumbled into jobs working for sign-painting corporations. They know the words and they can paint inside the lines; but they will never paint the Sistine Chapel, and their bosses don't want them to.

Vles • August 29, 2011 3:20 PM

Here's a computer scientist who I think would agree with teaching malware. It would slot in nicely with his point 5a, "break the DMCA". Experience is everything..
Although this would be more for older kids, it could still be fun for the whole family!

http://www.ted.com/talks/...

