Schneier on Security
A blog covering security and security technology.
May 17, 2006
Online Student Exams
I'm sure this is a good idea, but I wonder when the first case of cheating-by-rootkit will occur.
Posted on May 17, 2006 at 7:06 AM
Not least from entrepreneurial students, maybe using the paypal donation system?
My company does automated exams, and I happen to be responsible for the test engine.
It is a hard thing to secure such a system; things get easier when the test is held online, but there are limited things you can do with just a web interface: multiple choice, fill-the-gaps and essay type questions (our system works off-line).
I was boggled to discover that some law schools use blue book software for exams that locks down the host PC. (I believe it is this product; if not, it is something similar.) It is all well and good, except (of course) the sense of security is bogus: a malicious student (or a Linux or Mac user) can just slip VMware, Parallels, or QEMU underneath the 'locked down' OS and trivially circumvent the lockdown. As a Linux user who will have to use VMware to run this at all when I go back to school in the fall, I'm sort of dreading my first conversation with a professor about this: 'Yes, I can run it, but yes, I plan to circumvent all the security features in order to do so. Are you OK with that?' Frankly, I'll probably chicken out and just borrow a friend's box, but either way it'll be less than ideal.
I'm sure a lot of students will be able to cheat using this system. On the other hand it isn't very hard to cheat using the "old" system either. The real benefit must be to save time by not having to correct the exams manually.
"I'm sure a lot of students will be able to cheat using this system. On the other hand it isn't very hard to cheat using the 'old' system either."
You're right. While techies will focus on new ways of cheating enabled by online testing, most of the old ways of cheating are still equally applicable.
I fear the first anti-cheating rootkit for online tests will arrive quite soon.
Many online certification exams from vendors still have obvious web exploitation flaws. How long until people start cheating the new system?
It's not covered in the article, so maybe the students take the on-line exams at a testing facility...but if not, who's to say the student taking the test is the one turning it in? Students have paid ringers to take their exams for years, so proctors often check ID cards. How do you verify it's really Johnny sitting at Johnny's computer taking the test?
Furthermore, how can you prevent a number of students taking the test serially or simultaneously and sharing questions and answers? Or cheat sheets? You could use a large pool of potential questions, but that would place an even higher burden on instructors preparing the test, and many are already overworked.
Given the details I know right now, I'd trust this system about as much as those online colleges that will award me a degree based on my "years of experience".
As a former computerized test admin (http://www.prometric.com/Default.htm), what I would like to know is how this sort of testing will be different from the testing given at Prometric test centers. If the machines giving the tests are in a controlled laboratory, as is the case with most Prometric test centers, then this sort of testing has been done for years (think GRE, GMAT, the various NASD series tests, etc.)...
A student who knows the material well enough to earn a B grade could improve his standing with a man-in-the-middle attack on every other student, lowering their performance to a worse grade, say by randomly changing a third of true-false or multiple-choice answers.
The attacker could poison essay items by replacing the text of most of the victims with copies of the text of the rest of the victims, producing convincing evidence of widespread cheating, or at the very least plagiarism. The culprit will be the only one with unique answers: obviously the only one not cheating.
I run a computerized testing server based on MapleTA for a department of mathematics. Overall, the risk of cheat-by-rootkit is quite low, as the tests take place in a proctored testing environment where students are checked in by ID card and a proctor monitors the area, the client computers are restricted by the routers to only be able to reach certain ports on the testing servers, and the servers themselves are Linux boxes that run fairly minimal services and have console access restricted to the sysadmins and a few instructors who write new libraries for the system.
Given the complexity involved in a rootkit specifically designed to attack our install location, it's not something I worry about.
The significant threats, as MapleTA has an absolutely abysmal security model (plaintext passwords and user data stored on disk, no encryption on network connection), are loss of private student information, including SSNs, or simple password breaking. Why bother cheating on the test when you can steal an instructor's password from off the desk and manipulate your grade from a non-proctored environment? Because instructors want to be able to check or manipulate grades from machines all over campus, I can't lock down the instructor interface to the system by IP.
This isn't exactly a new threat, though, except that as we put more and more students on a single server the likelihood of one of them altering a file on the "teacher's computer" increases as well.
The only direct attack on the server OS that I worry about is a DoS attack during testing, the digital-age equivalent of the pull-the-fire-alarm trick. I wanted test score storage on servers separate from the servers that run the tests themselves, so the test servers could be kept inside the isolated testing-room network, but I was unfortunately overridden.
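Incidentally, the plaintext-password storage criticized above is cheap to avoid even with just a standard library. A minimal salted-hash sketch in Python (this is illustrative only, not MapleTA's actual code; function names are invented for the example):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, rounds=100_000):
    """Return (salt, digest): only a salted PBKDF2 hash ever hits the disk."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per account
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return salt, digest

def verify_password(password, salt, digest, rounds=100_000):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
assert verify_password("hunter2", salt, digest)
assert not verify_password("wrong", salt, digest)
```

With only the salt and digest stored, a stolen database (or a password sheet left on a desk) no longer hands out working credentials directly.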
I have done programming exams on a PC in college before where there was Internet access. Technically, they could not have stopped us from using the Internet or sending messages to each other during the exam. However they did monitor the network traffic for cheaters (we were logged in and had been assigned PC's). Nobody did cheat - so I'm not sure if it was a sucessful system. I'm also not sure what monitors they were using either.
It seems that first cheating by rootkit has already occured. On brainbench.com once there was topmost result on Network security which has 100% of answers correct. Much better than best result on any other topic. It is a rumor that guy just breaks into their system and being a honest hacker set himself highest grade on the topic which was closest to the thing he has actually done.
I took one online test, in a computer lab not specifically laid out for exam taking.
The easiest way to cheat in that case would still have been the old fashioned one of peeking at someone's exam.
The difference was that their exam was on a big, glowing, vertical screen rather than flat on a desk. The number of exams available for peeking was increased considerably: a cheater could have read about three or four answers and picked among the best ones (or applied a voting system to true/false questions: 3 out of 4 of my classmates think this is true...)
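The "voting system" described here is just a majority vote over the peeked answers. A toy illustration (hypothetical, purely to show the idea):

```python
from collections import Counter

def majority_vote(peeked_answers):
    """Pick the most common answer visible on the neighbors' screens."""
    return Counter(peeked_answers).most_common(1)[0][0]

# 3 out of 4 classmates think the statement is true:
assert majority_vote([True, True, False, True]) is True
# Works the same for multiple-choice letters:
assert majority_vote(["B", "B", "A", "C"]) == "B"
```

If each peeked answer is independently right more often than not, the majority is right even more often, which is exactly why glowing vertical screens make peeking worse than paper.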
Wow. I'm not sure whether to applaud your insight or be appalled at your deviousness! ;-)
I develop and administer an online testing system at the University where I work. One trick we use which is very effective at increasing security is to reboot all the testing machines into custom, locked-down Knoppix Live-CDs. This means we have absolute control over what resources they can or cannot use; what network machines they can log into, etc.
After that, it's just a matter of preventing them from sneaking in notes the traditional way...
Some friends of mine built a web-based gradebook server for my university. It was a Linux box with MySQL, etc. Professors used it to record grades and generate reports. It worked relatively well for a few years. Then, after we had all graduated, it got rooted and the data was molested. The professors went back to pen and paper.
1. This isn't anything new: universities have been running final exams and quizzes with WebCT for almost a decade.
2. These labs are very difficult to secure. After proctoring an exam, I took a look around the "secure" workstation and came up with over two dozen ways to communicate among the networked terminals. For example, although the Windows terminal had Explorer, the Run dialog, and local browsing in the web browser disabled, I was able to create a .bat file to run cmd and then used net send. I could also use the Open dialog box in Notepad to run cmd.exe to do the same.
Unless there is a proctor for every student and there is only one straight row of computers, online (networked) student exams will be very difficult to secure.
Interesting you should mention WebCT. At my university, I had several classes that used WebCT for quizzes and exams. We were expected to take the tests at home, on our own machines. There was no supervision or proctoring at all. The tests were explicitly "open-book, open-notes", but no attempt at all was made to prevent collaboration or Internet use. I was never sure if the instructors didn't think of it, or just didn't care.
It is part of the no student left behind program. If you can't catch them cheating, then they pass and they aren't left behind. Sorry.
Seriously, there are ways to prevent some of these types of cheating:
1) Random questions from large question set.
2) Timed response per question driven by server time.
3) Passcodes e-mailed to an account on file (and if it is at a university, they have an account) for each question.
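The first measure, a random draw from a large question pool, is simple to implement. A minimal Python sketch (names are hypothetical, not any particular vendor's system):

```python
import random

def draw_exam(question_pool, n_questions, student_id, exam_seed):
    """Serve each student a deterministic random subset of a large pool.

    Seeding on (exam_seed, student_id) means a page reload gives the
    same student the same questions, while different students get
    different draws.
    """
    rng = random.Random(f"{exam_seed}:{student_id}")
    return rng.sample(question_pool, n_questions)

pool = [f"Q{i}" for i in range(200)]          # 200-question pool
exam_a = draw_exam(pool, 20, "alice", 2006)   # 20 questions served each
exam_b = draw_exam(pool, 20, "bob", 2006)

assert exam_a == draw_exam(pool, 20, "alice", 2006)  # stable per student
assert exam_a != exam_b                              # draws differ
```

With a 200-question pool and 20 served per student, two neighbors share few questions, so copying an adjacent screen stops being profitable.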
But then again, I've always felt tests are overrated.. I mean, in the real world you collaborate and look up information all the time.. Why would you want to restrict that? Learning can be shown through conversation, not through random multiple choice tests.
Now, if they start doing web-cam driver's tests... then we might have an issue.
Having compiled an exam or two, IMHO I would say that a properly constructed examination (for most subjects, although certainly not all) has the property that access to information is not a significant component in achieving an optimal result.
Examinations should test knowledge, not information. Some of the harder exams I took in my mathematics education were in some wise open-book, open-note, or take-home. Access to additional information was marginally helpful, as the exam was testing meta-level knowledge: how do you apply information?
Now, obviously this isn't 100% true, especially at the lower academic levels where rote information must be absorbed in order to enable higher-level learning. People still need to memorize their times tables, for example.
The problem with many medium to higher level examinations (7th grade through undergraduate university examinations) is that knowledge testing is hard, and requires a lot of work on the part of the instructor, whereas information testing is easy, and requires something that can read a scantron to grade. Standardized examinations in particular are a really horrible way to test people for bulk knowledge.
Perhaps the use of online is misleading?
"A spokesman stressed that while a mouse would replace a pen for candidates, the other aspects of the exam would remain the same as normal. Only the candidate and the invigilator will be able to see individual screens, while the answers will be submitted directly to a secure SQA area."
From this I gather that the test is taken on campus, and not over the internet.
I wrote and maintain a web based assessment system that we use to give students assignments and tests.
We've taken some measures to protect tests - supervised labs, and students get different (semi-random) variations of the questions.
But the thing we emphasize with the "take home" web-based assignments is that it doesn't matter too much if they're insecure: the older method of paper-based assignments was much easier to cheat at and seems to have worked quite well for teaching for many decades.
Yes, students can sit around a computer and work together to all get full marks in an assignment. But they could (and did) do that with paper based ones too. At least this way we can give each student a customised assignment so they can't just copy the answers, they have to copy the *method*, or get a friend to do it for them.
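Per-student customization of this sort is easy to generate by parameterizing one problem template. A toy sketch (hypothetical, not the actual system described above):

```python
import random

def make_variant(student_id, assignment_seed=1):
    """Build a per-student variant of the same problem: identical
    method, different numbers, so copying a final answer verbatim fails."""
    rng = random.Random(f"{assignment_seed}:{student_id}")
    a = rng.randint(2, 9)
    b = rng.randint(10, 99)
    question = f"Solve {a}x + {b} = 0 for x."
    answer = -b / a
    return question, answer

q1, a1 = make_variant("alice")
q2, a2 = make_variant("bob")
# Same method (a linear equation) for everyone, but almost certainly
# different coefficients, so friends can share the *method* only.
```

The grader regenerates each student's expected answer from the same seed, so nothing secret needs to be stored with the assignment.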
The rootkitting threat in supervised tests to me is less plausible than, say, sneaking in a cellphone or a PDA pre-filled with formulae.
Which people do.
The real benefit, it seems to me, is the efficiency for the people who have to grade the massive amount of exams.
As I mentioned a couple of days ago with regard to voting systems, nation-wide/standardized exams should be studied thoroughly if anyone wants to understand the efficiency of massively scaled scoring systems.
Ah, nothing's perfect:
""My math score"—a 640—"kind of shocked me," says Smith, an 18-year-old senior who wants to study acting in college. "It was still OK, but I thought it wasn't my grade." It turns out he was right. Last Wednesday, five months after he took the exam, Smith received an e-mail from the College Board, which administers the tests, telling him that his SAT had been scored inaccurately."
Many, many years ago I wrote an "online" (LAN-only) student exam for an intro class at a large university. While we knew there was some risk/margin of error to be anticipated, the project goal was to reduce the existing load and margin of error from the tedious manual processes. It was common to find human graders prone to simple error from fatigue and stress (imagine hours and hours of A, A, B, A). Unfortunately, the more the grading was delegated to reduce fatigue, the more social engineering became a worry. So we reasoned that there would be a baseline goal for grading that would maximize accuracy in the first release rather than attempt absolute perfection. I suppose if we were sending a shuttle to the moon the standards for accuracy before launch might have been different.
We considered allowing students to review and dispute the results as one detective solution to the data integrity issues, except for the fact that this made the results a study guide for the next people taking the exam. Lots of trade-offs, but in the end the automation of the tests resulted in less time spent by staff and professors on course administration and more quality time actually spent with students...
Incidentally, students are often willing to experiment and game the systems, but I always found that logging was an easy deterrent. If you could demonstrate that you had reasonable visibility into someone trying to abuse a system, and this was clearly communicated as behavior that would cause them to forfeit the exam or even the whole class, then there was less "playful" experimentation. The determined attackers, well, we had other ways of dealing with them. :)
"The rootkitting threat in supervised tests to me is less plausible than, say, sneaking in a cellphone or a PDA pre-filled with formulae."
Same attacks different medium. I'm sure every teacher could give a dozen examples of how students have tried to cheat tests using paper, pencils and even calculators.
One of my favorite examples from a long time ago (you know who you are if you're reading this) was when someone asked their French teacher if they could use their calculator on the tests. The French teacher apparently couldn't imagine the calculator being any use, so she didn't object. Well, the calculator had a small graphing screen and the student had pre-programmed it with all the vocabulary...today this functionality exists on almost every mobile device, which makes me think students might soon be actually taking their exams on their cell phone, if they aren't already.
Maybe this ubiquity of information access is really the proof of Albert Einstein's prediction/advice:
"Never memorize anything you can look up."
"...imagine hours and hours of A, A, B, A..."
Why not use OMR (optical mark recognition)?
"...calculator had a small graphing screen and the student had pre-programmed it with all the vocabulary..."
Before every university exam I have sat we get told to clear "the memory bank of any calculator." Most exams don't permit the use of a calculator anyway (default deny!)
And speaking of exams, I'd better get going! (Ignore the timestamp; it's 0845 over here.)
I used to work on online testing software in the UK. I don't know about other systems, but the system I worked on was fairly difficult to break the security on; installing a rootkit would not gain you anything. No data on what the correct answer is is stored on the system; it is all processed elsewhere.
The real problem is that using monitors, it is a lot easier to look at what someone else is doing, and harder to see if someone is doing just that.
@packrat et al:
It's often said here that the goal of security is to increase the cost/benefit ratio for the attacker: make the attacks cost more than the data is worth. It's interesting to change the other side of the equation and make the data actually worth less, in this case by making it an open-book exam taken from home and factoring that into the overall grading scheme.
I expect the accreditation authorities periodically audit schools. Computer testing (and online security) should be part of regulatory requirements - if they aren't already - just as banks, hospitals and other organizations must now follow e-commerce and information security regulations.
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.