Entries Tagged "schools"


Nice Security Mindset Example

A real-world one-way function:

Alice and Bob procure the same edition of the white pages book for a particular town, say Cambridge. For each letter Alice wants to encrypt, she finds a person in the book whose last name starts with this letter and uses his/her phone number as the encryption of that letter.

To decrypt the message Bob has to read through the whole book to find all the numbers.

And a way to break it:

I still use this example, with an assumption that there is no reverse look-up. I recently taught it to my AMSA students. And one of my 8th graders said, “If I were Bob, I would just call all the phone numbers and ask their last names.”

In the fifteen years I’ve been using this example, this idea never occurred to me. I am very shy, so it would never enter my mind to call a stranger and ask for their last name. My student made me realize that my own personality affected my mathematical inventiveness.

I’ve written about the security mindset in the past, and this is a great example of it.
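The scheme and both attacks can be sketched in a few lines of code. This is a toy model under invented assumptions: the names, numbers, and the `ask_last_name` callback (standing in for actually phoning someone) are all hypothetical.

```python
# Toy model of the white-pages cipher. Encryption is a fast forward
# lookup; Bob's decryption scans the whole book to invert each number;
# the 8th grader's attack treats each number as an oracle and "calls" it.

phone_book = {  # last name -> phone number (all entries invented)
    "Adams": "555-0101",
    "Baker": "555-0176",
    "Clark": "555-0142",
    "Davis": "555-0199",
}

def encrypt(message):
    """Alice: for each letter, use the number of someone whose last
    name starts with that letter."""
    out = []
    for letter in message.upper():
        name = next(n for n in phone_book if n.startswith(letter))
        out.append(phone_book[name])
    return out

def decrypt_by_scanning(ciphertext):
    """Bob: read through the whole book to invert every number."""
    reverse = {num: name for name, num in phone_book.items()}  # full scan
    return "".join(reverse[num][0] for num in ciphertext)

def decrypt_by_calling(ciphertext, ask_last_name):
    """The student's attack: call each number and ask for the last name.
    `ask_last_name` is a hypothetical stand-in for the phone call."""
    return "".join(ask_last_name(num)[0] for num in ciphertext)

ct = encrypt("cab")
assert decrypt_by_scanning(ct) == "CAB"
```

The asymmetry is the point: the forward direction needs one sorted-by-name lookup per letter, while inversion needs either a scan of the entire book or, as the student saw, an out-of-band query per number.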

Posted on April 9, 2013 at 1:49 PM

Internet Safety Talking Points for Schools

A surprisingly sensible list.

E. Why are you penalizing the 95% for the 5%? You don’t do this in other areas of discipline at school. Even though you know some students will use their voices or bodies inappropriately in school, you don’t ban everyone from speaking or moving. You know some students may show up drunk to the prom, yet you don’t cancel the prom because of a few rule breakers. Instead, you assume that most students will act appropriately most of the time and then you enforce reasonable expectations and policies for the occasional few that don’t. To use a historical analogy, it’s the difference between DUI-style policies and flat-out Prohibition (which, if you recall, failed miserably). Just as you don’t put entire schools on lockdown every time there’s a fight in the cafeteria, you need to stop penalizing entire student bodies because of statistically-infrequent, worst-case scenarios.

[…]

G. The ‘online predators will prey on your schoolchildren’ argument is a false bogeyman, a scare tactic fed to us by the media, politicians, law enforcement, and computer security vendors. The number of such incidents reported in the news is zero.

H. Federal laws do not require your draconian filtering. You can’t point the finger somewhere else. You have to own it yourself.

I. Students and teachers rise to the level of the expectations that you have for them. If you expect the worst, that’s what you’ll get.

J. Schools that ‘loosen up’ with students and teachers find that they have no more problems than they did before. And, often, they have fewer problems because folks aren’t trying to get around the restrictions.

K. There’s a difference between a teachable moment and a punishable moment. Lean toward the former as much as possible.

[…]

O. Schools with mindsets of enabling powerful student learning usually block much less than those that don’t. Their first reaction is ‘how can we make this work?’ rather than ‘we need to keep this out.’

Posted on August 24, 2012 at 1:18 PM

Cheating in Online Classes

Interesting article:

In the case of that student, the professor in the course had tried to prevent cheating by using a testing system that pulled questions at random from a bank of possibilities. The online tests could be taken anywhere and were open-book, but students had only a short window each week in which to take them, which was not long enough for most people to look up the answers on the fly. As the students proceeded, they were told whether each answer was right or wrong.

Mr. Smith figured out that the actual number of possible questions in the test bank was pretty small. If he and his friends got together to take the test jointly, they could paste the questions they saw into the shared Google Doc, along with the right or wrong answers. The schemers would go through the test quickly, one at a time, logging their work as they went. The first student often did poorly, since he had never seen the material before, though he would search an online version of the textbook on Google Books for relevant keywords to make informed guesses. The next student did significantly better, thanks to the cheat sheet, and subsequent test-takers upped their scores even further. They took turns going first. Students in the course were allowed to take each test twice, with the two results averaged into a final score.

“So the grades are bouncing back and forth, but we’re all guaranteed an A in the end,” Mr. Smith told me. “We’re playing the system, and we’re playing the system pretty well.”
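The dynamic the students exploited can be simulated. The sketch below uses invented parameters (bank size, questions per test, guessing odds); the point is only that per-question feedback plus a shared document makes each successive test-taker's score climb, regardless of who goes first.

```python
import random

random.seed(0)  # fixed seed so the illustration is deterministic

BANK_SIZE = 50      # assumed size of the question bank
DRAW = 20           # assumed questions drawn per test
GUESS_RATE = 0.25   # assumed odds of guessing an unseen question right

answers = {q: f"answer-{q}" for q in range(BANK_SIZE)}
shared_doc = {}  # the group's Google Doc: question -> known answer

def take_test():
    """Draw questions at random; answer from the shared doc when
    possible, guess otherwise. Because the system reveals whether each
    answer was right or wrong, every drawn question's correct answer
    ends up pasted into the doc for the next test-taker."""
    drawn = random.sample(range(BANK_SIZE), DRAW)
    score = sum(1 if q in shared_doc else random.random() < GUESS_RATE
                for q in drawn)
    for q in drawn:
        shared_doc[q] = answers[q]  # log the revealed answer
    return score / DRAW

scores = [take_test() for _ in range(4)]  # four schemers, taking turns
```

The first taker is stuck near the guessing rate, while later takers draw mostly from questions already in the doc; rotating who goes first and averaging two attempts per student smooths everyone toward the top.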

Posted on June 14, 2012 at 12:27 PM

Teaching the Security Mindset

In 2008, I wrote about the security mindset and how difficult it is to teach. Two professors teaching a cyberwarfare class gave an exam where they expected their students to cheat:

Our variation of the Kobayashi Maru utilized a deliberately unfair exam — write the first 100 digits of pi (3.14159…) from memory — and took place in the pilot offering of a governmental cyber warfare course. The topic of the test itself was somewhat arbitrary; we only sought a scenario that would be too challenging to meet through traditional studying. By design, students were given little advance warning for the exam. Insurrection immediately followed. Why were we giving them such an unfair exam? What conceivable purpose would it serve? Now that we had their attention, we informed the class that we had no expectation that they would actually memorize the digits of pi; we expected them to cheat. How they chose to cheat was entirely up to the student. Collaborative cheating was also encouraged, but importantly, students would fail the exam if caught.

Excerpt:

Students took diverse approaches to cheating, and of the 20 students in the course, none were caught. One student used his Mandarin Chinese skills to hide the answers. Another built a small PowerPoint presentation consisting of three slides (all black slide, digits of pi slide, all black slide). The idea being that the student could flip to the answer when the proctor wasn’t looking and easily flip forwards or backward to a blank screen to hide the answer. Several students chose to hide answers on a slip of paper under the keyboards on their desks. One student hand wrote the answers on a blank sheet of paper (in advance) and simply turned it in, exploiting the fact that we didn’t pass out a formal exam sheet. Another just memorized the first ten digits of pi and randomly filled in the rest, assuming the instructors would be too lazy to check every digit. His assumption was correct.

Read the whole paper. This is the conclusion:

Teach yourself and your students to cheat. We’ve always been taught to color inside the lines, stick to the rules, and never, ever, cheat. In seeking cyber security, we must drop that mindset. It is difficult to defeat a creative and determined adversary who must find only a single flaw among myriad defensive measures to be successful. We must not tie our hands, and our intellects, at the same time. If we truly wish to create the best possible information security professionals, being able to think like an adversary is an essential skill. Cheating exercises provide long term remembrance, teach students how to effectively evaluate a system, and motivate them to think imaginatively. Cheating will challenge students’ assumptions about security and the trust models they envision. Some will find the process uncomfortable. That is OK and by design. For it is only by learning the thought processes of our adversaries that we can hope to unleash the creative thinking needed to build the best secure systems, become effective at red teaming and penetration testing, defend against attacks, and conduct ethical hacking activities.

Here’s a Boing Boing post, including a video of a presentation about the exercise.

Posted on June 13, 2012 at 12:08 PM

Lessons in Trust from Web Hoaxes

Interesting discussion of trust in this article on web hoaxes.

Kelly’s students, like all good con artists, built their stories out of small, compelling details to give them a veneer of veracity. Ultimately, though, they aimed to succeed less by assembling convincing stories than by exploiting the trust of their marks, inducing them to lower their guard. Most of us assess arguments, at least initially, by assessing those who make them. Kelly’s students built blogs with strong first-person voices, and hit back hard at skeptics. Those inclined to doubt the stories were forced to doubt their authors. They inserted articles into Wikipedia, trading on the credibility of that site. And they aimed at very specific communities: the “beer lovers of Baltimore” and Reddit.

That was where things went awry. If the beer lovers of Baltimore form a cohesive community, the class failed to reach it. And although most communities treat their members with gentle regard, Reddit prides itself on winnowing the wheat from the chaff. It relies on the collective judgment of its members, who click on arrows next to contributions, elevating insightful or interesting content, and demoting less worthy contributions. Even Mills says he was impressed by the way in which redditors “marshaled their collective bits of expert knowledge to arrive at a conclusion that was largely correct.” It’s tough to con Reddit.

[…]

If there’s a simple lesson in all of this, it’s that hoaxes tend to thrive in communities which exhibit high levels of trust. But on the Internet, where identities are malleable and uncertain, we all might be well advised to err on the side of skepticism.

Posted on May 23, 2012 at 12:32 PM
