Why Take9 Won’t Improve Cybersecurity

There’s a new cybersecurity awareness campaign: Take9. The idea is that people—you, me, everyone—should just pause for nine seconds and think more about the link they are planning to click on, the file they are planning to download, or whatever it is they are planning to share.

There’s a website—of course—and a video, well-produced and scary. But the campaign won’t do much to improve cybersecurity. The advice isn’t reasonable, it won’t make either individuals or nations appreciably safer, and it deflects blame from the real causes of our cyberspace insecurities.

First, the advice is not realistic. A nine-second pause is an eternity in something as routine as using your computer or phone. Try it; use a timer. Then think about how many links you click on and how many things you forward or reply to. Are we pausing for nine seconds after every text message? Every Slack ping? Does the clock reset if someone replies midpause? What about browsing—do we pause before clicking each link, or after every page loads? The logistics quickly become impossible. I doubt they tested the idea on actual users.

Second, it largely won’t help. The industry should know because we tried it a decade ago. “Stop. Think. Connect.” was an awareness campaign from 2016, by the Department of Homeland Security—this was before CISA—and the National Cybersecurity Alliance. The message was basically the same: Stop and think before doing anything online. It didn’t work then, either.

Take9’s website says, “Science says: In stressful situations, wait 10 seconds before responding.” The problem with that is that clicking on a link is not a stressful situation. It’s normal, one that happens hundreds of times a day. Maybe you can train a person to count to 10 before punching someone in a bar but not before opening an attachment.

And there is no basis in science for it. It’s a folk belief, all over the Internet but with no actual research behind it—like the five-second rule when you drop food on the floor. In emotionally charged contexts, most people are already overwhelmed, cognitively taxed, and not functioning in a space where rational interruption works as neatly as this advice suggests.

Pausing Adds Little

Pauses help us break habits. If we are clicking, sharing, linking, downloading, and connecting out of habit, a pause to break that habit works. But the problem here isn’t habit alone. The problem is that people aren’t able to differentiate between something legitimate and an attack.

The Take9 website says that nine seconds is “time enough to make a better decision,” but there’s no use telling people to stop and think if they don’t know what to think about after they’ve stopped. Pause for nine seconds and… do what? Take9 offers no guidance. It presumes people have the cognitive tools to understand the myriad potential attacks and figure out which one of the thousands of Internet actions they take is harmful. If people don’t have the right knowledge, pausing for longer—even a minute—will do nothing to add knowledge.

The three-part suspicion, cognition, and automaticity model (SCAM) is one way to think about this. The first pathway is lack of knowledge: not knowing what’s risky and what isn’t. The second is habit: people doing what they always do. The third is flawed mental shortcuts, like believing PDFs to be safer than Microsoft Word documents, or that mobile devices are safer than computers for opening suspicious emails.

These pathways don’t always occur in isolation; sometimes they happen together or sequentially. They can influence each other or cancel each other out. For example, a lack of knowledge can lead someone to rely on flawed mental shortcuts, while those same shortcuts can reinforce that lack of knowledge. That’s why meaningful behavioral change requires more than just a pause; it needs cognitive scaffolding and system designs that account for these dynamic interactions.

A successful awareness campaign would do more than tell people to pause. It would guide them through a two-step process. First trigger suspicion, motivating them to look more closely. Then, direct their attention by telling them what to look at and how to evaluate it. When both happen, the person is far more likely to make a better decision.

This means that pauses need to be context specific. Think about email readers that embed warnings like “EXTERNAL: This email is from an address outside your organization” or “You have not received an email from this person before.” Those are specifics, and useful. We could imagine an AI plug-in that warns: “This isn’t how Bruce normally writes.” But of course, there’s an arms race in play; the bad guys will use these systems to figure out how to bypass them.
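As a sketch of how such context-specific banners might work, assuming a hypothetical mail client that keeps a history of sender addresses (the function name and storage are illustrative, not any real client’s implementation), the two warnings quoted above reduce to a couple of cheap checks:

```python
def contact_warnings(sender, org_domain, seen_senders):
    """Return context-specific warning banners for an incoming email.

    sender       -- the From: address of the message
    org_domain   -- the receiving organization's domain, e.g. "example.com"
    seen_senders -- set of addresses this user has received mail from before
    """
    warnings = []
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain != org_domain.lower():
        warnings.append("EXTERNAL: This email is from an address outside your organization")
    if sender.lower() not in seen_senders:
        warnings.append("You have not received an email from this person before")
    return warnings
```

A known colleague produces no banners, while a never-seen external address produces both; the point is that the warnings are specific to the message, not a generic “be careful.”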

This is all hard. The old cues aren’t there anymore. Current phishing attacks have evolved from those older Nigerian scams filled with grammar mistakes and typos. Text message, voice, or video scams are even harder to detect. There isn’t enough context in a text message for the system to flag. In voice or video, it’s much harder to trigger suspicion without disrupting the ongoing conversation. And all the false positives, when the system flags a legitimate conversation as a potential scam, work against people’s own intuition. People will just start ignoring their own suspicions, just as most people ignore all sorts of warnings that their computer puts in their way.

Even if we do this all well and correctly, we can’t make people immune to social engineering. Recently, both cyberspace activist Cory Doctorow and security researcher Troy Hunt—two people who you’d expect to be excellent scam detectors—got phished. In both cases, it was just the right message at just the right time.

It’s even worse if you’re a large organization. Security isn’t based on the average employee’s ability to detect a malicious email; it’s based on the worst person’s inability—the weakest link. Even if awareness raises the average, it won’t help enough.

Don’t Place Blame Where It Doesn’t Belong

Finally, all of this is bad public policy. The Take9 campaign tells people that they can stop cyberattacks by taking a pause and making a better decision. What’s not said, but certainly implied, is that if they don’t take that pause and don’t make those better decisions, then they’re to blame when the attack occurs.

That’s simply not true, and its blame-the-user message is one of the worst mistakes our industry makes. Stop trying to fix the user. It’s not the user’s fault if they click on a link and it infects their system. It’s not their fault if they plug in a strange USB drive or ignore a warning message that they can’t understand. It’s not even their fault if they get fooled by a look-alike bank website and lose their money. The problem is that we’ve designed these systems to be so insecure that regular, nontechnical people can’t use them with confidence. We’re using security awareness campaigns to cover up bad system design. Or, as security researcher Angela Sasse first said in 1999: “Users are not the enemy.”

We wouldn’t accept that in other parts of our lives. Imagine Take9 in other contexts. Food service: “Before sitting down at a restaurant, take nine seconds: Look in the kitchen, maybe check the temperature of the cooler, or see whether the cooks’ hands are clean.” Aviation: “Before boarding a plane, take nine seconds: Look at the engine and cockpit, glance at the plane’s maintenance log, ask the pilots if they feel rested.” This is obviously ridiculous advice. The average person doesn’t have the training or expertise to evaluate restaurant or aircraft safety—and we don’t expect them to. We have laws and regulations in place that allow people to eat at a restaurant or board a plane without worry.

But—we get it—the government isn’t going to step in and regulate the Internet. These insecure systems are what we have. Security awareness training, and the blame-the-user mentality that comes with it, are all we have. So if we want meaningful behavioral change, it needs a lot more than just a pause. It needs cognitive scaffolding and system designs that account for all the dynamic interactions that go into a decision to click, download, or share. And that takes real work—more work than just an ad campaign and a slick video.

This essay was written with Arun Vishwanath, and originally appeared in Dark Reading.

Posted on May 30, 2025 at 7:05 AM

Comments

Andrew L Duane May 30, 2025 8:08 AM

One thing inside browsers that seems like it could help (and was alluded to in the article) is a warning when a link looks like it points to a URL but the underlying URL itself does not match. That would block a whole set of typo-squatting style attacks, but not all. There may be some additions possible, like checking common substitutions like 1 for i and so on. This seems like low-hanging fruit.

This could also be done where the visible link doesn’t look like a URL, but that would catch a lot of “click HERE” to sign up style links.
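A minimal sketch of the mismatch check described above (the function name and the URL-spotting regex are illustrative, not any browser’s actual implementation):

```python
import re
from urllib.parse import urlparse

def link_mismatch(display_text, href):
    """True if the visible link text looks like a URL but names a
    different host than the one the href actually points to."""
    match = re.search(r"(?:https?://)?([a-z0-9-]+(?:\.[a-z0-9-]+)+)", display_text.lower())
    if not match:
        return False  # text isn't URL-like ("click HERE"); nothing to compare
    shown_host = match.group(1)
    real_host = (urlparse(href).hostname or "").lower()
    return shown_host != real_host
```

Because a host equality check already catches one-character substitutions, a link displayed as `paypal.com` that actually points at `paypa1.com` is flagged; the “click HERE” case would need the additional heuristics suggested above, since there is no visible hostname to compare.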

Who? May 30, 2025 8:27 AM

I do not think this advice is so bad. Indeed, it usually takes me less than a second to follow a hyperlink, but these are links I know, on web pages I have used over the last few years.

The advice is, as I understand it, for hyperlinks we find in “uncharted territory.” In those cases, I usually take quite a few more than nine seconds to double-check what I am doing.

Any email I receive is uncharted territory too, in most cases. Besides, I read those emails in nmh(7), a fork of RAND MH, so following a hyperlink takes me some time in any case.

Person McPersonface May 30, 2025 8:47 AM

I agree software vendors can do better. Web browsers should use colors and UI elements in the address bar, and when hovering over links, to help the user parse URLs. Show which part is the TLD, which is the main domain, which are the subdomains, etc. Be clear about whether you have visited the site before, and don’t leave that up to the CSS. Be clear about whether you have the domain in your bookmarks. The URL should be right in your face when hovering over a link, not down in a corner in small type. Show a big warning when the displayed URL and the actual URL don’t match.
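A sketch of the kind of hostname parsing such highlighting would need. This naive two-label split is only illustrative; real browsers consult the Public Suffix List, so suffixes like co.uk are misclassified by this heuristic:

```python
def split_host(hostname):
    """Naively split a hostname into (subdomain, registered_domain, tld).

    Illustrative only: real browsers use the Public Suffix List rather
    than assuming the last two labels form the registered domain.
    """
    labels = hostname.lower().split(".")
    if len(labels) < 2:
        return ("", hostname, "")
    tld = labels[-1]
    registered = ".".join(labels[-2:])
    subdomain = ".".join(labels[:-2])
    return (subdomain, registered, tld)
```

Even this crude split shows why the highlighting matters: a deceptive hostname like `paypal.com.evil.example` parses with `evil.example` as the actual registered domain and `paypal.com` as a mere subdomain, which is exactly the distinction users currently have to work out by eye.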

Email clients should never hide email addresses. Show them, help the user parse them like the URLs in browsers. As mentioned, show the user when they have never received mail from an address before. Hint when sender name and email matches poorly.

Educate the users, but don’t blame them, indeed! And be prepared for the inevitable breaches.

Who? May 30, 2025 8:52 AM

Pause for nine seconds and… do what?

A lot of things:

  • Check the link’s spelling. Should we take the time to type the link by hand instead of clicking on it, even if the spelling looks OK?
  • Check that the hyperlink, usually displayed at the bottom of the window, matches its description in the displayed text (in a browser).
  • Check that the link points to a reasonable source for the asset.
  • In the case of an email, check whether it is something we can reasonably expect to receive; should we contact the sender by other means, if he or she is known to us, to verify the authenticity of the link?
  • Check whether it is better to get the file from another source, such as an official repository, or through another channel.

Nothing on this list requires technical knowledge, only common sense.
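The spelling check in the first item can even be partly automated. A hedged sketch using Python’s standard difflib, with an illustrative trusted-domain list, flags hostnames that are suspiciously close to, but not identical to, domains the user already trusts:

```python
from difflib import SequenceMatcher

TRUSTED = {"microsoft.com", "paypal.com", "google.com"}  # illustrative list

def looks_typosquatted(hostname, trusted=TRUSTED, threshold=0.85):
    """Return the trusted domain this hostname imitates, or None.

    A hostname very similar to a trusted domain without being identical
    (e.g. "micrrosoft.com") is a typosquatting red flag.
    """
    host = hostname.lower()
    if host in trusted:
        return None  # exact match: the real site
    for domain in trusted:
        if SequenceMatcher(None, host, domain).ratio() >= threshold:
            return domain
    return None
```

The threshold and list are assumptions for the sketch; a real tool would seed the trusted set from the user’s own history and bookmarks, so the check stays “common sense” rather than requiring technical knowledge.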

Some weeks ago, a friend commented that he had reached, by mistake, a fraudulent web page imitating a well-known online store. He had a bad feeling about the page from the start (they were selling only old items, nothing released in recent months, and the links for the most common and secure payment gateways, like PayPal, were disabled), but he only stopped at the last step. So thinking twice works, even if it is not foolproof.

Steve Friedl May 30, 2025 9:52 AM

I saw it written once that any advice starting with “If people would just…” is doomed to fail, because it inevitably runs against human nature.

Personally, I have no trouble inspecting links and staying safe, but I have an elderly friend who will never, in his remaining years, learn how to read a link and avoid infecting his machine, even if he takes nine seconds.

Andy Lundell May 30, 2025 10:40 AM

Campaigns like this are primarily a fancy form of victim-blaming. And victim-blaming is a form of giving up and accepting crime.

Nobody would deny that more education is better, but if somebody’s first and primary reaction to a crime is to insist that potential victims should be better educated, then they’ve decided that the crime is acceptable so long as it only affects people “who deserve it.”

Jim May 30, 2025 10:57 AM

Good article. I can’t help but be puzzled, though, that they chose the awkward “Take9” rather than the alliterative and easy “Take Ten.” It’s a one-second difference, and unlike Take9 it doesn’t sound like a mispronunciation of the TEC-9 semiautomatic pistol.

Clive Robinson May 30, 2025 11:51 AM

@ Bruce, ALL,

At the end the article says,

“But—we get it—the government isn’t going to step in and regulate the Internet. These insecure systems are what we have.”

It’s not exactly correct / compleat.

It ends with “we have” which is true enough, but the reason for that is it should be “they want” or,

“we have what they want us to have.”

The “they” being those who won’t legislate or regulate, and of course those behind the elected and unelected legislators, who pay / bribe them,

“not to legislate or regulate.”

On a very related matter in the UK it’s no secret that Financial institutions want to “externalise risk” and have been lobbying in various ways as have the usual coterie of money grabbing grifters wanting total “rent seeking” over all citizens money.

Few, however, are aware of just what “paid for legislation” is being brought about… Some have been individually documented, but this gives an overview of ten of them (from which you can find more detailed information),

https://m.youtube.com/watch?v=DNiCJYUhXhM

Oh, and this is “not new”: when Tony Blair was PM, they looked into basing local council tax rates not on a home’s real value, or what you might be able to rent it out for, but on what your neighbours spent on their credit cards or earned.

So if you were retired or on a low fixed income, as something like 70% of adults are, what you would be charged for your council tax would be based on what your flashiest neighbour spent… So if they were maxing out their cards and fast running into unrecoverable debt, you, their neighbour, would get sucked into penury and bankruptcy…

They were going to try it out in Northern Ireland, but they were told to stop and consider what “the terrorists” might do that would make the London Poll Tax Riots look like “a kiddies’ picnic”… So they shelved the plans. But as you can see, they’ve not been forgotten, just changed and quietly put back on the road to being implemented.

And that’s the point behind “not legislating or regulating” to fix the Internet, but make it even more intrusive into peoples finances and lives.

Andy May 30, 2025 5:59 PM

The challenge the average user faces in evaluating the risk of their decisions is exacerbated by choices like corporations’ use of third-party URL shorteners, and Microsoft’s Outlook client on mobile obfuscating the real URI when one is present. The former makes me think of comments in the recent topic on surveys and polls, where so many of them, including several I’ve used recently that asked for feedback about a sensitive medical procedure, hand you off to an unknown third party. As for the latter, clients obfuscating links I should have full knowledge of: there are expensive third-party tools, which mostly only the most affluent companies can afford, that add a just-in-time anti-phishing option on hover to dynamically evaluate the page. But these capabilities are locked to the techno-cognoscenti and the rich. So it does not matter much whether you take 9 or 5 or 20 seconds; the data needed to make an informed decision is hidden from the user.

shagggz May 31, 2025 1:29 AM

This psyop of offloading of responsibility onto the end user / consumer is much like the plastics industry’s campaign to do the same regarding the scam of “recycling.”

A Nonny Bunny May 31, 2025 3:08 PM

Think about email readers that embed warnings like “EXTERNAL: This email is from an address outside your organization” or “You have not received an email from this person before.” Those are specifics, and useful.

Actually, that’s not useful at all when, as in our case, practically all company mail comes from an external address.
The funniest thing was when they did a phishing test, and I couldn’t figure out where to report the email, even though there was a link right there in the “this is an external email” warning header. After seeing it a million times, it was just mentally invisible.

anon June 1, 2025 3:19 AM

CDNs exacerbate the problem, as someone above has already stated. If it weren’t for CDNs, it would be possible to use DNS to permit access only to whitelisted hosts/domains. If someone clicks on a link to, for example, micrrosoft.com, they won’t be able to reach it because it’s not whitelisted.
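Under that scheme, the resolver-side decision reduces to a suffix match against the allowlist (the list below is illustrative):

```python
ALLOWED = {"microsoft.com", "example.org"}  # illustrative allowlist

def resolution_permitted(hostname, allowed=ALLOWED):
    """True if hostname equals, or is a subdomain of, an allowed domain."""
    host = hostname.lower().rstrip(".")
    return any(host == dom or host.endswith("." + dom) for dom in allowed)
```

Note the exact-suffix comparison: a typosquat like micrrosoft.com neither equals nor is a subdomain of microsoft.com, so resolution is simply refused, which is the commenter’s point.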

anonymous June 1, 2025 12:11 PM

Problematically, you may do your own sanity checking of the URLs you’re meant to click (say, when family send a link to a system they’ve paid for and you can therefore use), but once you do a deep dive into such a system (speaking from experience), no one else is concerned about the security implications, and so they deliberately ignore them. And that is even though, if you read a commercial company’s ToS and conditions, those always tend to argue in favour of not clicking and not signing up unless strictly necessary.

So, I also don’t know how to fix this, but it’s also a disconnect in society.

Clive Robinson June 1, 2025 1:57 PM

@ anonymous, ALL,

With regards,

“… but it’s also a disconnect in society.”

As intended by all the Silicon Valley Mega Corps and so on down the tree. As is said these days[1],

“You have to shake the tree to see what falls…”

But simply, it’s part of “Psychology 101” on interrogation, and means,

“If you sow chaos and confusion people tend to say or do more than they should or would if you used either threats or coercion.”

Thus the level of PII obtained is higher, but importantly the way it’s obtained is not directly attributable to the people “harvesting the fruit” because,

“Every one expects the system to be broken, not malicious.”

Except for those that have had good reason to think otherwise. In the military especially with front line troops there is an expression that used to get told over and over[2],

“Once is happenstance.
Twice is coincidence.
The third time it’s enemy action.”

Which brings forth another saying that got “faked” for a TV series by mashing together other Scottish-Canadian sayings:

1, “Owls don’t hoot” when it rains.
2, If you hear the sound of rain on a dry day, “Bring in the cows and put out the dog”.
3, “Pull through and prime” –your gun– “the enemy are on the rise”.

(I’ve translated them to modern english and filled in the missing bits “Dae ye ken?”)

It goes back to the times of cattle rustling in the lowlands and glens, and the Redcoats approaching, when those of ill intent used fake owl hoots for signaling. They also threw fine gravel or dry dirt to make sentries reveal themselves by moving.

[1] Apparently derived from,

“Mary Cassatt’s quote,

‘I think that if you shake the tree, you ought to be around when the fruit falls to pick it up’

Is a metaphorical reflection on responsibility, accountability, and commitment. This statement recommends that if you take an action that starts a procedure or creates effects, you need to also exist to deal with and handle the results of those actions.”

It should be noted that, in the Internet age, it’s not those that “do the shake” who “get the fruit”; they are just the “penny a day labour” who put in all the effort for near no reward.

[2] It’s attributed to Ian Fleming in his 1959 “James Bond/007” book “Goldfinger.” Though if you know what he did during WWII (he came up with plans to assassinate Hitler, amongst other things), it may well be derived from the quips of others in the “same game,” as there were a lot of Oxford and Cambridge types doing the same thing alongside him.

lurker June 1, 2025 8:51 PM

@anonymous
“… it’s also a disconnect in society.”

Amen, brother. I have just been fighting a usually reliable online supplier of bike parts. I need to return a faulty item and they want me to set up an account with a third party ticket clipper …

moi June 2, 2025 11:53 AM

@Bruce
“The problem is that we’ve designed these systems to be so insecure that regular, nontechnical people can’t use them with confidence.”

Yea, yea. But tech bros think that people who do not know “computers” are ninnies. Never mind that they may be highly trained physicians, nurses, lawyers, etc., in their own fields.

Rontea June 11, 2025 1:07 PM

The Take9 campaign’s advice to pause for nine seconds before clicking links or downloading files seems impractical and may not effectively address the root causes of cyberattacks, which often stem from a lack of knowledge and flawed mental shortcuts.
