The bluestreak cleaner wrasse (Labroides dimidiatus) operates an underwater health spa for larger fish. It advertises its services with bright colours and distinctive dances. When customers arrive, the cleaner eats parasites and dead tissue lurking in any hard-to-reach places. Males and females will sometimes operate a joint business, working together to clean their clients. The clients, in return, dutifully pay the cleaners by not eating them.
That’s the basic idea, but cleaners sometimes violate their contracts. Rather than picking off parasites, they’ll take a bite of the mucus that lines their clients’ skin. That’s an offensive act—it’s like a masseuse having an inappropriate grope between strokes. The affronted client will often leave. That’s particularly bad news if the cleaners are working as a pair because the other fish, who didn’t do anything wrong, still loses out on future parasite meals.
Males don’t take this sort of behaviour lightly. Nichola Raihani from the Zoological Society of London has found that males will punish their female partners by chasing them aggressively if their mucus-snatching antics cause a client to storm out.
At first glance, the male cleaner wrasse behaves oddly for an animal: he punishes an offender on behalf of a third party, even though he hasn’t been wronged himself. That’s common practice in human societies but much rarer in the animal world. But Raihani’s experiments clearly show that the males are actually doing themselves a favour by punishing females on behalf of a third party. Their act of apparent altruism means they get more food in the long run.
Entries Tagged "natural security"
The essay is about veganism and plant eating, but I found the descriptions of plant security countermeasures interesting:
Plants can’t run away from a threat but they can stand their ground. “They are very good at avoiding getting eaten,” said Linda Walling of the University of California, Riverside. “It’s an unusual situation where insects can overcome those defenses.” At the smallest nip to its leaves, specialized cells on the plant’s surface release chemicals to irritate the predator or sticky goo to entrap it. Genes in the plant’s DNA are activated to wage systemwide chemical warfare, the plant’s version of an immune response. We need terpenes, alkaloids, phenolics—let’s move.
“I’m amazed at how fast some of these things happen,” said Consuelo M. De Moraes of Pennsylvania State University. Dr. De Moraes and her colleagues did labeling experiments to clock a plant’s systemic response time and found that, in less than 20 minutes from the moment the caterpillar had begun feeding on its leaves, the plant had plucked carbon from the air and forged defensive compounds from scratch.
Just because we humans can’t hear them doesn’t mean plants don’t howl. Some of the compounds that plants generate in response to insect mastication—their feedback, you might say—are volatile chemicals that serve as cries for help. Such airborne alarm calls have been shown to attract both large predatory insects like dragonflies, which delight in caterpillar meat, and tiny parasitic insects, which can infect a caterpillar and destroy it from within.
Enemies of the plant’s enemies are not the only ones to tune into the emergency broadcast. “Some of these cues, some of these volatiles that are released when a focal plant is damaged,” said Richard Karban of the University of California, Davis, “cause other plants of the same species, or even of another species, to likewise become more resistant to herbivores.”
There’s more in the essay.
It’s hard work being prey. Watch the birds at a feeder. They’re constantly on alert, and will fly away from food—from easy nutrition—at the slightest movement or sound. Given that I’ve never, ever seen a bird plucked from a feeder by a predator, it seems like a whole lot of wasted effort against not much of a threat.
Assessing and reacting to risk is one of the most important things a living creature has to deal with. The amygdala, an ancient part of the brain that first evolved in primitive fishes, has that job. It’s what’s responsible for the fight-or-flight reflex. Adrenaline in the bloodstream, increased heart rate, increased muscle tension, sweaty palms; that’s the amygdala in action. And it works fast, faster than consciousness: show someone a snake and their amygdala will react before their conscious brain registers that they’re looking at a snake.
Fear motivates all sorts of animal behaviors. Schooling, flocking, and herding are all security measures. Not only is it less likely that any member of the group will be eaten, but each member of the group has to spend less time watching out for predators. Animals as diverse as bumblebees and monkeys avoid food in areas where predators are common. Different prey species have developed various alarm calls, some surprisingly specific. And some prey species have even evolved to react to the alarms given off by other species.
Evolutionary biologist Randolph Nesse has studied animal defenses, particularly those that seem to be overreactions. These defenses are mostly all-or-nothing; a creature can’t do them halfway. Birds flying off, sea cucumbers expelling their stomachs, and vomiting are all examples. Using signal detection theory, Nesse showed that all-or-nothing defenses are expected to have many false alarms. “The smoke detector principle shows that the overresponsiveness of many defenses is an illusion. The defenses appear overresponsive because they are ‘inexpensive’ compared to the harms they protect against and because errors of too little defense are often more costly than errors of too much defense.”
So according to the theory, if flight costs 100 calories, both in flying and lost eating time, and there’s a 1 in 100 chance of being eaten if you don’t fly away, it’s smarter for survival to use up 10,000 calories repeatedly flying at the slightest movement even though there’s a 99 percent false alarm rate. Whatever the numbers happen to be for a particular species, it has evolved to get the trade-off right.
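The arithmetic in that paragraph can be sketched in a few lines of code. This is purely illustrative: the flight cost and predation probability come from the hypothetical numbers in the text, and the calorie-equivalent cost of being eaten is an assumed figure, not something from Nesse’s work.

```python
# Smoke detector principle, with the text's hypothetical numbers.
# DEATH_COST is an assumed calorie-equivalent figure for illustration.

FLIGHT_COST = 100        # calories per flight (flying + lost eating time)
P_PREDATOR = 0.01        # chance a given alarm is a real predator
DEATH_COST = 1_000_000   # assumed calorie-equivalent cost of being eaten

def expected_cost(always_flee: bool, alarms: int = 100) -> float:
    """Expected cost over a run of alarms, 99% of which are false."""
    if always_flee:
        return alarms * FLIGHT_COST          # pay the flight cost every time
    return alarms * P_PREDATOR * DEATH_COST  # risk death on the real ones

print(expected_cost(True))   # cost of fleeing at every alarm
print(expected_cost(False))  # expected cost of never fleeing
```

Under these assumptions, always fleeing costs 10,000 calories per hundred alarms, while never fleeing costs one expected death; fleeing wins whenever being eaten is worth more than 10,000 calories, despite the 99 percent false alarm rate.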
This makes sense, until the conditions that the species evolved under change faster than evolution can react. Even though there are far fewer predators in the city, birds at my feeder react as if they were in the primal forest. Even birds safe in a zoo’s aviary don’t realize that the situation has changed.
Humans are both no different and very different. We, too, feel fear and react with our amygdala, but we also have a conscious brain that can override those reactions. And we too live in a world very different from the one we evolved in. Our reflexive defenses might be optimized for the risks endemic to living in small family groups in the East African highlands in 100,000 BC, not 2009 New York City. But we can go beyond fear, and actually think sensibly about security.
Far too often, we don’t. We tend to be poor judges of risk. We overreact to rare risks, we ignore long-term risks, we magnify risks that are also morally offensive. We get risks wrong—threats, probabilities, and costs—all the time. When we’re afraid, really afraid, we’ll do almost anything to make that fear go away. Both politicians and marketers have learned to push that fear button to get us to do what they want.
One night last month, I was awakened from my hotel-room sleep by a loud, piercing alarm. There was no way I could ignore it, but I weighed the risks and did what any reasonable person would do under the circumstances: I stayed in bed and waited for the alarm to be turned off. No point getting dressed, walking down ten flights of stairs, and going outside into the cold for what invariably would be a false alarm—serious hotel fires are very rare. Unlike the bird in an aviary, I knew better.
You can disagree with my risk calculus, and I’m sure many hotel guests walked downstairs and outside to the designated assembly point. But it’s important to recognize that the ability to have this sort of discussion is uniquely human. And we need to have the discussion repeatedly, whether the topic is the installation of a home burglar alarm, the latest TSA security measures, or the potential military invasion of another country. These things aren’t part of our evolutionary history; we have no natural sense of how to respond to them. Our fears are often calibrated wrong, and reason is the only way we can override them.
This essay first appeared on DarkReading.com.
During chase scenes, movie protagonists often make their getaway by releasing some sort of decoy to cover their escape or distract their pursuer. But this tactic isn’t reserved for action heroes—some deep-sea animals also evade their predators by releasing decoys—glowing ones.
Karen Osborn from the Scripps Institution of Oceanography has discovered seven new species of closely related marine worms (annelids) that use this trick. Each species packs up to four pairs of “bombs” near their heads—simple, fluid-filled globes that the worms can detach at will. When released, the “bombs” give off an intense light that lasts for several seconds.
The plant Caladium steudneriifolium pretends to be ill so mining moths won’t eat it.
She believes that the plant essentially fakes being ill, producing variegated leaves that mimic those that have already been damaged by mining moth larvae. That deters the moths from laying any further eggs on the leaves, as the insects assume the previous caterpillars have already eaten most of the leaves’ nutrients.
Cabbage aphids arm themselves with chemical bombs:
Its body carries two reactive chemicals that only mix when a predator attacks it. The injured aphid dies. But in the process, the chemicals in its body react and trigger an explosion that delivers lethal amounts of poison to the predator, saving the rest of the colony.
Myrmarachne melanotarsa is a jumping spider that protects itself from predators (like other jumping spiders) by resembling an ant. Earlier this month, Ximena Nelson and Robert Jackson showed that they bolster this illusion by living in silken apartment complexes and travelling in groups, mimicking not just the bodies of ants but their social lives too.
Now Nelson and Jackson are back with another side to the ant-spider’s tale: it uses its impersonation for attack as well as defence, feasting on the eggs and youngsters of the very same spiders that its ant-like form protects it from. It is, essentially, a spider that looks like an ant to avoid being eaten by spiders so that it itself can eat spiders.
My previous post about security stories from the insect world.
The first session of the morning was “Foundations,” which is kind of a catch-all for a variety of things that didn’t really fit anywhere else. Rachel Greenstadt moderated.
Terence Taylor, International Council for the Life Sciences (suggested video to watch: Darwinian Security; Natural Security), talked about the lessons evolution teaches about living with risk. Successful species didn’t survive by eliminating the risks of their environment, they survived by adaptation. Adaptation isn’t always what you think. For example, you could view the collapse of the Soviet Union as a failure to adapt, but you could also view it as successful adaptation. Risk is good. Risk is essential for the survival of a society, because risk-takers are the drivers of change. In the discussion phase, John Mueller pointed out a key difference between human and biological systems: humans tend to respond dramatically to anomalous events (the anthrax attacks), while biological systems respond to sustained change. And David Livingstone Smith asked about the difference between biological adaptation that affects the reproductive success of an organism’s genes, even at the expense of the organism, and security adaptation. (I recommend the book he edited: Natural Security: A Darwinian Approach to a Dangerous World.)
Andrew Odlyzko, University of Minnesota (suggested reading: Network Neutrality, Search Neutrality, and the Never-Ending Conflict between Efficiency and Fairness in Markets; Economics, Psychology, and Sociology of Security), discussed human-space vs. cyberspace. People cannot build secure systems—we know that—but people also cannot live with secure systems. We require a certain amount of flexibility in our systems. And finally, people don’t need secure systems. We survive with an astounding amount of insecurity in our world. The problem with cyberspace is that it was originally conceived as separate from the physical world, and as something that could correct for the inadequacies of the physical world. Really, the two are intertwined, and human space more often corrects for the inadequacies of cyberspace. Lessons: build messy systems, not clean ones; create a web of ties to other systems; create permanent records.
danah boyd, Microsoft Research (suggested reading: Taken Out of Context—American Teen Sociality in Networked Publics), does ethnographic studies of teens in cyberspace. Teens tend not to lie to their friends in cyberspace, but they lie to the system. From an early age, they’ve been taught that they need to lie online to be safe. Teens regularly share their passwords: with their parents when forced, or with their best friend or significant other. This is a way of demonstrating trust. It’s part of the social protocol for this generation. In general, teens don’t use social media in the same way as adults do. And when they grow up, they won’t use social media in the same way as today’s adults do. Teens view privacy in terms of control, and take their cues about privacy from celebrities and how they use social media. And their sense of privacy is much more nuanced and complicated. In the discussion phase, danah wasn’t sure whether the younger generation would be more or less susceptible to Internet scams than the rest of us—they’re not nearly as technically savvy as we might think they are. “The only thing that saves teenagers is fear of their parents”; they try to lock them out, and lock others out in the process. Socio-economic status matters a lot, in ways that she is still trying to figure out. There are three types of social networks—personal networks, articulated networks, and behavioral networks—and they shouldn’t be conflated.
Mark Levine, Lancaster University (suggested reading: The Kindness of Crowds; Intra-group Regulation of Violence: Bystanders and the (De)-escalation of Violence), does social psychology. He argued against the common belief that groups are bad (mob violence, mass hysteria, peer group pressure). He collects data from UK CCTV cameras, searches it for aggressive behavior, and studies when and how bystanders either help escalate or de-escalate the situations. Results: as groups get bigger, there is no increase in anti-social acts and a significant increase in pro-social acts. He has much more analysis and results, too complicated to summarize here. One key finding: when a third party intervenes in an aggressive interaction, it is much more likely to de-escalate. Basically, groups can act against violence. “When it comes to violence (and security), group processes are part of the solution—not part of the problem?”
Jeff MacKie-Mason, University of Michigan (suggested reading: Humans are smart devices, but not programmable; Security when people matter; A Social Mechanism for Supporting Home Computer Security), is an economist: “Security problems are incentive problems.” He discussed motivation, and how to design systems to take motivation into account. Humans are smart devices; they can’t be programmed, but they can be influenced through the sciences of motivational behavior: microeconomics, game theory, social psychology, psychodynamics, and personality psychology. He gave a couple of general examples of how these theories can inform security system design.
Joe Bonneau, Cambridge University, talked about social networks like Facebook, and privacy. People misunderstand why privacy and security are important on social networking sites like Facebook. People underestimate what Facebook really is: a reimplementation of the entire Internet. “Everything on the Internet is becoming social,” and that makes security different. Phishing is different, 419-style scams are different. Social context makes some scams easier; social networks are fun, noisy, and unpredictable. “People use social networking systems with their brain turned off.” But social context can be used to spot frauds and anomalies, and can be used to establish trust.
Three more sessions to go. (I am enjoying liveblogging the event. It’s helping me focus and pay closer attention.)
Fascinating bit of evolutionary biology:
So how did natural selection equip men to solve the adaptive problem of other men impregnating their sexual partners? The answer, according to Gallup, is their penises were sculpted in such a way that the organ would effectively displace the semen of competitors from their partner’s vagina, a well-synchronized effect facilitated by the “upsuck” of thrusting during intercourse. Specifically, the coronal ridge offers a special removal service by expunging foreign sperm. According to this analysis, the effect of thrusting would be to draw other men’s sperm away from the cervix and back around the glans, thus “scooping out” the semen deposited by a sexual rival.
Evolution is the result of a struggle for survival, so you’d expect security considerations to be important.
Beet armyworm caterpillars react to the sound of a passing wasp by freezing in place, or even dropping off the plant. Unfortunately, armyworm intelligence isn’t good enough to tell the difference between enemy aircraft (the wasps that prey on them) and harmless commercial flights (bees); they react the same way to either. So by producing nectar for bees, plants not only get pollinated, but also gain some protection against being eaten by caterpillars.
The small hive beetle lives by entering beehives to steal combs and honey. They home in on the hives by detecting the bees’ own alarm pheromones. They also track in yeast that ferments the pollen and releases chemicals that spoof the alarm pheromones, attracting more beetles and more yeast. Eventually the bees abandon the hive, leaving their store of pollen and honey to the beetles and yeast.
Mountain alcon blue caterpillars get ants to feed them by spoofing a biometric: the sounds made by the queen ant.
Impersonation isn’t new. In 1560, a Frenchman was executed for impersonating Martin Guerre, and this week hackers impersonated Barack Obama on Twitter. It’s not even unique to humans: mockingbirds, viceroy butterflies, and the mimic octopus all use impersonation as a survival strategy. For people, detecting impersonation is a hard problem for three reasons: we need to verify the identity of people we don’t know, we interact with people through “narrow” communications channels like the telephone and Internet, and we want computerized systems to do the verification for us.
Traditional impersonation involves people fooling people. It’s still done today: impersonating garbage men to collect tips, impersonating parking lot attendants to collect fees, or impersonating the French president to fool Sarah Palin. Impersonating people like policemen, security guards, and meter readers is a common criminal tactic.
These tricks work because we all regularly interact with people we don’t know. No one could successfully impersonate your brother, your best friend, or your boss, because you know them intimately. But a policeman or a parking lot attendant? That’s just someone with a badge or a uniform. But badges and ID cards only help if you know how to verify one. Do you know what a valid police ID looks like? Or how to tell a real telephone repairman’s badge from a forged one?
Still, it’s human nature to trust these credentials. We naturally trust uniforms, even though we know that anyone can wear one. When we visit a Web site, we use the professionalism of the page to judge whether or not it’s really legitimate—never mind that anyone can cut and paste graphics. Watch the next time someone other than law enforcement verifies your ID; most people barely look at it.
Impersonation is even easier over limited communications channels. On the telephone, how can you distinguish someone working at your credit card company from someone trying to steal your account details and login information? On e-mail, how can you distinguish someone from your company’s tech support from a hacker trying to break into your network—or the mayor of Paris from an impersonator? Once in a while someone frees himself from jail by faxing a forged release order to his warden. This is social engineering: impersonating someone convincingly enough to fool the victim.
These days, a lot of identity verification happens with computers. Computers are fast at computation but not very good at judgment, and can be tricked. So people can fool speed cameras by taping a fake license plate over the real one, fingerprint readers with a piece of tape, or automatic face scanners with—and I’m not making this up—a photograph of a face held in front of their own. Even the most bored policeman wouldn’t fall for any of those tricks.
This is why identity theft is such a big problem today. So much authentication happens online, with only a small amount of information: user ID, password, birth date, Social Security number, and so on. Anyone who gets that information can impersonate you to a computer, which doesn’t know any better.
Despite all of these problems, most authentication systems work most of the time. Even something as ridiculous as faxed signatures works, and can be legally binding. But no authentication system is perfect, and impersonation is always possible.
This lack of perfection is okay, though. Security is a trade-off, and any well-designed authentication system balances security with ease of use, customer acceptance, cost, and so on. More authentication isn’t always better. Banks make this trade-off when they don’t bother authenticating signatures on checks under amounts like $25,000; it’s cheaper to deal with fraud after the fact. Web sites make this trade-off when they use simple passwords instead of something more secure, and merchants make this trade-off when they don’t bother verifying your signature against your credit card. We make this trade-off when we accept police badges, Best Buy uniforms, and faxed signatures with only a cursory amount of verification.
Good authentication systems also balance false positives against false negatives. Impersonation is just one way these systems can fail; they can also fail to authenticate the real person. An ATM is better off allowing occasional fraud than denying legitimate account holders access to their money. On the other hand, a false positive in a nuclear launch system is much more dangerous; better to not launch the missiles.
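That balancing act can be sketched as a toy expected-cost calculation. Everything here is invented for illustration: the error-rate curves and cost figures don’t come from any real ATM or launch system; the point is only that the optimal strictness shifts with the relative costs of the two failure modes.

```python
# Toy threshold tuning: raising an authentication threshold lowers the
# false-accept rate and raises the false-reject rate. The quadratic
# curves and cost figures below are invented for illustration.

def total_cost(threshold: float, cost_false_accept: float,
               cost_false_reject: float) -> float:
    far = (1 - threshold) ** 2   # toy false-accept rate curve
    frr = threshold ** 2         # toy false-reject rate curve
    return far * cost_false_accept + frr * cost_false_reject

def best_threshold(cost_fa: float, cost_fr: float) -> float:
    """Pick the cheapest threshold on a 0.00..1.00 grid."""
    grid = [t / 100 for t in range(101)]
    return min(grid, key=lambda t: total_cost(t, cost_fa, cost_fr))

# ATM-like: false rejects (locked-out customers) cost more than fraud,
# so the optimal threshold is lenient (low).
print(best_threshold(cost_fa=1, cost_fr=10))

# Launch-system-like: false accepts are catastrophic, so the optimal
# threshold is strict (high).
print(best_threshold(cost_fa=100, cost_fr=1))
```

With a cheap false accept and an expensive false reject, the cost-minimizing threshold lands near the lenient end; flip the cost ratio and it migrates to the strict end, which is the whole asymmetry between the ATM and the launch system.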
Decentralized authentication systems work better than centralized ones. Open your wallet, and you’ll see a variety of physical tokens used to identify you to different people and organizations: your bank, your credit card company, the library, your health club, and your employer, as well as a catch-all driver’s license used to identify you in a variety of circumstances. That assortment is actually more secure than a single centralized identity card: each system must be broken individually, and breaking one doesn’t give the attacker access to everything. This is one of the reasons that centralized systems like REAL-ID make us less secure.
Finally, any good authentication system uses defense in depth. Since no authentication system is perfect, there need to be other security measures in place if authentication fails. That’s why all of a corporation’s assets and information aren’t available to anyone who can bluff his way into the corporate offices. That’s why credit card companies have expert systems analyzing suspicious spending patterns. And it’s why identity theft won’t be solved by making personal information harder to steal.
We can reduce the risk of impersonation, but it will always be with us; technology cannot “solve” it in any absolute sense. Like any security, the trick is to balance the trade-offs. Too little security, and criminals withdraw money from all our bank accounts. Too much security, and when Barack Obama calls to congratulate you on your reelection, you won’t believe it’s him.
This essay originally appeared in The Wall Street Journal.
I have long been enamored with security trade-offs in the natural world:
A 3D video tracking system revealed that although the bees became very accurate at detecting the camouflaged spiders, they also became increasingly wary.
“When they come in to inspect flowers, they spend a little bit longer hovering in front of them when they know a camouflaged spider is present,” said Dr Ings.
With this “trade-off”, the bees may lose valuable foraging time—but they reduce the risk of becoming the crab spider’s next meal.