Essays Tagged "Communications of the ACM"
The security literature is filled with risk pathologies, heuristics that we use to help us evaluate risks. I’ve collected them from many different sources.
| Exaggerated Risks | Downplayed Risks |
|-------------------|------------------|
| Beyond one’s control | More under control |
| Externally imposed | Taken willingly |
| Talked about | Not discussed |
| Intentional or man-made | Natural |
| Immediate | Long-term or diffuse |
| Sudden | Evolving slowly over time |
| Affecting them personally | Affecting others |
| New and unfamiliar… | |
Reports are coming in torrents. Criminals are known to have downloaded personal credit information of over 145,000 Americans from ChoicePoint’s network. Hackers took over one of LexisNexis’s databases, gaining access to personal files of 32,000 people. Bank of America Corp. lost computer data tapes that contained personal information on 1.2 million federal employees, including members of the U.S. Senate. A hacker downloaded the names, Social Security numbers, voicemail and SMS messages, and photos of 400 T-Mobile customers, and probably had access to all 16.3 million of its U.S. customers. In a separate incident, Paris Hilton’s phone book and SMS messages were hacked and distributed on the Internet…
Two-factor authentication isn’t our savior. It won’t defend against phishing. It’s not going to prevent identity theft. It’s not going to secure online accounts from fraudulent transactions. It solves the security problems we had 10 years ago, not the security problems we have today.
The problem with passwords is that it is too easy to lose control of them. People give their passwords to other people. People write them down, and other people read them. People send them in email, and that email is intercepted. People use them to log into remote servers, and their communications are eavesdropped on. Passwords are also easy to guess. And once any of that happens, the password no longer works as an authentication token because you can never be sure who is typing in that password…
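The second factor in most deployed two-factor schemes is a time-based one-time password (TOTP), a short code derived from a shared secret and the current clock. A minimal sketch, following RFC 6238 with HMAC-SHA-1 (the secret, time step, and digit count below are the standard illustrative values, not anything prescribed by the essay):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, step=30, digits=6):
    """RFC 6238 time-based one-time password: HOTP over a time counter."""
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret; at Unix time 59 the expected 6-digit code is 287082.
print(totp(b"12345678901234567890", at=59))
```

Note what this does and doesn’t buy: the code expires after one time step, which defeats password reuse and replay, but an attacker who phishes the code and forwards it within that window gets in anyway — which is exactly why the essay argues two-factor authentication won’t stop phishing.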
Considerable confusion exists between the different concepts of secrecy and security, which often causes bad security and surprising political arguments. Secrecy usually contributes only to a false sense of security.
In June 2004, the U.S. Department of Homeland Security urged regulators to keep network outage information secret. The Federal Communications Commission requires telephone companies to report large disruptions of telephone service, and wants to extend that to high-speed data lines and wireless networks. DHS fears that such information would give cyberterrorists a “virtual road map” to target critical infrastructures…
Many discussions of voting systems and their relative integrity have been primarily technical, focusing on the difficulty of attacks and defenses. This is only half of the equation: it’s not enough to know how much it might cost to rig an election by attacking voting systems; we also need to know how much it would be worth to do so. Our illustrative example uses the most recent available U.S. data, but is otherwise not intended to be specific to any particular political party.
In order to gain a clear majority of the House in 2002, Democrats would have needed to win 13 seats that went to Republicans. According to Associated Press voting data, Democrats could have added 13 seats by swinging 49,469 votes. This corresponds to changing just over 1% of the 4,310,198 votes in these races and under 1/1000 of the 70 million votes cast in contested House races. The Senate was even closer: switching 20,703 votes in Missouri and New Hampshire would have provided Democrats with the necessary two seats…
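The fractions above are easy to check with a few lines of arithmetic, using the figures quoted from the Associated Press data:

```python
# Back-of-the-envelope check of the swing-vote fractions cited above.
swing = 49_469           # votes needed to swing 13 House seats
contested_13 = 4_310_198  # total votes cast in those 13 races
all_contested = 70_000_000  # votes cast in all contested House races

frac_races = swing / contested_13   # just over 1% of those races
frac_total = swing / all_contested  # under 1/1000 of all contested votes

print(f"{frac_races:.3%} of the 13 races; {frac_total:.4%} of all contested votes")
```

Both claims hold: roughly 1.15% of the votes in the targeted races, and about 0.07% of all contested House votes.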
Paperless voting machines threaten the integrity of the democratic process by what they don't do.
Voting problems associated with the 2000 U.S. Presidential election have spurred calls for more accurate voting systems. Unfortunately, many of the new computerized voting systems purchased today have major security and reliability problems.
The ideal voting technology would have five attributes: anonymity, scalability, speed, audit, and accuracy (direct mapping from intent to counted vote). In the rush to improve the first four, accuracy is being sacrificed. Accuracy is not how well the ballots are counted; it’s how well the process maps voter intent into counted votes and the final tally. People misread ballots, punch cards don’t tabulate properly, machines break down, ballots get lost. Mistakes, even fraud, happen…
Underwriters Laboratories (UL) is an independent testing organization created in 1893, when William Henry Merrill was called in to find out why the Palace of Electricity at the Columbian Exposition in Chicago kept catching on fire (which is not the best way to tout the wonders of electricity). After making the exhibit safe, he realized he had a business model on his hands. Eventually, if your electrical equipment wasn’t UL certified, you couldn’t get insurance.
Today, UL rates all kinds of equipment, not just electrical. Safes, for example, are rated based on time to crack and strength of materials. A “TL-15” rating means that the safe is secure against a burglar who is limited to safecracking tools and 15 minutes’ working time. These ratings are not theoretical: UL employs actual hotshot safecrackers, who take actual safes and test them. Applying this sort of thinking to computer networks — firewalls, operating systems, Web servers — is a natural idea. And the newly formed Center for Internet Security (no relation to UL) plans to implement it…
In the future, the computer security industry will be run by the insurance industry. I don’t mean insurance companies will start selling firewalls, but rather that the kind of firewall you use, along with the kind of authentication scheme, the kind of operating system, and the kind of network monitoring scheme you use, will be strongly influenced by the constraints of insurance.
Consider security and safety in the real world. Businesses don’t install alarms in their warehouses because it makes them safer; they do it because they get a break in their insurance rates. Hotels and office buildings don’t install sprinkler systems because they’re concerned about the welfare of their tenants, but because building codes and insurance policies demand it. These are all risk management decisions, and the risk-taker of last resort is the insurance industry…
Open any popular article on public-key infrastructure (PKI) and you’re likely to read that a PKI is desperately needed for E-commerce to flourish. Don’t believe it. E-commerce is flourishing, PKI or no PKI. Web sites are happy to take your order if you don’t have a certificate and even if you don’t use a secure connection. Fortunately, you’re protected by credit-card rules.
The main risk in believing this popular falsehood stems from the cryptographic concept of “non-repudiation”.
Under old, symmetric-key cryptography, the analog to a digital signature was a message authentication code (MAC). If Bob received a message with a correct MAC, he could verify that it hadn’t changed since the MAC was computed. If only he and Alice knew the key needed to compute the MAC and if he didn’t compute it, Alice must have. This is fine for the interaction between them, but if the message was “Pay Bob $1,000,000.00, signed Alice” and Alice denied having sent it, Bob could not go to a judge and prove that Alice sent it. He could have computed the MAC himself…
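The symmetry that defeats non-repudiation is easy to demonstrate with Python’s standard `hmac` module. The key and message below are illustrative; the point is that Bob, holding the same shared key, can produce a MAC byte-for-byte identical to anything Alice could produce:

```python
import hashlib
import hmac

shared_key = b"alice-and-bob-shared-secret"  # hypothetical key known to BOTH parties
message = b"Pay Bob $1,000,000.00, signed Alice"

# The MAC Alice would attach when sending the message...
alice_mac = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

# ...and the MAC Bob can compute on his own, with the same key.
bob_mac = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

# The two are indistinguishable, so a valid MAC proves only that
# *someone with the key* computed it — a judge cannot tell which party.
assert hmac.compare_digest(alice_mac, bob_mac)
print("MACs match:", alice_mac == bob_mac)
```

A digital signature, by contrast, is computed with a private key only Alice holds, which is what makes the cryptographic notion of non-repudiation possible in the first place (and why the MAC analogy breaks down for disputes).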
Public-key infrastructure (PKI), usually meaning digital certificates from a commercial or corporate certificate authority (CA), is touted as the current cure-all for security problems.
Certificates provide an attractive business model. They cost almost nothing to manufacture, and you can dream of selling one a year to everyone on the Internet. Given that much potential income for CAs, we now see many commercial CAs, producing literature, press briefings and lobbying. But, what good are certificates? In particular, are they any good for E-mail? What about free certificates, as with PGP?…