Entries Tagged "laws"


U.S. Crypto Export Controls

Rules on exporting cryptography outside the United States have been renewed:

President Bush this week declared a national emergency based on an “extraordinary threat to the national security.”

This might sound like a code-red, call-out-the-national-guard, we-lost-a-suitcase-nuke type of alarm, but in reality it’s just a bureaucratic way of ensuring that the Feds can continue to control the export of things like computer hardware and encryption products.

And it happens every year or so.

If Bush didn’t sign that “national emergency” paperwork, then the Commerce Department’s Bureau of Industry and Security would lose some of its regulatory power. That’s because Congress never extended the Export Administration Act after it lapsed (it’s complicated).

President Clinton did the same thing. Here’s a longer version of his “national emergency” executive order from 1994.

As a side note, encryption export rules have been dramatically relaxed since the oppressive early days of Janet “Evil PCs” Reno, Al “Clipper Chip” Gore, and Louis “ban crypto” Freeh. But they still exist. Here’s a summary.

To be honest, I don’t know what the rules are these days. I think there is a blanket exemption for mass-market software products, but I’m not sure. I haven’t a clue what the hardware requirements are. But certainly something is working right; we’re seeing more strong encryption in more software—and not just encryption software.

Posted on August 5, 2005 at 7:17 AM

Dog Poop Girl

Here’s the basic story: A woman and her dog are riding the Seoul subways. The dog poops on the floor. The woman refuses to clean it up, despite being told to by other passengers. Someone takes a picture of her, posts it on the Internet, and she is publicly shamed—and the story will live on the Internet forever. Then, the blogosphere debates the notion of the Internet as a social enforcement tool.

The Internet is changing our notions of personal privacy, and how the public enforces social norms.

Daniel Solove writes:

The dog-shit-girl case involves a norm that most people would seemingly agree to—clean up after your dog. Who could argue with that one? But what about when norm enforcement becomes too extreme? Most norm enforcement involves angry scowls or just telling a person off. But having a permanent record of one’s norm violations is upping the sanction to a whole new level. The blogosphere can be a very powerful norm-enforcing tool, allowing bloggers to act as a cyber-posse, tracking down norm violators and branding them with digital scarlet letters.

And that is why the law might be necessary—to modulate the harmful effects when the norm enforcement system gets out of whack. In the United States, privacy law is often the legal tool called in to address the situation. Suppose the dog poop incident occurred in the United States. Should the woman have legal redress under the privacy torts?

If this incident is any guide, then anyone acting outside the accepted norms of whatever segment of humanity surrounds him had better tread lightly. The question we need to answer is: is this the sort of society we want to live in? And if not, what technological or legal controls do we need to put in place to ensure that we don’t?

Solove again:

I believe that, as complicated as it might be, the law must play a role here. The stakes are too important. While entering law into the picture could indeed stifle freedom of discussion on the Internet, allowing excessive norm enforcement can be stifling to freedom as well.

All the more reason why we need to rethink old notions of privacy. Under existing notions, privacy is often thought of in a binary way—something either is private or public. According to the general rule, if something occurs in a public place, it is not private. But a more nuanced view of privacy would suggest that this case involved taking an event that occurred in one context and significantly altering its nature—by making it permanent and widespread. The dog-shit-girl would have been just a vague image in a few people’s memory if it hadn’t been for the photo entering cyberspace and spreading around faster than an epidemic. Despite the fact that the event occurred in public, there was no need for her image and identity to be spread across the Internet.

Could the law provide redress? This is a complicated question; certainly under existing doctrine, making a case would have many hurdles. And some will point to practical problems. Bloggers often don’t have deep pockets. But perhaps the possibility of lawsuits might help shape the norms of the Internet. In the end, I strongly doubt that the law alone can address this problem; but its greatest contribution might be to help along the development of blogging norms that will hopefully prevent more cases such as this one from having crappy endings.

Posted on July 29, 2005 at 4:21 PM

Cisco Harasses Security Researcher

I’ve written about full disclosure, and how disclosing security vulnerabilities is our best mechanism for improving security—especially in a free-market system. (That essay is also worth reading for a general discussion of the security trade-offs.) I’ve also written about how security companies treat vulnerabilities as public-relations problems first and technical problems second. This week at BlackHat, security researcher Michael Lynn and Cisco demonstrated both points.

Lynn was going to present security flaws in Cisco’s IOS, and Cisco went to inordinate lengths to make sure that information never got into the hands of their consumers, the press, or the public.

Cisco threatened legal action to stop the conference’s organizers from allowing a 24-year-old researcher for a rival tech firm to discuss how he says hackers could seize control of Cisco’s Internet routers, which dominate the market. Cisco also instructed workers to tear 20 pages outlining the presentation from the conference program and ordered 2,000 CDs containing the presentation destroyed.

In the end, the researcher, Michael Lynn, went ahead with a presentation, describing flaws in Cisco’s software that he said could allow hackers to take over corporate and government networks and the Internet, intercepting and misdirecting data communications. Mr. Lynn, wearing a white hat emblazoned with the word “Good,” spoke after quitting his job at Internet Security Systems Inc. Wednesday. Mr. Lynn said he resigned because ISS executives had insisted he strike key portions of his presentation.

Not being able to censor the information, Cisco decided to act as if it were no big deal:

In a release shortly after the presentation, Cisco stated, “It is important to note that the information Lynn presented was not a disclosure of a new vulnerability or a flaw with Cisco IOS software. Lynn’s research explores possible ways to expand exploitations of known security vulnerabilities impacting routers.” And went on to state “Cisco believes that the information Lynn presented at the Blackhat conference today contained proprietary information and was illegally obtained.” The statement also refers to the fact that Lynn stated in his presentation that he used a popular file decompressor to ‘unzip’ the Cisco image before reverse engineering it and finding the flaw, which is against Cisco’s use agreement.

The Cisco propaganda machine is certainly working overtime this week.

The security implications of this are enormous. If companies have the power to censor information about their products they don’t like, then we as consumers have less information with which to make intelligent buying decisions. If companies have the power to squelch vulnerability information about their products, then there’s no incentive for them to improve security. (I’ve written about this in connection to physical keys and locks.) If free speech is subordinate to corporate demands, then we are all much less safe.

Full disclosure is good for society. But because it helps the bad guys as well as the good guys (see my essay on secrecy and security for more discussion of the balance), many of us have championed “responsible disclosure” guidelines that give vendors a head start in fixing vulnerabilities before they’re announced.

The problem is that not all researchers follow these guidelines. And laws limiting free speech do more harm to society than good. (In any case, laws won’t completely fix the problem; we can’t get laws passed in every country where security researchers live.) So the only reasonable course of action for a company is to work with researchers who alert them to vulnerabilities, but also to assume that vulnerability information will sometimes be released without prior warning.

I can’t imagine the discussions inside Cisco that led them to act like thugs. I can’t figure out why they decided to attack Michael Lynn, BlackHat, and ISS rather than turn the situation into a public-relations success. I can’t believe that they thought they could have censored the information by their actions, or even that it was a good idea.

Cisco’s customers want information. They don’t expect perfection, but they want to know the extent of problems and what Cisco is doing about them. They don’t want to know that Cisco tries to stifle the truth:

Joseph Klein, senior security analyst at the aerospace electronic systems division for Honeywell Technology Solutions, said he helped arrange a meeting between government IT professionals and Lynn after the talk. Klein said he was furious that Cisco had been unwilling to disclose the buffer-overflow vulnerability in unpatched routers. “I can see a class-action lawsuit against Cisco coming out of this,” Klein said.

ISS didn’t come out of this looking very good, either:

“A few years ago it was rumored that ISS would hold back on certain things because (they’re in the business of) providing solutions,” [Ali-Reza] Anghaie, [a senior security engineer with an aerospace firm, who was in the audience,] said. “But now you’ve got full public confirmation that they’ll submit to the will of a Cisco or Microsoft, and that’s not fair to their customers…. If they’re willing to back down and leave an employee … out to hang, well what are they going to do for customers?”

Despite their thuggish behavior, this has been a public-relations disaster for Cisco. Now it doesn’t matter what they say—we won’t believe them. We know that the public-relations department handles their security vulnerabilities, and not the engineering department. We know that they think squelching information and muzzling researchers is more important than informing the public. They could have shown that they put their customers first, but instead they demonstrated that short-sighted corporate interests are more important than being a responsible corporate citizen.

And these are the people building the hardware that runs much of our infrastructure? Somehow, I don’t feel very secure right now.

EDITED TO ADD: I am impressed with Lynn’s personal integrity in this matter:

When Mr. Lynn took the stage yesterday, he was introduced as speaking on a different topic, eliciting boos. But those turned to cheers when he asked, “Who wants to hear about Cisco?” As he got started, Mr. Lynn said, “What I just did means I’m about to get sued by Cisco and ISS. Not to put too fine a point on it, but bring it on.”

And this:

Lynn closed his talk by directing the audience to his resume and asking if anyone could give him a job.

“In large part I had to quit to give this presentation because ISS and Cisco would rather the world be at risk, I guess,” Lynn said. “They had to do what’s right for their shareholders; I understand that. But I figured I needed to do what’s right for the country and for the national critical infrastructure.”

There’s a lawsuit against him. I’ll let you know if there’s a legal defense fund.

EDITED TO ADD: The lawsuit has been settled. Some details:

Michael Lynn, a former ISS researcher, and the Black Hat organisers agreed to a permanent injunction barring them from further discussing the presentation Lynn gave on Wednesday. The presentation showed how attackers could take over Cisco routers, a problem that Lynn said could bring the Internet to its knees.

The injunction also requires Lynn to return any materials and disassembled code related to Cisco, according to a copy of the injunction, which was filed in US District Court for the District of Northern California. The injunction was agreed on by attorneys for Lynn, Black Hat, ISS and Cisco.

Lynn is also forbidden to make any further presentations at the Black Hat event, which ended on Thursday, or the following Defcon event. Additionally, Lynn and Black Hat have agreed never to disseminate a video made of Lynn’s presentation and to deliver to Cisco any video recording made of Lynn.

My hope is that Cisco realized that continuing with this would be a public-relations disaster.

EDITED TO ADD: Lynn’s BlackHat presentation is on line.

EDITED TO ADD: The FBI is getting involved.

EDITED TO ADD: The link to the presentation, above, has been replaced with a cease-and-desist letter. A copy of the presentation is now here.

Posted on July 29, 2005 at 4:35 AM

Automatic Surveillance Via Cell Phone

Your cell phone company knows where you are all the time. (Well, it knows where your phone is whenever it’s on.) Turns out there’s a lot of information to be mined in that data.

Eagle’s Reality Mining project logged 350,000 hours of data over nine months about the location, proximity, activity and communication of volunteers, and was quickly able to guess whether two people were friends or just co-workers….

He and his team were able to create detailed views of life at the Media Lab, by observing how late people stayed at the lab, when they called one another and how much sleep students got.

Given enough data, Eagle’s algorithms were able to predict what people—especially professors and Media Lab employees—would do next and be right up to 85 percent of the time.
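
That 85 percent figure sounds like magic, but the underlying idea can be surprisingly simple: people are habitual, so even a crude model of location transitions predicts well. Here’s a minimal sketch of that idea (the first-order transition model and the toy tower trace are my illustration, not Eagle’s actual system):

```python
from collections import defaultdict

def train(trace):
    """Count how often each location follows each other location."""
    counts = defaultdict(lambda: defaultdict(int))
    for here, there in zip(trace, trace[1:]):
        counts[here][there] += 1
    return counts

def predict_next(counts, here):
    """Predict the most frequent successor of the current location."""
    followers = counts.get(here)
    return max(followers, key=followers.get) if followers else None

# A week of (hypothetical) hourly cell-tower observations for one volunteer.
trace = ["home", "home", "cafe", "lab", "lab", "lab", "cafe", "home"] * 7
model = train(trace)
print(predict_next(model, "lab"))  # -> 'lab': routines make us predictable
```

With months of real traces the states would be cell-tower IDs and the counts far richer, but the unsettling point stands: no exotic AI is needed, just the data.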

This is worrisome from a number of angles: government surveillance, corporate surveillance for marketing purposes, criminal surveillance. I am not mollified by this comment:

People should not be too concerned about the data trails left by their phone, according to Chris Hoofnagle, associate director of the Electronic Privacy Information Center.

“The location data and billing records is protected by statute, and carriers are under a duty of confidentiality to protect it,” Hoofnagle said.

We’re building an infrastructure of surveillance as a side effect of the convenience of carrying our cell phones everywhere.

Posted on July 28, 2005 at 4:09 PM

UK Police and Encryption

From The Guardian:

Police last night told Tony Blair that they need sweeping new powers to counter the terrorist threat, including the right to detain a suspect for up to three months without charge instead of the current 14 days….

They also want to make it a criminal offence for suspects to refuse to cooperate in giving the police full access to computer files by refusing to disclose their encryption keys.

On Channel 4 News today, Sir Ian Blair was asked why the police wanted to extend the time they could hold someone without charges from 14 days to 3 months. Part of his answer was that they sometimes needed to access encrypted computer files and 14 days was not enough time for them to break the encryption.

There’s something fishy going on here.

It’s certainly possible that password-guessing programs are more successful with three months to guess. But the Regulation of Investigatory Powers (RIP) Act, which went into effect in 2000, already allows the police to jail people who don’t surrender encryption keys:

If intercepted communications are encrypted (encoded and made secret), the act will force the individual to surrender the keys (pin numbers which allow users to decipher encoded data), on pain of jail sentences of up to two years.
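
As for the password-guessing argument, back-of-the-envelope arithmetic doesn’t support it. A quick sketch (the guessing rate is an assumed number; the conclusion doesn’t depend on it):

```python
import math

guesses_per_second = 10_000_000   # assumed throughput of a cracking setup
seconds_per_day = 86_400

guesses_14_days = guesses_per_second * 14 * seconds_per_day
guesses_90_days = guesses_per_second * 90 * seconds_per_day

print(guesses_90_days / guesses_14_days)              # ~6.4x more guesses
print(math.log2(guesses_90_days / guesses_14_days))   # ~2.7 extra bits covered
```

Going from 14 days to 90 buys an attacker fewer than three extra bits of searched keyspace; one additional random character in a passphrase more than cancels that. A key that survives two weeks of guessing will almost certainly survive three months.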

Posted on July 27, 2005 at 3:00 PM

Domestic Terrorism (U.S.)

Nice MSNBC piece on domestic terrorism in the U.S.:

The sentencing of Eric Rudolph, who bombed abortion clinics, a gay bar and the Atlanta Olympics, ought to be a milestone in the Global War on Terror. In Birmingham, Ala., on Monday he got life without parole. Next month he’ll stack up a couple more life terms in Georgia, which is the least he deserves. (He escaped the death penalty only because he made a deal to help law-enforcement agents find the explosives he had hidden while on the run in North Carolina.) Rudolph killed two people, but not for want of trying to kill many more. In his 1997 attack on an Atlanta abortion clinic, he set off a second bomb meant to take out bystanders and rescue workers. Unrepentant, of course, Rudolph defended his actions as a moral imperative: “Abortion is murder, and because it is murder I believe deadly force is needed to stop it.” The Birmingham prosecutor declared that Rudolph had “appointed himself judge, jury and executioner.”

Indeed. That’s what all terrorists have in common: the four lunatics in London earlier this month; the 19 men who attacked America on September 11, 2001; Timothy McVeigh in Oklahoma City, and many others. They were all convinced they had noble motives for wreaking their violence. Terrorists are very righteous folks. Which is why the real global war we’re fighting, let’s be absolutely clear, should be one of our shared humanity against the madness of people like these; the rule of man-made laws on the books against the divine law they imagine for themselves. It’s the cause of reason against unreason, of self-criticism against the firm convictions of fanaticism.

David Neiwert has some good commentary on the topic. He also points to this U.S. News and World Report article.

Posted on July 25, 2005 at 9:04 PM

Secure Flight

Last Friday the GAO issued a new report on Secure Flight. It’s couched in friendly language, but it’s not good:

During the course of our ongoing review of the Secure Flight program, we found that TSA did not fully disclose to the public its use of personal information in its fall 2004 privacy notices as required by the Privacy Act. In particular, the public was not made fully aware of, nor had the opportunity to comment on, TSA’s use of personal information drawn from commercial sources to test aspects of the Secure Flight program. In September 2004 and November 2004, TSA issued privacy notices in the Federal Register that included descriptions of how such information would be used. However, these notices did not fully inform the public before testing began about the procedures that TSA and its contractors would follow for collecting, using, and storing commercial data. In addition, the scope of the data used during commercial data testing was not fully disclosed in the notices. Specifically, a TSA contractor, acting on behalf of the agency, collected more than 100 million commercial data records containing personal information such as name, date of birth, and telephone number without informing the public. As a result of TSA’s actions, the public did not receive the full protections of the Privacy Act.

Get that? The TSA violated federal law when it secretly expanded Secure Flight’s use of commercial data about passengers. It also lied to Congress and the public about it.

Much of this isn’t new. Last month we learned that:

The federal agency in charge of aviation security revealed that it bought and is storing commercial data about some passengers—even though officials said they wouldn’t do it and Congress told them not to.

Secure Flight is a disaster in every way. The TSA has been operating with complete disregard for the law or Congress. It has lied to pretty much everyone. And it is turning Secure Flight from a simple program to match airline passengers against terrorist watch lists into a complex program that compiles dossiers on passengers in order to give them some kind of score indicating the likelihood that they are a terrorist.

Which is exactly what it was not supposed to do in the first place.

Let’s review:

For those who have not been following along, Secure Flight is the follow-on to CAPPS-I. (CAPPS stands for Computer Assisted Passenger Pre-Screening.) CAPPS-I has been in place since 1997, and is a simple system to match airplane passengers to a terrorist watch list. A follow-on system, CAPPS-II, was proposed last year. That complicated system would have given every traveler a risk score based on information in government and commercial databases. There was a huge public outcry over the invasiveness of the system, and it was cancelled over the summer. Secure Flight is the new follow-on system to CAPPS-I.

EPIC has more background information.

Back in January, Secure Flight was intended to just be a more efficient system of matching airline passengers with terrorist watch lists.

I am on a working group that is looking at the security and privacy implications of Secure Flight. Before joining the group I signed an NDA agreeing not to disclose any information learned within the group, and to not talk about deliberations within the group. But there’s no reason to believe that the TSA is lying to us any less than they’re lying to Congress, and there’s nothing I learned within the working group that I wish I could talk about. Everything I say here comes from public documents.

In January I gave some general conclusions about Secure Flight. These have not changed.

One, assuming that we need to implement a program of matching airline passengers with names on terrorism watch lists, Secure Flight is a major improvement—in almost every way—over what is currently in place. (And by this I mean the matching program, not any potential uses of commercial or other third-party data.)

Two, the security system surrounding Secure Flight is riddled with security holes. There are security problems with false IDs, ID verification, the ability to fly on someone else’s ticket, airline procedures, etc.

Three, the urge to use this system for other things will be irresistible. It’s just too easy to say: “As long as you’ve got this system that watches out for terrorists, how about also looking for this list of drug dealers…and by the way, we’ve got the Super Bowl to worry about too.” Once Secure Flight gets built, all it’ll take is a new law and we’ll have a nationwide security checkpoint system.

And four, a program of matching airline passengers with names on terrorism watch lists is not making us appreciably safer, and is a lousy way to spend our security dollars.

What has changed is the scope of Secure Flight. First, it started using data from commercial sources, like Acxiom. (The details are even worse.) Technically, they’re testing the use of commercial data, but it’s still a violation. Even the DHS started investigating:

The Department of Homeland Security’s top privacy official said Wednesday that she is investigating whether the agency’s airline passenger screening program has violated federal privacy laws by failing to properly disclose its mission.

The privacy officer, Nuala O’Connor Kelly, said the review will focus on whether the program’s use of commercial databases and other details were properly disclosed to the public.

The TSA’s response to being caught violating their own Privacy Act statements? Revise them:

According to previous official notices, TSA had said it would not store commercial data about airline passengers.

The Privacy Act of 1974 prohibits the government from keeping a secret database. It also requires agencies to make official statements on the impact of their record keeping on privacy.

The TSA revealed its use of commercial data in a revised Privacy Act statement to be published in the Federal Register on Wednesday.

TSA spokesman Mark Hatfield said the program was being developed with a commitment to privacy, and that it was routine to change Privacy Act statements during testing.

Actually, it’s not. And it’s better to change the Privacy Act statement before violating the old one. Changing it after the fact just looks bad.

The point of Secure Flight is to match airline passengers against lists of suspected terrorists. But the vast majority of people flagged by this list simply have the same name, or a similar name, as a suspected terrorist: Ted Kennedy and Cat Stevens are two famous examples. The question is whether combining commercial data with the PNR (Passenger Name Record) supplied by the airline could reduce this false-positive problem. Maybe knowing the passenger’s address, or phone number, or date of birth, could reduce false positives. Or maybe not; it depends on what data is on the terrorist lists. In any case, it’s certainly a smart thing to test.
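
To see why an extra field might help, here’s a minimal sketch of the matching problem (the similarity threshold, the records, and the dates of birth are invented for illustration; this is not TSA’s actual algorithm):

```python
from difflib import SequenceMatcher

# Hypothetical watch-list entry; the date of birth is invented.
WATCH_LIST = [{"name": "edward kennedy", "dob": "1965-01-01"}]

def similar(a, b, threshold=0.85):
    """Fuzzy name comparison; real systems use phonetic matching too."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def flag_by_name(passenger):
    """Name-only matching: every same-named passenger is a hit."""
    return any(similar(passenger["name"], w["name"]) for w in WATCH_LIST)

def flag_by_name_and_dob(passenger):
    """One extra PNR field clears most same-name false positives."""
    return any(similar(passenger["name"], w["name"]) and
               passenger["dob"] == w["dob"] for w in WATCH_LIST)

senator = {"name": "Edward Kennedy", "dob": "1930-01-01"}  # invented DOB
print(flag_by_name(senator))          # True  -- false positive
print(flag_by_name_and_dob(senator))  # False -- resolved by date of birth
```

The same logic extends to addresses and phone numbers, assuming the watch lists contain those fields at all.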

But using commercial data has serious privacy implications, which is why Congress mandated all sorts of rules surrounding the TSA testing of commercial data—and more rules before it could deploy a final system—rules that the TSA has decided it can ignore completely.

Commercial data had another use under CAPPS-II. In that now-dead program, every passenger would be subjected to a computerized background check to determine their “risk” to airline safety. The system would assign a risk score based on commercial data: their credit rating, how recently they moved, what kind of job they had, etc. This capability was removed from Secure Flight, but now it’s back:

The government will try to determine whether commercial data can be used to detect terrorist “sleeper cells” when it checks airline passengers against watch lists, the official running the project says….

Justin Oberman, in charge of Secure Flight at TSA, said the agency intends to do more testing of commercial data to see if it will help identify known or suspected terrorists not on the watch lists.

“We are trying to use commercial data to verify the identities of people who fly because we are not going to rely on the watch list,” he said. “If we just rise and fall on the watch list, it’s not adequate.”

Also this Congressional hearing (emphasis mine):

THOMPSON: There are a couple of questions I’d like to get answered in my mind about Secure Flight. Would Secure Flight pick up a person with strong community roots but who is in a terrorist sleeper cell or would a person have to be a known terrorist in order for Secure Flight to pick him up?

OBERMAN: Let me answer that this way: It will identify people who are known or suspected terrorists contained in the terrorist screening database, and it ought to be able to identify people who may not be on the watch list. It ought to be able to do that. We’re not in a position today to say that it does, but we think it’s absolutely critical that it be able to do that.

And so we are conducting this test of commercially available data to get at that exact issue. Very difficult to do, generally. It’s particularly difficult to do when you have a system that transports 1.8 million people a day on 30,000 flights at 450 airports. That is a very high bar to get over.

It’s also very difficult to do with a threat described just like you described it, which is somebody who has sort of burrowed themselves into society and is not readily apparent to us when they’re walking through the airport. And so I cannot stress enough how important we think it is that it be able to have that functionality. And that’s precisely the reason we have been conducting this commercial data test, why we’ve extended the testing period and why we’re very hopeful that the results will prove fruitful to us so that we can then come up here, brief them to you and explain to you why we need to include that in the system.

My fear is that TSA has already decided that they’re going to use commercial data, regardless of any test results. And once you have commercial data, why not build a dossier on every passenger and give them a risk score? So we’re back to CAPPS-II, the very system Congress killed last summer. Actually, we’re very close to TIA (Total/Terrorism Information Awareness), that vast spy-on-everyone data-mining program that Congress killed in 2003 because it was just too invasive.
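
The base-rate arithmetic behind that fear is worth spelling out. A rough sketch using the 1.8-million-passengers-a-day figure from the hearing above (the classifier accuracy is a deliberately generous assumption):

```python
passengers_per_day = 1_800_000   # TSA's own figure, quoted above
terrorists_per_day = 1           # almost certainly an overestimate
accuracy = 0.999                 # hypothetical 99.9%-accurate profiler

false_alarms = (passengers_per_day - terrorists_per_day) * (1 - accuracy)
true_hits = terrorists_per_day * accuracy

print(round(false_alarms))                     # ~1,800 innocents flagged daily
print(true_hits / (true_hits + false_alarms))  # ~0.0006: almost every flag is wrong
```

Even a profiler far more accurate than anything data mining can plausibly deliver would bury its one real hit under well over a thousand false alarms, every single day.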

Secure Flight is a mess in lots of other ways, too. A March GAO report said that Secure Flight had not met nine out of the ten conditions mandated by Congress before TSA could spend money on implementing the program. (If you haven’t read this report, it’s pretty scathing.) The redress problem—helping people who cannot fly because they share a name with a terrorist—is not getting any better. And Secure Flight is behind schedule and over budget.

It’s also a rogue program that is operating in flagrant disregard for the law. It can’t be killed completely; the Intelligence Reform and Terrorism Prevention Act of 2004 mandates that TSA implement a program of passenger prescreening. And until we have Secure Flight, airlines will still be matching passenger names with terrorist watch lists under the CAPPS-I program. But it needs some serious public scrutiny.

EDITED TO ADD: Anita Ramasastry’s commentary is worth reading.

Posted on July 24, 2005 at 9:10 PM

Stealing WiFi Access

Interesting:

Police have arrested a man for using someone else’s wireless Internet network in one of the first criminal cases involving this fairly common practice.

Near as I can tell, there was no other criminal activity involved. The man who used someone else’s wireless wasn’t doing anything wrong with it; he was just using the Internet.

Posted on July 13, 2005 at 12:39 PM

Security Risks of Street Photography

Interesting article on the particular art form of street photography. One ominous paragraph:

More onerous are post-9/11 restrictions that have placed limits on photographing in public settings. Tucker has received e-mails from professionals detained by authorities for photographing bridges and elevated trains. “There are places where photographing people on the street may become illegal,” observes Westerbeck.

Sad.

Posted on July 13, 2005 at 8:38 AM

New York Times on Identity Theft

I got some really good quotes in this New York Times article on identity theft:

Which is why I wish William Proxmire were still on the case. What we need right now is someone in power who can put the burden for this problem right where it belongs: on the financial and other institutions who collect this data. Let’s face it: by the time even the most vigilant consumer discovers his information has been used fraudulently, it’s already too late. “When people ask me what can the average person do to stop identity theft, I say, ‘nothing,'” said Bruce Schneier, the chief technology officer of Counterpane Internet Security. “This data is held by third parties and they have no impetus to fix it.”

Mr. Schneier, though, has a solution that is positively Proxmirian in its elegance and simplicity. Most of the bills that have been filed in Congress to deal with identity fraud are filled with specific requirements for banks and other institutions: encrypt this; safeguard that; strengthen this firewall.

Mr. Schneier says forget about all that. Instead, do what Congress did in the 1970’s—just put the burden on the financial industry. “If we’re ever going to manage the risks and effects of electronic impersonation,” he wrote recently on CNET (and also in his blog), “we must concentrate on preventing and detecting fraudulent transactions.” And the only way to do that, he added, is by making the financial institutions liable for fraudulent transactions.

“I think business ingenuity is top notch,” Mr. Schneier said in an interview. “And I think if you make it their problem, they will solve it.”

Yes, he acknowledged, letting consumers off the hook might cause them to be less vigilant. But that is exactly what Senator Proxmire did and to great effect. Forcing the financial institutions to bear the entire burden will cause them to tighten up their procedures until the fraud is under control. Maybe they will invest in complex software. But maybe they’ll take simpler measures as well, like making it a little less easy than it is today to obtain a credit card. Best of all, once people see these measures take effect—and realize that someone else is responsible for fixing the problems—their fear will abate.

As Senator Proxmire understood a long time ago, fear is the great enemy of commerce. Maybe this time, the banks will finally understand that as well.

Posted on July 12, 2005 at 5:14 PM

