Entries Tagged "essays"

Academic Freedom and Security

Cryptography is the science of secret codes, and it is a primary Internet security tool to fight hackers, cyber crime, and cyber terrorism. CRYPTO is the world’s premier cryptography conference. It’s held every August in Santa Barbara.

This year, 400 people from 30 countries came to listen to dozens of talks. Lu Yi was not one of them. Her paper was accepted at the conference. But because she is a Chinese Ph.D. student in Switzerland, she was not able to get a visa in time to attend the conference.

In the three years since 9/11, the U.S. government has instituted a series of security measures at our borders, all designed to keep terrorists out. One of those measures was to tighten up the rules for foreign visas. Certainly this has hurt the tourism industry in the U.S., but the damage done to academic research is more profound and longer-lasting.

According to a survey by the Association of American Universities, many universities reported a drop of more than 10 percent in foreign student applications from last year. During the 2003 academic year, student visas were down 9 percent. Foreign applications to graduate schools were down 32 percent, according to another study by the Council of Graduate Schools.

There is an increasing trend for academic conferences, meetings and seminars to move outside of the United States simply to avoid visa hassles.

This affects all of high-tech, but ironically it particularly affects the very technologies that are critical in our fight against terrorism.

Also in August, on the other side of the country, the University of Connecticut held the second International Conference on Advanced Technologies for Homeland Security. The attendees came from a variety of disciplines—chemical trace detection, communications compatibility, X-ray scanning, sensors of various types, data mining, HAZMAT clothing, network intrusion detection, bomb defusing, remote-controlled drones—and illustrated the enormous breadth of scientific know-how that can usefully be applied to counterterrorism.

It’s wrong to believe that the U.S. can conduct the research we need alone. At the Connecticut conference, the researchers presenting results included many foreigners studying at U.S. universities. Only 30 percent of the papers at CRYPTO were written solely by U.S. authors. The most important discovery of the conference, a weakness in a mathematical function that protects the integrity of much of the critical information on the Internet, was made by four researchers from China.
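
The mathematical function in question is a cryptographic hash. As a brief illustration (mine, not the essay's; the 2004 result was widely reported as collisions in MD5-family hash functions), a hash protects integrity only as long as no two different messages share a digest:

```python
# Integrity via a cryptographic hash: any change to the data changes
# the digest. A "collision" -- two different inputs with the same
# digest -- breaks that guarantee, which is why the 2004 result was
# so significant. Illustrative sketch only, using Python's stdlib.
import hashlib

original = b"wire $100 to account 1234"
tampered = b"wire $900 to account 5678"

print(hashlib.md5(original).hexdigest())
print(hashlib.md5(tampered).hexdigest())  # differs, so tampering is detected

# A collision attack finds two *different* messages with identical
# digests, letting an attacker swap one for the other undetected.
```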

Every time a foreign scientist can’t attend a U.S. technology conference, our security suffers. Every time we turn away a qualified technology graduate student, our security suffers. Technology is one of our most potent weapons in the war on terrorism, and we’re not fostering the international cooperation and development that is crucial for U.S. security.

Security is always a trade-off, and specific security countermeasures affect everyone, both the bad guys and the good guys. The new U.S. immigration rules may affect the few terrorists trying to enter the United States on visas, but they also affect honest people trying to do the same.

All scientific disciplines are international, and free and open information exchange—both in conferences and in academic programs at universities—will result in the maximum advance in the technologies vital to homeland security. The Soviet Union tried to restrict academic freedom along national lines, and it didn’t do the country any good. We should try not to follow in those footsteps.

This essay was originally published in the San Jose Mercury News.

Posted on October 1, 2004 at 9:44 PM

Keeping Network Outages Secret

There’s considerable confusion between the concept of secrecy and the concept of security, and it is causing a lot of bad security and some surprising political arguments. Secrecy is not the same as security, and most of the time secrecy contributes to a false feeling of security instead of to real security.

In June, the U.S. Department of Homeland Security urged regulators to keep network outage information secret. The Federal Communications Commission already requires telephone companies to report large disruptions of telephone service, and wants to extend that requirement to high-speed data lines and wireless networks. But the DHS fears that such information would give cyberterrorists a “virtual road map” to target critical infrastructures.

This sounds like the “full disclosure” debate all over again. Is publishing computer and network vulnerability information a good idea, or does it just help the hackers? It arises again and again, as malware takes advantage of software vulnerabilities after they’ve been made public.

The argument that secrecy is good for security is naive, and always worth rebutting. Secrecy is only beneficial to security in limited circumstances, and certainly not with respect to vulnerability or reliability information. Secrets are fragile; once they’re lost they’re lost forever. Security that relies on secrecy is also fragile; once secrecy is lost there’s no way to recover security. Trying to base security on secrecy is just plain bad design.

Cryptography is based on secrets—keys—but look at all the work that goes into making them effective. Keys are short and easy to transfer. They’re easy to update and change. And the key is the only secret component of a cryptographic system. Cryptographic algorithms make terrible secrets, which is why one of cryptography’s most basic principles is to assume that the algorithm is public.
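
To make this concrete (a minimal sketch of my own, assuming Python's third-party cryptography package rather than anything from the essay): the algorithm below is completely public, the key is the only secret, and replacing the key is a one-line operation.

```python
# Kerckhoffs's principle in practice: the cipher recipe is public and
# widely scrutinized; the only secret is a short, easily replaced key.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # the sole secret: 32 random bytes
cipher = Fernet(key)

token = cipher.encrypt(b"network outage report")
print(cipher.decrypt(token))     # b'network outage report'

# Updating the secret is trivial: generate a new key and re-encrypt.
new_key = Fernet.generate_key()
```

Contrast that with trying to keep the algorithm itself secret: it cannot be rotated, and once it leaks it is gone for good, which is exactly the fragility described above.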

That’s the other fallacy with the secrecy argument: the assumption that secrecy works. Do we really think that the physical weak points of networks are such a mystery to the bad guys? Do we really think that the hacker underground never discovers vulnerabilities?

Proponents of secrecy ignore the security value of openness: public scrutiny is the only reliable way to improve security. Before software bugs were routinely published, software companies routinely denied their existence and wouldn’t bother fixing them, believing in the security of secrecy. And because customers didn’t know any better, they bought these systems, believing them to be secure. If we return to a practice of keeping software bugs secret, we’ll have vulnerabilities known to a few in the security community and to much of the hacker underground.

Secrecy prevents people from assessing their own risks.

Public reporting of network outages forces telephone companies to improve their service. It allows consumers to compare the reliability of different companies, and to choose one that best serves their needs. Without public disclosure, companies could hide their reliability performance from the public.

Just look at who supports secrecy. Software vendors such as Microsoft want very much to keep vulnerability information secret. The Department of Homeland Security’s recommendations were loudly echoed by the phone companies. It’s the interests of these companies that are served by secrecy, not the interests of consumers, citizens, or society.

In the post-9/11 world, we’re seeing this clash of secrecy versus openness everywhere. The U.S. government is trying to keep details of many anti-terrorism countermeasures—and even routine government operations—secret. Information about the infrastructure of plants and government buildings is secret. Profiling information used to flag certain airline passengers is secret. The standards for the Department of Homeland Security’s color-coded terrorism threat levels are secret. Even information about government operations without any terrorism connections is being kept secret.

This keeps terrorists in the dark, especially “dumb” terrorists who might not be able to figure out these vulnerabilities on their own. But at the same time, the citizenry—to whom the government is ultimately accountable—is not allowed to evaluate the countermeasures, or comment on their efficacy. Security can’t improve because there’s no public debate or public education.

Recent studies have shown that most water, power, gas, telephone, data, transportation, and distribution systems are scale-free networks. This means they always have highly connected hubs. Attackers know this intuitively and go after the hubs. Defenders are beginning to learn how to harden the hubs and provide redundancy among them. Trying to keep it a secret that a network has hubs is futile. Better to identify and protect them.
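
To see what that hub structure looks like (an illustrative sketch, assuming the third-party networkx package; the essay cites no specific model), the snippet below grows a scale-free network by preferential attachment and picks out its most connected nodes:

```python
# A scale-free network concentrates connectivity in a few hubs.
# Illustration only; assumes networkx (pip install networkx).
import networkx as nx

# Barabasi-Albert preferential attachment: 1,000 nodes, each new node
# linking to 2 existing ones, biased toward already-popular nodes.
G = nx.barabasi_albert_graph(n=1000, m=2, seed=42)

# Rank nodes by degree; the top few carry a disproportionate share of links.
hubs = sorted(G.degree(), key=lambda pair: pair[1], reverse=True)[:5]
print("Top hubs (node, degree):", hubs)

# Removing the biggest hub severs far more paths than removing a random
# node -- which is why defenders harden hubs and add redundancy.
```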

We’re all safer when we have the information we need to exert market pressure on vendors to improve security. We would all be less secure if software vendors didn’t make their security vulnerabilities public, and if telephone companies didn’t have to report network outages. And when government operates without accountability, that serves the security interests of the government, not of the people.

Security Focus article
CNN article

Another version of this essay appeared in the October issue of Communications of the ACM.

Posted on October 1, 2004 at 9:36 PM
