Entries Tagged "UK"


Technological Arbitrage

This is interesting. Seems that a group of Sri Lankan credit card thieves collected the data off a bunch of UK chip-protected credit cards.

All new credit cards in the UK come embedded with RFID chips that contain different pieces of user information. In order to access the account and withdraw cash, the ATM has to verify both the magnetic strip and the RFID tag. Without this double verification the ATM will confiscate the card, and possibly even notify the police.

They’re not RFID chips, they’re normal smart card chips that require physical contact—but that’s not the point.

They couldn’t clone the chips, so they took the information off the magnetic stripe and made non-chip cards. These cards wouldn’t work in the UK, of course, so the criminals flew down to India where the ATMs only verify the magnetic stripe.

Backwards compatibility is often incompatible with security. This is a good example, and demonstrates how criminals can make use of “technological arbitrage” to leverage compatibility.
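The fallback the criminals exploited can be sketched in a few lines. This is a toy model, not any real ATM network's logic; the function names and the dictionary fields are invented for illustration:

```python
# Toy model of the attack described above. Hypothetical names throughout.

def uk_atm_accepts(card):
    """In this sketch, a UK ATM requires both the chip and the stripe to verify."""
    return card["chip_ok"] and card["stripe_ok"]

def magstripe_only_atm_accepts(card):
    """An ATM that never checks the chip verifies the stripe alone."""
    return card["stripe_ok"]

# A clone built from skimmed magstripe data: valid stripe, no chip.
clone = {"chip_ok": False, "stripe_ok": True}

print(uk_atm_accepts(clone))              # rejected in the UK
print(magstripe_only_atm_accepts(clone))  # accepted where only the stripe is checked
```

The arbitrage is visible in the two return statements: the clone fails the stricter check but passes the weaker one, so the attacker simply travels to wherever the weaker check is deployed.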

EDITED TO ADD (8/9): Facts corrected above.

Posted on August 9, 2006 at 6:32 AM

Britain Adopts Threat Levels

Taking a cue from a useless American idea, the UK has announced a system of threat levels:

“Threat levels are designed to give a broad indication of the likelihood of a terrorist attack,” the intelligence.gov.uk website said in a posting. “They are based on the assessment of a range of factors including current intelligence, recent events and what is known about terrorist intentions and capabilities. This information may well be incomplete and decisions about the appropriate security response are made with this in mind.”

Unlike the previous secret grading system, which offered seven levels of threat, the new system has been simplified to five, ranging from “low,” meaning an attack is unlikely, to “critical,” meaning an attack is expected imminently. Unlike American threat assessments, the British system is not color-coded.

The current level is “severe”:

“Severe” is the second-highest threat level, but the Web site did not say what kind of attack was likely. The assessment is roughly the same as it has been for a year.

I wrote about the stupidity of this sort of system back in 2004:

In theory, the warnings are supposed to cultivate an atmosphere of preparedness. If Americans are vigilant against the terrorist threat, then maybe the terrorists will be caught and their plots foiled. And repeated warnings brace Americans for the aftermath of another attack.

The problem is that the warnings don’t do any of this. Because they are so vague and so frequent, and because they don’t recommend any useful actions that people can take, terror threat warnings don’t prevent terrorist attacks. They might force a terrorist to delay his plan temporarily, or change his target. But in general, professional security experts like me are not particularly impressed by systems that merely force the bad guys to make minor modifications in their tactics.

And the alerts don’t result in a more vigilant America. It’s one thing to issue a hurricane warning, and advise people to board up their windows and remain in the basement. Hurricanes are short-term events, and it’s obvious when the danger is imminent and when it’s over. People can do useful things in response to a hurricane warning; then there is a discrete period when their lives are markedly different, and they feel there was utility in the higher alert mode, even if nothing came of it.

It’s quite another thing to tell people to be on alert, but not to alter their plans, as Americans were instructed last Christmas. A terrorist alert that instills a vague feeling of dread or panic, without giving people anything to do in response, is ineffective. Indeed, it inspires terror itself. Compare people’s reactions to hurricane threats with their reactions to earthquake threats. According to scientists, California is expecting a huge earthquake sometime in the next two hundred years. Even though the magnitude of the disaster will be enormous, people just can’t stay alert for two centuries. The news seems to have generated the same levels of short-term fear and long-term apathy in Californians that the terrorist warnings do. It’s human nature; people simply can’t be vigilant indefinitely.

[…]

This all implies that if the government is going to issue a threat warning at all, it should provide as many details as possible. But this is a catch-22: there’s an absolute limit to how much information the government can reveal. The classified nature of the intelligence that goes into these threat alerts precludes the government from giving the public all the information it would need to be meaningfully prepared.

[…]

A terror alert that instills a vague feeling of dread or panic echoes the very tactics of the terrorists. There are essentially two ways to terrorize people. The first is to do something spectacularly horrible, like flying airplanes into skyscrapers and killing thousands of people. The second is to keep people living in fear with the threat of doing something horrible. Decades ago, that was one of the IRA’s major aims. Inadvertently, the DHS is achieving the same thing.

There’s another downside to incessant threat warnings, one that happens when everyone realizes that they have been abused for political purposes. Call it the “Boy Who Cried Wolf” problem. After too many false alarms, the public will become inured to them. Already this has happened. Many Americans ignore terrorist threat warnings; many even ridicule them. The Bush administration lost considerable respect when it was revealed that August’s New York/Washington warning was based on three-year-old information. And the more recent warning that terrorists might target cheap prescription drugs from Canada was assumed universally to be politics-as-usual.

Repeated warnings do more harm than good, by needlessly creating fear and confusion among those who still trust the government, and anesthetizing everyone else to any future alerts that might be important. And every false alarm makes the next terror alert less effective.

The Bush administration used this system largely as a political tool. Perhaps Tony Blair has the same idea.

Crossposted to the ACLU blog.

Posted on August 2, 2006 at 4:01 PM

Complexity and Terrorism Investigations

Good article on how complexity greatly limits the effectiveness of terror investigations. The stories of wasted resources are all from the UK, but the morals are universal.

The Committee’s report accepts that the increasing number of investigations, together with their increasing complexity, will make longer detention inevitable in the future. The core calculation is essentially the one put forward by the police and accepted by the Government – technology has been an enabler for international terrorism, with email, the Internet and mobile telephony producing wide, diffuse, international networks. The data on hard drives and mobile phones needs to be examined, contacts need to be investigated and their data examined, and in the case of an incident, vast amounts of CCTV records need to be gone through. As more and more of this needs to be done, the time taken to do it will obviously climb, and as it’s ‘necessary’ to detain the new breed of terrorist early in the investigation before he can strike, more time will be needed between arrest and charge in order to build a case.

All of which is, as far as it goes, logical. But take it a little further and the inherent futility of the route becomes apparent – ultimately, probably quite soon, the volume of data overwhelms the investigators and infinite time is needed to analyse all of it. And the less developed the plot is at the time the suspects are pulled in, the greater the number of possible outcomes (things they ‘might’ be planning) that will need to be chased-up. Short of the tech industry making the breakthrough into machine intelligence that will effectively do the analysis for them (which is a breakthrough the snake-oil salesmen suggest, and dopes in Government believe, has been achieved already), the approach itself is doomed. Essentially, as far as data is concerned police try to ‘collar the lot’ and then through analysis, attempt to build the most complete picture of a case that is possible. Use of initiative, experience and acting on probabilities will tend to be pressured out of such systems, and as the data volumes grow the result will tend to be teams of disempowered machine minders chained to a system that has ground to a halt. This effect is manifesting itself visibly across UK Government systems in general, we humbly submit. But how long will it take them to figure this out?
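The “volume of data overwhelms the investigators” point is ordinary exponential growth: each hop out through a suspect’s contacts multiplies the number of people, devices, and drives to examine. A minimal sketch — the branching factor of 20 is an illustrative guess, not a figure from the report:

```python
# Illustrative only: assumes each person yields a fixed number of new
# contacts per hop. Real contact networks vary widely; the point is the shape.

def people_to_examine(contacts_per_person, hops):
    """Total people in a contact network explored to a given depth."""
    total, frontier = 1, 1
    for _ in range(hops):
        frontier *= contacts_per_person
        total += frontier
    return total

for hops in range(1, 5):
    print(f"{hops} hop(s): {people_to_examine(20, hops):,} people to examine")
```

At two hops a single arrest already implies hundreds of people whose data “needs” examining; at four hops, over a hundred thousand. Fixed detention periods cannot keep pace with that curve, which is the article’s futility argument in numbers.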

[…]

There is clearly a major problem for the security services in distinguishing disaffected talk from serious planning, and in deciding when an identified group constitutes a real threat. But the current technology-heavy approach to the threat doesn’t make a great deal of sense, because it produces very large numbers of suspects who are not and never will be a serious threat. Quantities of these suspects will nevertheless be found to be guilty of something, and along the way large amounts of investigative resource will have been expended to no useful purpose, aside from filling up 90 days. Overreaction to suggestions of CBRN threats is similarly counter-productive, because it makes it more likely that nascent groups will, just like the police, misunderstand the capabilities of the weapons, and start trying to research and build them. Mischaracterising the threat by inflating early, inexpert efforts as ‘major plots’ meanwhile fosters a climate of fear and ultimately undermines public confidence in the security services.

The oft-used construct, “the public would never forgive us if…” is a cop-out. It’s a spurious justification for taking the ‘collar the lot’ approach, throwing resources at it, ducking out of responsibility and failing to manage. Getting back to basics, taking ownership and telling the public the truth is more honest, and has some merit. A serious terror attack needs intent, attainable target and capability, the latter being the hard bit amateurs have trouble achieving without getting spotted along the way. Buying large bags of fertiliser if you’re not known to the vendor and you don’t look in the slightest bit like a farmer is going to put you onto MI5’s radar, and despite what it says on a lot of web sites, making your own explosives if you don’t know what you’re doing is a good way of blowing yourself up before you intended to. If disaffected youth had a more serious grasp of these realities, and had heard considerably more sense about the practicalities, then it’s quite possible that fewer of them would persist with their terror studies. Similarly, if the general public had better knowledge it would be better placed to spot signs of bomb factories. Bleached hair, dead plants, large numbers of peroxide containers? It could surely have been obvious.

Posted on July 14, 2006 at 7:25 AM

Getting a Personal Unlock Code for Your O2 Cell Phone

O2 is a UK cell phone network. The company gives you the option of setting up a PIN on your phone. The idea is that if someone steals your phone, they can’t make calls. If they type the PIN incorrectly three times, the phone is blocked. To deal with the problems of phone owners mistyping their PIN—or forgetting it—they can contact O2 and get a Personal Unlock Code (PUK). Presumably, the operator goes through some authentication steps to ensure that the person calling is actually the legitimate owner of the phone.

So far, so good.

But O2 has decided to automate the PUK process. Now anyone on the Internet can visit this website, type in a valid mobile telephone number, and get a valid PUK to reset the PIN—without any authentication whatsoever.

Oops.
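The intended lockout flow can be modeled as a tiny state machine. This is a sketch of generic GSM-style SIM behavior as described above, not O2’s actual implementation; the class and try limits are illustrative:

```python
# Sketch of a SIM PIN/PUK lockout. Not O2's real logic; the three-try
# limit is the common GSM default, assumed here for illustration.

class Sim:
    def __init__(self, pin, puk):
        self._pin, self._puk = pin, puk
        self.pin_tries_left = 3
        self.blocked = False

    def enter_pin(self, guess):
        if self.blocked:
            return False
        if guess == self._pin:
            self.pin_tries_left = 3
            return True
        self.pin_tries_left -= 1
        if self.pin_tries_left == 0:
            self.blocked = True          # phone now demands the PUK
        return False

    def unblock_with_puk(self, puk, new_pin):
        """The flaw: O2's website handed out this PUK given only a phone number."""
        if puk == self._puk:
            self._pin = new_pin
            self.pin_tries_left = 3
            self.blocked = False
            return True
        return False

sim = Sim(pin="1234", puk="87654321")
for guess in ("0000", "1111", "2222"):   # three wrong guesses...
    sim.enter_pin(guess)
print(sim.blocked)                        # ...and the SIM is blocked
print(sim.unblock_with_puk("87654321", "9999"))  # the PUK resets the PIN
```

The whole design assumes the PUK is only released to an authenticated owner; automate that release with no authentication and the lockout above protects nothing.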

EDITED TO ADD (7/4): A representative from O2 sent me the following:

“Yes, it does seem there is a security risk by O2 supplying such a service, but in fact we believe this risk is very small. The risk is when a customer’s phone is lost or stolen. There are two scenarios in that event:

“Scenario 1 – The phone is powered off. A PIN number would be required at next power on. Although the PUK code will indeed allow you to reset the PIN, you need to know the telephone number of the SIM in order to get it – there is no way to determine the telephone number from the SIM or handset itself. Should the telephone number be known the risk is then same as scenario 2.

“Scenario 2 – The phone remains powered on: Here, the thief can use the phone in any case without having to acquire PUK.

“In both scenarios we have taken the view that the principal security measure is for the customer to report the loss/theft as quickly as possible, so that we can remotely disable both the SIM and also the handset (so that it cannot be used with any other SIM).”

Posted on July 3, 2006 at 2:26 PM

UK Report on July 7th Terrorist Bombings

About the Intelligence and Security Committee:

Parliamentary oversight of SIS, GCHQ and the Security Service is provided by the Intelligence and Security Committee (ISC), established by the Intelligence Services Act 1994. The Committee examines the expenditure, administration and policy of the three Agencies. It operates within the ‘ring of secrecy’ and has wide access to the range of Agency activities and to highly classified information. Its cross-party membership of nine from both Houses is appointed by the Prime Minister after consultation with the Leader of the Opposition. The Committee is required to report annually to the Prime Minister on its work. These reports, after any deletions of sensitive material, are placed before Parliament by the Prime Minister. The Committee also provides ad hoc reports to the Prime Minister from time to time. The Chairman of the Intelligence and Security Committee is the Right Honourable Paul Murphy. The Committee is supported by a Clerk and secretariat in the Cabinet Office and can employ an investigator to pursue specific matters in greater detail.

They have released the “Intelligence and Security Committee Report into the London Terrorist Attacks on 7 July 2005,” and the UK government has issued a response.

Posted on May 31, 2006 at 11:19 AM

Security Risks of Airline Passenger Data

Reporter finds an old British Airways boarding pass, and proceeds to use it to find everything else about the person:

We logged on to the BA website, bought a ticket in Broer’s name and then, using the frequent flyer number on his boarding pass stub, without typing in a password, were given full access to all his personal details – including his passport number, the date it expired, his nationality (he is Dutch, living in the UK) and his date of birth. The system even allowed us to change the information.

Using this information and surfing publicly available databases, we were able – within 15 minutes – to find out where Broer lived, who lived there with him, where he worked, which universities he had attended and even how much his house was worth when he bought it two years ago. (This was particularly easy given his unusual name, but it would have been possible even if his name had been John Smith. We now had his date of birth and passport number, so we would have known exactly which John Smith.)
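The underlying failure is authentication by identifier alone: the frequent-flyer number printed on the boarding pass acts as both username and password. A toy contrast, with hypothetical account data and function names (not BA’s system):

```python
# Toy contrast between identifier-only and identifier-plus-secret lookups.
# Account contents and names are invented for illustration.

ACCOUNTS = {
    "FF123456": {"password": "s3cret", "passport": "NL9876543", "dob": "1970-01-01"},
}

def lookup_identifier_only(ff_number):
    """What the article describes: the printed number alone unlocks the record."""
    return ACCOUNTS.get(ff_number)

def lookup_with_password(ff_number, password):
    """What it should require: a secret the boarding pass doesn't carry."""
    acct = ACCOUNTS.get(ff_number)
    if acct and acct["password"] == password:
        return acct
    return None

print(lookup_identifier_only("FF123456") is not None)     # the stub alone suffices
print(lookup_with_password("FF123456", "wrong") is None)  # a secret would stop this
```

Anything printed on a discarded document must be treated as public, which means it can serve as an account name but never as a credential.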

Notice the economic pressures:

“The problem here is that a commercial organisation is being given the task of collecting data on behalf of a foreign government, for which it gets no financial reward, and which offers no business benefit in return,” says Laurie. “Naturally, in such a case, they will seek to minimise their costs, which they do by handing the problem off to the passengers themselves. This has the neat side-effect of also handing off liability for data errors.”

Posted on May 9, 2006 at 1:17 PM

Shell Suspends Chip & Pin in the UK

According to the BBC:

Petrol giant Shell has suspended chip-and-pin payments in 600 UK petrol stations after more than £1m was siphoned out of customers’ accounts.

This is just sad:

“These Pin pads are supposed to be tamper resistant, they are supposed to shut down, so that has obviously failed,” said Apacs spokeswoman Sandra Quinn.

She said Apacs was confident the problem was specific to Shell and not a systemic issue.

A Shell spokeswoman said: “Shell’s chip-and-pin solution is fully accredited and complies with all relevant industry standards.

That spokesperson simply can’t conceive of the possibility that those “relevant industry standards” were written by the people trying to sell the technology, and might not be enough to ensure security.

And this is just after APACS (that’s the Association of Payment Clearing Services, by the way) reported that chip-and-pin technology reduced fraud by 13%.

Good commentary here. See also this article. Here’s a chip-and-pin FAQ from February.

EDITED TO ADD (5/8): Arrests have been made. And details emerge:

The scam works by criminals implanting devices into chip and pin machines which can copy a bank card’s magnetic strip and record a person’s pin number.

The device cannot copy the chip, which means any fake card can only be used in machines where chip and pin is not implemented – often abroad.

This is a common attack, one that I talk about in Beyond Fear: falling back to a less secure system. The attackers made use of the fact that there is a less secure system that is running parallel to the chip-and-pin system. Clever.

Posted on May 8, 2006 at 12:41 PM

Cubicle Farms are a Terrorism Risk

The British security service MI5 is warning business leaders that their offices probably offer poor protection against terrorist bombs. The typical modern office consists of large rooms without internal walls, which puts employees at greater risk in the event of a bomb attack.

From The Scotsman:

The trend towards open-plan offices without internal walls could put employees at increased risk in the event of a terrorist bomb, MI5 has warned business leaders. The advice comes as the Security Service steps up its advice to companies on how to prepare for an attack. MI5 has produced a 40-page leaflet, “Protecting Against Terrorism”, which will be distributed to large businesses and public-sector bodies across Britain. Among the guidance in the pamphlet is that bosses should consider the security implications of getting rid of internal walls.

Open-plan offices are increasingly popular as businesses seek to improve communication and cooperation between employees. But MI5 points out that there are potential risks, too. “If you are converting your building to open-plan accommodation, remember that the removal of internal walls reduces protection against blast and fragments,” the leaflet says.

All businesses should make contingency plans for keeping staff safe in the event of a bomb attack, the Security Service advises. Instead of automatically evacuating staff, companies are recommended to gather workers in a designated “protected space” until the location of the bomb can be confirmed. “Since glass and other fragments may kill or maim at a considerable distance from the centre of a large explosion, moving staff into protected spaces is often safer than evacuating them on to the streets,” the leaflet cautions. Interior rooms with reinforced concrete or masonry walls often make suitable protected spaces, as they tend to remain intact in the event of an explosion outside the building, employers are told. But open-plan offices often lack such places, and can have other effects on emergency planning: “If corridors no longer exist then you may also lose your evacuation routes, assembly or protected spaces, while the new layout will probably affect your bomb threat contingency procedures.” Companies converting to open-plan are told to ensure that there is no significant reduction in staff protection, “for instance by improving glazing protection.”

Posted on March 31, 2006 at 5:14 AM

London Rejects Subway Scanners

Rare outbreak of security common sense in London:

London Underground is likely to reject the use of passenger scanners designed to detect weapons or explosives as they are “not practical”, a security chief for the capital’s transport authority said on 14 March 2006.

[…]

“Basically, what we know is that it’s not practical,” he told Government Computing News. “People use the tube for speed and are concerned with journey time. It would just be too time consuming. Secondly, there’s just not enough space to put this kind of equipment in.”

“Finally there’s also the risk that you actually create another target with people queuing up and congregating at the screening points.”

Posted on March 23, 2006 at 1:39 PM

