Friday Squid Blogging: Squid vs. Owlfish

This video is pretty fantastic:

The narrator does a great job at explaining what’s going on here, blow by gross blow, but here are the highlights:

  • Black-eyed squid snares owlfish with its two tentacles, which are tipped with hooks and suckers, and reels it in.
  • Black-eyed squid gnaws away at the owlfish’s spinal cord using its very sharp beak.
  • Owlfish is wearing a suit of large, shaggy scales, which it proceeds to shed in an effort to loosen the black-eyed squid’s eight-armed grip.
  • Owlfish’s scale trick doesn’t work; the squid burrows deeper into its back muscles, rotating it around [like] a cob of corn.
  • Owlfish dies with a gaping, red, meaty hole in its back and the drinks are on black-eyed squid because he’s feeling pretty great right now.

Posted on February 21, 2014 at 4:33 PM • 93 Comments

Comments

AC February 21, 2014 5:12 PM

Not squid-related, but I’d like to hear your thoughts on protecting computers and routers against malware that infects flash memory. You’d think a simple safeguard would be to require a special DIP switch or jumper setting before the firmware can be reflashed, but the results of web searches are not promising. It seems that protective features like that are not common in PC motherboards. How do you defend against malware that tries to infect the built-in flash memory in devices?

Anura February 21, 2014 5:34 PM

@AC

It’s not the answer you want to hear, but the best way to protect against motherboard malware is to take steps to avoid malware infection altogether: avoid opening attachments from untrusted sources, don’t run as root/admin, keep software up to date, disable unused browser plugins (and run NoScript and HTTPS Everywhere if you can), use application isolation (SELinux/AppArmor), etc., etc., etc. If you are being specifically targeted by a competent organization, individual, or group of individuals, that might not be enough; definitely not if they have the capability to gain physical access to your machine.

If you are comfortable working with electronics, I’m sure you can find a way to manually modify components so they can’t be written to, but I’m personally not comfortable with that.

Clive Robinson February 21, 2014 5:41 PM

@ AC,

The simple answer is “not much”.

There are two levels of manufacturer you have to deal with,

1, Motherboard and I/O board PCB design.
2, Semiconductor manufacture of chips on the PCBs.

If the “Flash” is in a separate chip there may be a write-enable pin that can be cut or lifted from the PCB. Unfortunately that only works with “parallel addressed” chips, not “serial addressed” chips.

If however the “Flash” is in a SoC with one or more CPUs and I/O devices, like serially addressed chips, there is little you can do to prevent the memory being overwritten.

Mike Amling February 21, 2014 5:45 PM

Have I missed something or have all Snowden’s documents been about what we might call data acquisition? I don’t recall seeing anything about taking discrete logarithms, factoring RSA moduli, linear or differential cryptanalysis, weak ECC groups, breaking hashes, subverting TLS, or anything else I’d call really cryptographic.

If so, have the released documents been selective, or was Snowden selective, or did he not have access to all of NSA, or does NSA not do cryptography? 🙂

The documents from Der Spiegel also seem to be about data acquisition and exfiltration.

AC February 21, 2014 5:51 PM

@Anura

All the measures you mentioned are fine and good, but you’d still want the kind of defense I’m looking for as a backstop. If the integrity of embedded firmware can be assured, you can easily bring a suspected-to-be-compromised system to a known good state by wiping or replacing the hard drive and reinstalling the system from trustworthy media. If you don’t have confidence in the integrity of the low-level firmware of your device, you just can’t have confidence in the device as a whole.

AC February 21, 2014 5:59 PM

@Clive

It’s disheartening to know that you came to the same conclusion as I did. I’ve looked at a very small number of BIOS chips. They were all of the “serial addressed” variety. There’s no write-enable pin that you can force to some logic level to make the chip write-protected.

kashmarek February 21, 2014 7:42 PM

The lack of a “write-enable” protection switch is most likely a deliberate part of the chip/chipset/BIOS/flash-memory design, such that the owner of a computer CAN’T prevent flash memory invasion. It all comes down to the infamous “no engineer asked for elimination of write-enable protection” statement (i.e., dictated by management).

Viv February 21, 2014 8:05 PM

Most consumer devices won’t use separate flash storage for program and data. You want to protect your program storage, but disabling flash writes would also block configuration settings from persisting through a power cycle.
These are solvable problems, but the product designer would have to see a meaningful return for increasing their BOM cost by separating the program and data storage devices.
Perhaps regularly alternate between the last couple of firmware releases? If you re-flash every n weeks and verify that the reported revision is the one you re-flashed to, it becomes much more likely that you are actually running the firmware you think you are…

kashmarek February 21, 2014 8:11 PM

It’s time to break up the NSA

http://www.cnn.com/2014/02/20/opinion/schneier-nsa-too-big/index.html?hpt=hp_t4

Is it over-reach, or just plain criminal?

Much of this has its seeds in the cold war days, when the enemy was pretty well known and the cost was pretty high for the results required (in many cases, lives). Yet the need was fairly clearly defined, and the objectives were kept narrow and within the range of meaningful value.

As electronic devices got better, faster, cheaper, and easier to use for things well beyond the original objectives, such things were indeed used well beyond the original objectives. Note: in this realm, there had to be a tipping point where such electronics would be of extreme value to spy agencies if they could sucker everyone into becoming compliant suppliers of the data these devices collect. (Why was the ARPA network opened up to the public as the Internet? Why was the military GPS opened up to public use? Why does this country have such tight controls on communications via a small number of large, compliant network providers and organizations?)

9/11 and the follow-up Patriot Act, a badly designed and implemented piece of legislation, were the major launch point that, in turn, made citizens of this country the targets rather than the ones being protected. Indeed, like the body counts from Vietnam (a propaganda measure of success), it is easier to spy on your own citizens and claim successes for the spying efforts than to do the job that should have been intended.

Most of what the NSA does is effectively questionable behavior. They won’t allow such things to be done to themselves, and that alone identifies them as self-protecting to avoid oversight, because they would expect to be out of a job or prosecuted (if that can even be done any more). They have become what the opposing spy agencies were during the cold war. That is, we have found the enemy and it is us.

exjimmy February 22, 2014 12:38 AM

Anyone know What happened to the post

NEBULA: NSA Exploit of the Day
Schneier on Security (schneier) – 2/21/2014, 9:54

that was in my RSS feed?

Laura February 22, 2014 6:23 AM

I read something from John McAfee the other day about some external security and encryption device he is working on. Does anyone have news about it?

Anonymous Coward February 22, 2014 9:39 AM

Apple neglected the part about verifying a certificate when setting up an HTTPS connection:

http://support.apple.com/kb/HT6147
https://www.imperialviolet.org/2014/02/22/applebug.html

Oopsie!

The problem was a doubled `goto fail;` line. The first is conditional on err != 0; the second is unconditional and skips the verification code while leaving err = 0, signifying success.
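
For reference, here’s a condensed sketch of the affected logic, trimmed from the published SecureTransport source (sslKeyExchange.c):

if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
    goto fail;
if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
    goto fail;
    goto fail;  /* the doubled line: always taken, with err still 0 */
if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
    goto fail;
err = sslRawVerify(ctx, ...);  /* the signature check that gets skipped */
fail:
    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;  /* err is still 0, so the broken handshake "succeeds" */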

Mistake? Sabotage? Who knows, but I’m sure some heads will roll at Apple — and perhaps heads will roll elsewhere too.

Figureitout February 22, 2014 10:41 AM

Craig Heffner over at devttys0 is really ripping the Linksys WRT120N router to shreds; I suppose if a company finds out he’s going thru their product, they may send him an envelope and ask him “to be gentle”… Recently he found that the “encryption” employed on configuration files in the firmware was an XOR w/ 0xFF. He found this by first having a known plaintext and ciphertext: the initial PW is “admin”, and his test passcode was “aa”, both starting w/ a lowercase “a”. His “hunch” turned out to be correct; like mathematicians knowing how to solve certain problems fairly quickly, these hunches come w/ experience of reverse engineering.

http://www.devttys0.com/2014/02/cracking-linksys-crypto/
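
For anyone who wants to play along at home, the whole “decryption” fits in a few lines of C. A minimal sketch, assuming the config blob really is just raw bytes XORed w/ 0xFF (the real file format on the device has more structure):

#include <stdio.h>

/* Undo the WRT120N config scrambling: XOR every byte with 0xFF.
 * XOR with 0xFF is its own inverse, so the same filter re-scrambles. */
int main(void)
{
    int c;
    while ((c = getchar()) != EOF)
        putchar(c ^ 0xFF);
    return 0;
}

Run it as a filter over the config blob and the plaintext falls out.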

Clive Robinson February 22, 2014 10:56 AM

@ Anonymous Coward,

Oops indeed 😉

The shown code is like one of those classic errors you –supposedly– get when someone is debugging and they only comment out or delete half a block…

One of C’s little problems is that closing braces and semicolons can be equivalent to the compiler…

For those too young to know C 😉 the if statement can be written either as

if (condition == fail) {
    goto fail;
}

or as

if (condition == fail)
    goto fail;

Both are the same to the compiler. However, if you delete or comment out the “if” line in the first form the compiler will complain about the orphaned closing brace, whereas it won’t with the second. Most coders, however, go for the second form, not the first, and hidden errors arise.

Knowing this gives you a nice way to put in a security flaw and make it look accidental…

Nick P February 22, 2014 1:31 PM

Another epic fail by Apple’s security people. Even more appropriate that their coding strategy was to “goto FAIL.” 🙂

Bruce Schneier February 22, 2014 3:45 PM

“Have I missed something or have all Snowden’s documents been about what we might call data acquisition? I don’t recall seeing anything about taking discrete logarithms, factoring RSA moduli, linear or differential cryptanalysis, weak ECC groups, breaking hashes, subverting TLS, or anything else I’d call really cryptographic.”

You’re right. I’m going to check again when I next get down to Rio, but from my last visit I concluded that all of Snowden’s documents are from the SIGINT side of the NSA, not the COMSEC side.

And nothing from CyberCommand either.

Bruce Schneier February 22, 2014 3:45 PM

“Anyone know What happened to the post

NEBULA: NSA Exploit of the Day
Schneier on Security (schneier) – 2/21/2014, 9:54

that was in my RSS feed?”

I accidentally published it, and then removed it. It’ll be back when it’s its turn.

B

Nick P February 22, 2014 4:58 PM

@ Bruce Schneier

Re COMSEC absence

That is interesting. I think it’s the case because Snowden only worked the SIGINT side. He and the rest of his coworkers would only have had access to systems related to that function. Even if he was overprivileged, we can assume the NSA wouldn’t be connecting totally unrelated stuff to that network.

COMSEC is handled by the Information Assurance Directorate. These are different people, systems, maybe even locations. As a matter of fact, I can’t recall a Type 1 cryptosystem with Booz Allen’s name on it. The main groups with access to internal data on GOTS crypto are government-affiliated universities, military groups, and the big defense contractors making Type 1 devices.

So, in a nutshell, COMSEC designs were developed far enough away from Snowden that he had no access to them.

Bonus thought: Snowden probably wouldn’t have stolen COMSEC material anyway, as its sole purpose is to protect highly sensitive material. Morally justifying leaks about a massive spy apparatus and its uses is one thing. Leaking the protection code and techniques for classified networks, on the other hand, would be “aiding and abetting” our enemies.

COMSEC February 23, 2014 3:44 AM

er… it would be helping us all secure our systems. If the NSA’s security is well done, and I assume it most probably is, then they shouldn’t care that much about it being in the public domain, at least as far as being attacked is concerned (they might have a concern about others being able to protect themselves using that info, but wouldn’t that mean there’s a moral justification for leaking it?)

yesme February 23, 2014 4:59 AM

@Clive Robinson,

If you want to check out whether this bug is also present in other SSL/TLS implementations, see this link.

Clive Robinson February 23, 2014 5:42 AM

@ COMSEC,

    they might have a concern about others being able to protect themselves using that info, but wouldn’t that mean there’s a moral justification for leaking it?

Not under US legislation, which uses the premise that “encryption etc” are “munitions”; remember, even shielding cloth used to reduce EM radiation falls under the “etc”, even for US citizens. Basically the rest of the world had to drag the US into responsible EMC behaviour, and the FCC still lags way behind on this, which is why quite a few US exporters have issues in other parts of the world.

You also have to remember that from the USG position morals are “verboten” for all. Right now the USG spin is to call Ed Snowden a traitor, and the likes of Clapper and Alexander and their pet poodle Obama are looking for any way to make that charge stick (which currently it appears they can’t).

Whilst the Snowden revelations show much misbehaviour by the USG, so far the only “aid to the enemy” in the revelations has been “encryption works”…

Everything else can be put together, by a thoughtful person with a pass grade in high-school physics, from publicly available information, much of which has been mentioned on this blog repeatedly, long before the Snowden revelations.

Look at it this way: you know by simple observation that a window allows through EM radiation in the visible and IR ranges. Likewise you could go online and find the absorption and transmission spectra of “crown glass” on various web sites (i.e. astronomy and energy-efficiency sites). So you could be expected to know, if you cared to think about it, that a camera or thermal imager will “see through” the window. Now if I tell you the TAO has developed a thermal imager that looks like a window box of flowers, I’m not telling you anything you could not easily have worked out for yourself in a few moments.

If however I also told you this window-box device had very poor rejection of other IR wavelengths, and could thus be effectively jammed with the IR diode in a TV remote control flashed at a given rate, then I would be revealing something you probably could not work out for yourself. Thus revealing the IR-diode trick would potentially be giving “aid” to the USG’s potential enemies.

But would it really?

Although the “unholy trinity” I’ve mentioned above have alluded to the fact that “methods” have been released by the Snowden revelations, and that revealing methods can aid terrorists, what they won’t, or more probably cannot, do is give examples that will “pass muster” with regard to the revelations. Journalists trot out unnamed sources that say unprovable things like “we’ve seen changes”, which is unquantified nonsense that any half-baked journalist with a deadline and words to fill can make up, and often does.

You need to take a step or two back and look at what is known historically.

The simple fact is that until fairly recently “terrorists” were being used by the “superpowers” to fight “proxy wars”; as part of this, both sides gave the terrorists training about the “methods” that would be used against them and how to avoid them. We know the terrorists so mistrust both commercial and open-source encryption products that we think they may have developed their own. We also know they are well aware of the problems with phones and other electronic communications, because of what the USG released about AQ and other OpSec when political point-scoring came to the fore pre-election. And, as can easily be shown, such point-scoring actually caused significant damage to other nations’ intel assets.

The Snowden revelations’ “grievous fault” is not revealing “methods” but doing it in a way that shows US citizens just how badly they have been manipulated and spied upon by their own government, thus revealing a small part of the truth about the “unholy trinity” and holding them up to public ridicule, embarrassment, and potentially much loss of lucrative self-enrichment.

Whilst I’m sure Clapper is not going to have a poor retirement, he has been held up publicly as naive, which will make quite a few shareholders regard him like “poison ivy”; thus quite a few companies who might well have offered him a sinecure board position will not want him on their boards or, as some journalists put it, to “get into bed” with them.

But with the likes of Clapper it’s rarely about the money; it’s usually about the power and influence, and having people give them respect because of it. He believes himself to be a “king maker”, and to be shown publicly to be naive rather ruins the image he wants to project. And I suspect that what is going to hurt the “unholy trinity” the most is the thought of the “Nixon effect” or “McCarthy effect”, and how history will look on them: not as great men, but as those anti-American criminals too stupid not to get caught.

Clive Robinson February 23, 2014 6:51 AM

@ YesMe,

Ouch…

I’ve just spent a little time reading through some of the 200 or so other comments, and what can I say…

It does reinforce one of my prejudices about code reviews, in that “they are only as good as those doing them” and “management use the best people to cut code, not to review code, in order to reduce costs”.

I don’t know how this bug got into the code or by whom, but it also reinforces another of my prejudices, about testing code. Few places I know of build test harnesses as they develop code (heck, they don’t even comment code). Often the test harness is written to the specification, which means the harness is “positive test” rather than “negative test” biased, in part due to the way many specifications are written.

Worse, they tend to have only one test harness, which is run against the complete final code and does not get updated with code development unless the harness gets broken by the code. Whilst this works for positive tests, it fails miserably for negative tests.

The major problem with “one test harness”, though, is complexity. Complexity rises as a significant power of the number of unconstrained paths in the code. There is no possibility of writing one test harness to cover the entire complexity of even a moderately sized piece of worthwhile code.

The way to deal with complexity is to break the code into small pieces with clearly defined boundaries and fully specified interfaces, with all errors checked and all exceptions handled. Each small piece of code gets three test harnesses: the first the mainly positive functional tests, the second the mainly negative error tests, and the third the mainly negative exception tests. Only when the small piece of code passes these “local” tests can it be added to the main code tree and subjected to further “regional” and “global” test harnesses.
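
As a minimal sketch of the “local” three-harness split (parse_len() is a made-up example unit, not from any real code base):

#include <assert.h>

/* Example unit under test: parse a 4-digit decimal length field,
 * returning its value, or -1 on any malformed input. */
static int parse_len(const char *s)
{
    int v = 0;
    if (s == NULL)
        return -1;
    for (int i = 0; i < 4; i++) {
        if (s[i] < '0' || s[i] > '9')
            return -1;
        v = v * 10 + (s[i] - '0');
    }
    return (s[4] == '\0') ? v : -1;
}

static void test_functional(void)   /* mainly positive */
{
    assert(parse_len("0005") == 5);
    assert(parse_len("9999") == 9999);
}

static void test_errors(void)       /* mainly negative: malformed input */
{
    assert(parse_len("") == -1);
    assert(parse_len("12x4") == -1);
    assert(parse_len("00005") == -1);
}

static void test_exceptions(void)   /* mainly negative: outright abuse */
{
    assert(parse_len(NULL) == -1);
}

int main(void)
{
    test_functional();
    test_errors();
    test_exceptions();
    return 0;
}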

It’s this sort of in-depth testing that is one of several things differentiating “engineering” from “artisanal” code production. And as I’ve said before, engineering came out of Victorian and earlier artisanal “boiler making”, which was not just inefficient but killed people, and thus caused legislation to be passed to stop the deaths.

AlanS February 23, 2014 10:24 AM

Couple of events relating to collection and use of location data that I don’t believe have been discussed here this week:

The Massachusetts Supreme Judicial Court decided this week in Commonwealth v. Augustine that police are required to get a warrant to obtain cell site location information. Link to EFF summary here. The MA court cites U.S. v. Jones (2012) and other recent cases in coming to its decision. They also decided that the third-party doctrine (Smith v. Maryland, 1979) wasn’t applicable to CSLI.

Also discussed on the EFF site is DHS use of license-plate data. The WaPo revealed earlier this week that DHS has canceled plans to build a national license-plate tracking system. However, there are already many local police department databases, as well as massive private databases that DHS has been using for years, according to documents obtained by the ACLU.

Nick P February 23, 2014 11:24 AM

@ COMSEC

“If the NSA’s security is well done, and I assume it most probably is, then they shouldn’t care that much about it being in the public domain, at least as far as being attacked is concerned”

Not true at all. Most security in the open has been compromised at some point, often by attackers with the specs/code. The NSA prefers to design strong cryptosystems and keep their use restricted, to minimize the odds that enemies will hack them. Secure design + physical protection + obfuscation = better security in practice. Obfuscations are also how I defeated top-notch attackers in the past.

Figureitout February 23, 2014 12:06 PM

Clive Robinson && others interested RE: IR diodes
–Just last night I messed w/ the Arduino and IR b/c I’d been meaning to check out this IR receiver USB plugin for Sony Vaio computers. I’m having problems w/ the receiver and can’t find much if any info on it; even on the PCB there’s little to nothing, not even a manufacturer on the chip… If anyone knows how I can hack it in any way, please let me know, b/c it’s of little use to me now besides a nice box, maybe a spare IR receiver, and a split USB cord… The part is a Sony IR receiver (PCVA-IR5U).

Library can be found here:

http://www.righto.com/2009/08/multi-protocol-infrared-remote-library.html

The guy is known in the open maker/hacker community. And the circuit is so easy b/c that’s what Arduino does. It correctly ID’d my Sony remote; they’ve also reverse engineered the protocol and give the hex codes as well. Lots of fun, I just need to get it set up on one of my smaller computers and start sniffing protocols.

Whether someone wants to use it for malice (more like defense from it) or for fun, check it out. It’d be nice if you just stated the frequency rather than beating around the bush, btw.

pianissimo February 23, 2014 12:28 PM

@ Clive Robinson:

The Apple bug is egregious because any code coverage tool would have found it.
For those not hip to software development in-speak, a code coverage tool analyzes a program and identifies those sections that are conditionally executed. It is run against a test suite that injects tests into the program, and will tell you whether the tests exercise (or ‘cover’) all the sections and all the possible conditional states.

The buggy Apple file contains dead code, which a coverage tool would report as a serious problem. Even without dead code, the test suite would need to provide instances of invalid or unsigned certificate signatures to achieve 100% coverage.
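
Concretely, in miniature (a hypothetical fragment, not Apple’s code): with the doubled goto in place, the lines below it can never execute, and a coverage run shows them at zero hits no matter what tests you feed in:

static int check(int hash_ok, int sig_ok)
{
    int err = 0;
    if (!hash_ok)
        goto fail;
        goto fail;      /* unconditional: everything below is dead */
    if (!sig_ok)        /* zero hits in any coverage report */
        err = -1;
fail:
    return err;
}

Compile with coverage instrumentation (e.g. gcc --coverage), run the tests, and the report flags the uncovered lines immediately.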

So the take home message is that Apple were caught, once again, falling short of best practices in the industry.

Figureitout February 23, 2014 12:56 PM

Not a very positive review of cyberstreetwise.com, a website by the UK gov meant to change attitudes towards security. Then they build it, of course, in JavaScript, a language known for its security (I cringe at all the JS on that page), and it looks like a website for grade-school children who still watch Barney and Power Rangers; way to take it seriously…

http://www.theregister.co.uk/2014/02/14/cyber_streetwise/

Software needs set chunks of code that are considered secure standards, so future coders have some actual knowledge they can take to heart and rely on, rather than rules that only apply sometimes, and crazy bugs. I’m talking about assembler or even machine-language code that is secure. You have this in hardware and electrical engineering: you have tools you can rely on.

Unix does this a little; it feels like something you can actually have a little confidence in…

Benni February 23, 2014 1:13 PM

Obama said that Merkel would no longer be monitored.

Well, now the NSA has orders to monitor Merkel’s friends more closely:
http://www.sueddeutsche.de/politik/us-geheimdienste-nsa-ueberwacht-innenminister-de-maizire-1.1896464

“We have the order not to let any information slip through once Merkel is no longer monitored,” says the spy, who goes on to describe how they monitor the interior minister Thomas de Maizière. Apparently, Merkel has asked de Maizière for advice several times; in one phone call she asked “what should I think?”

The article also says that they closely monitor 320 people in Germany who have either economic or political influence. They explicitly mention the monitoring of the German company SAP.

I would be very interested in whether they monitor some scientists too.

For this surveillance, the NSA has 297 employees.

Benni February 23, 2014 3:53 PM

I wonder why this problem with Apple came out only now. Did nobody test this before?

It makes me wonder whether the Windows crypto API, with its NSA key, could contain similar bugs. Can anyone test whether Windows appropriately verifies security certificates? How does Windows behave in the situation where iOS with that bug failed?

Clive Robinson February 23, 2014 4:33 PM

OFF Topic :

It would appear that Micro$haft has decided that blatant profiteering from tax-funded public-sector organisations is the way to deal with WinXP users after April.

Put simply, M$ want 200USD/seat for “extended security support” for the rest of 2014, 400USD for the next year, and 800USD for the year after.

It would appear that many large organisations are stuck on XP “because they made the mistake of developing major mission-critical systems using Micro$haft technology”, which uses Micro$haft proprietary extensions in the XP browser that are not supported in Win7 and beyond…

As is not untypical of major government projects, some have been in the development-delivery phase for more than five years, and some are still being actively developed in what is very obsolete technology. What is worse, for some of these government systems vast amounts of money have been handed over for what is essentially “lemon technology”. But bad as that may be, some are much worse: for some systems which governments have paid to be developed, they don’t actually “own the code” they have paid for… So they are at the mercy of the development companies to get the required changes made… As some of these companies felt aggrieved at the level of money made previously, it’s been estimated that some will “hold to ransom” and ask for similar sums of money.

Thus for an organisation like the UK’s National Health Service (NHS), with something approaching 1 million XP machines, the bill could be in excess of 500USD/seat, or half a billion dollars, in the first year…

Clive Robinson February 23, 2014 4:55 PM

@ COMSEC,

Security by obscurity does work for physical security, and will continue to do so for the foreseeable future.

Where it goes wrong is when people map physical-security ideas that work onto information security, where they sometimes don’t work.

The reason they sometimes work and sometimes don’t is the underlying assumptions or axioms.

If people don’t understand those axioms properly then they will run into a world of hurt sooner rather than later.

Clive Robinson February 23, 2014 6:11 PM

@ Figureitout,

    It’d be nice if you just stated the frequency rather than beating around the bush, btw

Which frequency, that of the IR radiation or that of the pulses?

Both are dependent on what sort of system you are attacking. Most IR diodes are “near IR”, whilst thermal imagers tend to focus on “mid IR”. When pulsing, the frequency to use is often related to the sensor’s refresh rate. (Remember, thermal imagers with high refresh rates are ITAR “dual use” and thus restricted technology.)

Thermal imagers come in a variety of forms, and each has its individual problems or weaknesses you can exploit. But as a general rule you need access to the equipment to experiment and find out which is best… (And thermal imaging devices can often be spotted with other thermal imagers, such is the nature of all active devices, and “heat pumps” often have significant differences from the ambient temperature of their surroundings.)

That said, there is the “blunderbuss” technique of throwing a lot of energy at a thermal imager and either overloading or cooking its front end. One way to do this against uncooled thermal image sensors is with IR laser diodes, though this carries a significant health warning. Medium-power IR CW laser diodes can be difficult to get hold of and difficult to build drive circuits for.

There is however a “backdoor” way to get hold of them pre-built… Green diode lasers are not fundamental-frequency output; what they generally are is a multiplier crystal driven by a much higher-power IR laser diode (i.e. approx 50mW of IR gives maybe 2-3mW of green light out of the crystal). However, to be effective at overloading a thermal imager you need to use a beam collimator to stop undue beam divergence.

Another way to “Fritz” some thermal imagers is with a “black light” heater and crumpled-up aluminium foil as a reflector. Also, CCTV near-IR spotlights can mess with some other thermal imagers.

D Biafore February 23, 2014 6:15 PM

@Clive Robinson
Micro$haft should assume responsibility for their own product. I guess they want to force users to upgrade.

Although it could be worse. Micro$haft users could simply “Go Google”, adopt Android, and get even fewer assurances of any sort of security, integrity, upgradability, or any other “ity”. But as a result it could be less expensive…

Nick P February 23, 2014 8:19 PM

@ COMSEC

It’s called obfuscation, and it’s highly effective if combined with other methods. Here’s an example with two targets; I ask you to tell me which will be easier to attack. The case study is drive encryption.

  1. A system that uses a 256-bit AES key produced from a password using a standard derivation algorithm. Source for the crypto, OS, and firmware is available. The architecture is x86.
  2. A system running unknown chips, firmware, OS, and crypto software. The crypto uses proven algorithms, but which ones and in what combination are unknown. It might also partly derive the key from something on the device.

So, your goal is a stealthy, remote exploit. Target 1 uses open standards and software. TrueCrypt on Linux on x86 has been beaten in many ways, so we already know target 1 is at risk. Target 2 is my crypto strategy. The attacker doesn’t even know where to start. They will need physical possession or some other 0-day, otherwise they risk detection. This “obscurity” is beneficial enough that it’s mandatory for NSA Suite A algorithms.

Figureitout February 23, 2014 9:12 PM

Clive Robinson
–Why not both? And to be clear, I’m not attacking, I’m defending from an initial attack; call it a counterattack or whatever, but it’s an attack after being attacked. Me too, I’m an experimenter; I experiment almost 24/7, or am thinking about a new experiment I want to try. Sometimes it leads me to stupid things like piercing batteries when it clearly says “Do Not Pierce, Dumbf*ck”; b/c I wanted to see the battery cells. All that happened was a puff of smoke (likely some cancerous, dangerous smoke) and a very hot battery. I don’t like chemistry experimenting b/c I don’t want to find some poisonous gas or an explosion. Other times it leads me to finding a reliable method for pleasuring females that crosses all racial/personality barriers; it’s a simple physical algorithm that I’m sure you’ve mastered yourself. 🙂

So getting physical access to equipment is out of the question, as I’m done doing illegal things (well, breaking into places); I’m not going to get them prebuilt, and I’m not going to try it when I know I’m being watched. One thing I do want to try is the IR receivers above stoplights, in the middle of the night when no one is around so there’s no risk to anyone. I finally got to see one of those systems in action, and they’re at a lot of stoplights… Microwaves are cooler and more reliable IMO, but IR is cool b/c it’s different.

COMSEC
–Nick P’s right. I don’t want to get involved in a big argument about obscurity vs. openness and security. Security is about obscuring info from an attacker, plain and simple. Gaining trust is about openness. So a closed security system could be very secure, but a lot of people won’t trust it b/c they would have to start cracking it to verify it. If you know what algorithm it is, you’ve got a nice head start.

Based on some router firmware analysis being done at devttys0.com, we can see some systems are so trivially insecure that even a rookie like myself could’ve caught an XOR with 255. The fact that a company like Cisco is doing that, though, means either the company is going in the sh*tter or that it is borderline criminal negligence.

One thing he messed up a little is “unknown firmware, chips, OS, and crypto”, b/c it will be known to at least one person besides himself. Having to discover a security system “fresh”, as opposed to “known”, is a very different experience.

Figureitout February 23, 2014 9:23 PM

Ok, so I’ve been talking a lot lately about “building a computer” from scratch as a way to combat this onslaught of exploits against all our technology. Well, I think it would be a better resource to lay out very clearly, from day 1, what to do to get secure technology. What we need, when I get the time I want, is a guide or a manual for people who find their computers are all infected; which means your router, which means your MAC addresses, your email addresses, maybe your real identity and physical address, financial information, the type of car you drive, and on and on.

So what we need is a guide to restoring sanity and security for those who have been attacked by evil people. Otherwise what we are going to see is that all the side channels that have been opened will simply let an attacker into your new system.

This is one of my many thoughts, but in a sense you may have to take on a whole new identity or trash all of your old life to be sure. Which leads to a life out in the country… Seems like another one of my impossible thoughts, but this is reality. What keeps popping up in my head is a slicing sound, to cut out the attackers and get a clean slate; and all the money and trusted associates it will take to achieve…

Nick P February 23, 2014 10:36 PM

@ figureitout

“One thing he messed up a little is ‘unknown firmware, chips, OS, and crypto’, b/c it will be known to at least one person besides himself. Having to discover a security system ‘fresh’, as opposed to ‘known’, is a very different experience.”

I was talking about arbitrary black hats. There are a number of people still in the threat profile who might have that knowledge. However, they have that knowledge for the open system too, so nothing changes there. Also, remember I am an advocate of the A1-class system generation requirement: A1 systems were produced from code and vetted tools on-site. My obfuscations might be automated randomizations and such, even the choice of OS, that run during customer install. So even the customer or the company providing the software might not know what’s inside the final image. Chips supporting memory and disk encryption might make it near impossible. So, it is possible to have a system that’s a combination of proven components without knowing anything about the system. Cool, huh?

(Note: I’d also rather not know anything about the system. Showing my torturer that my designs are zero knowledge and that they’ll need physical access might spare me some pain. It might prevent an attack on me period.)

Figureitout February 23, 2014 11:11 PM

Nick P
–Yeah, it’s cool, huh. But I want to see it w/ my eyes and see how it actually works, what the designs are, where it’s being made, how, etc. B/c I don’t get how you create something that suddenly becomes hidden… And, well, I’m talking about active-agent attacks that live in your neighborhood and have legal rights to break into your home. If we can defeat those threats, the blackest of hats don’t concern me at all. And these threats continue to torture me, skirting all legal concerns and doing nothing w/ regard to national security.

Being attacked on the internet means jack to me, just like the attacks at my school that shut down my computer twice, unprovoked, while I was working on calculus Maple computer projects; obviously it’s the DELL backdoors and the insecure network at my school, combined w/ attacking my home internet, that allowed that attack. It’s the hardware implants, physical break-ins, TEMPEST attacks, and then physically following me and trying to befriend me to get clues that gets to me slightly.

Here, I’m going to launch another test for the agents. I’m wiping a HDD again, extremely hard; it’s taking like 150 hours or something, I don’t know. Let’s see if the agents break into my home tomorrow again and try to mess w/ it. Losers, I’ll have a sensor set up to detect if they do. I’m guessing they have nothing better to do.

Clive Robinson February 24, 2014 1:38 AM

@ Nick P,

    So, it is possible to have a system that’s a combination of proven components without knowing anything about the system. Cool, huh?

Hmm…

Twitch of curtain stage left, and a stage-whisper voice saying “how do you test it?”

The problem is one of any system being built: the engineering approach is to test individual components and characterise them, then, knowing the limitations of each component, move forward with the design.

The problem however is “stability” and “response”, especially where any feedback or feedforward path exists.

A simple example is in power supply design for intrinsic safety. You are required to design the system such that the failure of any two components does not cause the system to become potentially dangerous. The usual solution is to use multiple resistors in series and multiple Zener diodes in parallel, where you open-circuit or short components in your arguments for safety; however it’s grossly inefficient and unworkable for more than very, very simple systems. For active power supplies you use two current regulators and two voltage regulators. Unfortunately these contain inherent feedback or feedforward loops, which will fight each other if not designed to stop it. The problem is that you need a fast response time but also stability, to prevent oscillation etc. This is not a simple engineering problem. One thing you find is that the order you put components in series matters: that is, voltage regulator A and regulator B have to be in the AB order; put them in the BA order and the overall system oscillates.

Thus testing components on their own will not tell you the result of putting multiple components together in random order… So you have to fully test the final system for all edge cases, and that means having test points within the system, which in the case of security can allow information to leak.

It looks at first sight like a Catch-22 situation… and it is, unless you know the hidden assumptions exist and what they are… but to do that you first have to find them and then characterise them, which means you have to test them…

COMSEC February 24, 2014 2:39 AM

OK, I agree that adding obscurity will give you a stronger system, or at least one that’s not less secure, all other things being equal. But the underlying system is a secure one, and that is what gives you most of the security. The obscurity layer gives you an extra bump in security, but presumably not very much, right? And that’s what would have been leaked here, not any keys to the underlying system, which means most of the security would have been preserved.

However, I’d missed one thing, and that’s the “tamper evident” bit you mention. I see how obscurity might help a lot there.

Clive Robinson February 24, 2014 4:22 AM

@ COMSEC,

    The obscurity layer gives you an extra bump in security, but presumably not very much, right?

You’d be surprised; it all depends on how you see things.

Think of the type of opponent: whilst obscurity will be marginal against a significantly resourced and determined opponent who has only you in their sights, it’s rare to be in that position.

At the other, more frequent, end you are just one of millions in a very target-rich environment; any likely attacker is going to follow the principle of “lowest-hanging fruit” and, at the slightest resistance, try the next door along. Thus to that attacker a door that sticks is as secure against them as that of the best vault in the world that is fully locked.

In the middle you have those who use information to target those they will apply their resources to attack. If you obscure how the world sees you, in either direction, you are likely to be left alone.

That is, no matter how rich you are, if you live like a tramp/bum, live and socialise with tramps/bums, and take care to hide any signs of wealth, then you are not going to be attacked by sophisticated crooks. Likewise, you might not have anything worth stealing or being attacked for, but if you give the appearance of having defenses beyond the attacker’s abilities they will probably leave you alone.

It’s this latter principle most countries use to prevent themselves being attacked by other, usually adjacent, hostile nations.

But there are also a couple of other options. The first is the biblical “salted earth” option, which you can use when a potential attacker knows that you have something they want and you cannot fake sufficient resources to deter them from attacking you. You make sure they believe beyond doubt that, rather than fight them, you will devote your energies to destroying what they are after; thus although they will win, it will be a Pyrrhic victory.

The second option is called MAD at one end and terrorism at the other, and is an extension of the “salted earth” option. This is where you make it clear that you do not care about anything you own or anything the opponent owns: once hostilities start, you will not stop until there is nothing left to fight for or with.

We know in reality that none of these options is representative of the true reality, just the one you paint in your potential opponent’s head. And what is not immediately obvious is how important the options are; the simple fact is all human social interaction works on variations of them.

Thus obscurity/obfuscation are one of the bedrocks of our existence, and the principle behind which markets work in reality. If any party develops the ability to strip them away, society ceases to exist and the world in effect becomes fully deterministic, which is very undesirable.

Evan February 24, 2014 5:41 AM

@Nick P:
The advantage of using open standards, at least in theory, is that publicly available encryption algorithms are reviewed and vetted by a number of experts before they pass muster. I’m no cryptographer, but I think I have enough understanding that I could design my own substitution cipher and write the code for it, but what are the odds it would actually provide better security than existing options for a given key length? We know, for example, that the S-boxes in DES are more secure than most random configurations would be, partly because of NSA involvement in the process. Rolling your own might help you avoid being caught in data-collection dragnets as easily, but targeted attacks on your system or data will be easier.

One way, I suppose, to think of a “good” crypto algorithm is that it minimizes the effectiveness of knowing other characteristics of the system it’s running on, because it leaks as little information about intermediate states as possible.

altjira February 24, 2014 8:06 AM

I’m late to the Friday night post, so I don’t know if anyone will see this, but I just got my latest Pipeline & Gas Journal and there is a series of articles on cybersecurity. I’m guessing these are by pipeline SCADA pros for other pipeline people, so I would be interested in exposing them to the critical eye of you folks.

Preventing Network Security Threats at Sub-Contractor Level

Defense in Depth: Reliable Security

What Managers Should Know about Pipeline SCADA Cybersecurity

Cybersecurity: How Much is Enough?

Benefits of Network Level Security at RTU Level

Autolykos February 24, 2014 8:20 AM

@Clive Robinson:
I think the code snippet is pretty indicative of bad coding habits, and the bug could’ve been prevented or easily found by any of the following:

  • Using return or throw instead of goto.
    Then the faulty code would always have ended with an error, and wouldn’t make it past the first test. I’m not completely opposed to goto, though. It may be the bulldozer of programming, but sometimes a bulldozer is just the right tool for the job.

  • Only leaving out braces when the statement is written on the same line, and always putting the opening brace on the same line as the if (K&R style).
    That way, the code will either still be correct when you comment out a condition (for one-liners) or fail to compile because of the excess closing brace.

  • Using the control structure designed for the job, namely switch and case.
    You need to make sure you always finish with break or return, though, leading some people to prefer chained else if (which would also expose the error).

Autolykos February 24, 2014 8:34 AM

Cut out the piece about switch/case; that’s probably not useful for what they intended with their code. The clean way would be to refactor the if clauses into a separate function and use return or throw.
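
Something like this, say (a sketch only; the function names are stand-ins, not Apple’s):

extern int hash_server_params(void);
extern int verify_signature(void);

/* Each check returns 0 on success; the caller gets one verdict and
 * all the goto plumbing and the shared fail label disappear. */
static int verify_exchange(void)
{
    int err;
    if ((err = hash_server_params()) != 0)
        return err;
    if ((err = verify_signature()) != 0)
        return err;
    return 0;   /* success only when every check passed */
}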

Return does not fix February 24, 2014 10:39 AM

int foo(void)
{
    int err = 0;
    if ((err = do_stuff()) != 0)
        return err;
        return err;   /* the doubled line: always taken, err is 0 here */
    if ((err = do_more_stuff()) != 0)   /* unreachable from here down */
        return err;
    /* more stuff… */
}

Still buggy.

Nick P February 24, 2014 10:40 AM

@ COMSEC

“And that’s what would have been leaked here, not any keys to the underlying system, which means most of the security would have been preserved.”

In the NSA’s case, they keep the algorithm or protocol they use secret. That’s it. If it were exposed, a hack might be easier. In the case of my design, you’re right that leaking its details wouldn’t compromise the crypto part. However, someone with a 0-day might like to know what OS or software stack it’s running. That could help them plenty.

@ Evan

“The advantage with using open standards, at least in theory, is that publicly available encryption algorithms are reviewed and vetted by a number of experts before they pass muster. ”

Yes. That’s the main advantage.

” but targeted attacks on your system or data will be easier.”

That’s far from the truth. If you use something they’re unfamiliar with, targeted attacks are harder rather than easier. Additionally, the example you gave of rolling my own crypto algorithms isn’t in my posts, as it requires domain knowledge I don’t have. My obfuscation approach was combining several proven primitives in safe ways. Then the specific primitives are randomly chosen per system. That’s much easier to get right.

“One way, I suppose, to think of a “good” crypto algorithm is that it minimizes the effectiveness of knowing other characteristics of the system it’s running on, because it leaks as little information about intermediate states as possible.”

It should leak little to no information about intermediate states. However, the purpose of an encryption algorithm is to render unreadable the plaintext you put into it. “Knowing other characteristics of the system it’s running on” is outside the scope of an encryption algorithm. An example of such a technique might be my putting a gateway in front of a system that filters out anything useful to OS fingerprinting or network recon tools. It might make a Windows box appear to be Solaris. That will give attackers plenty of time to waste. 🙂

Note: Encryption algorithms can be used to obscure OS info, but the algorithm itself isn’t designed for that. Examples include Aegis, SecureCore, SecureME, and CODESEAL architectures I previously posted here.

@ Clive Robinson

Nice points, but those mainly apply to a whole engineered solution. My technique is only used on a small, simple part in software. You already forgot an easy solution to such a problem: black-box functions with equivalent input and output behavior, written in type-safe code. Constructing the primitives as such makes combining them that much easier. Each doing the same thing and running in a black-box fashion lets them be swapped out without a risk of any damage to the program. The type system helps catch the more obvious errors that can happen when auto-generating or integrating code.
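
A sketch of the shape I mean, in C for concreteness; everything here is illustrative (the stand-in bodies just make the sketch compile), not from any product:

#include <stddef.h>
#include <stdio.h>

/* Every primitive presents the same black-box contract, so system
 * generation can pick any entry without touching the callers. */
typedef int (*encrypt_fn)(const unsigned char *key,
                          const unsigned char *in,
                          unsigned char *out, size_t len);

struct cipher {
    const char *name;
    encrypt_fn  encrypt;
};

/* Stand-in bodies; a real build would link proven implementations
 * behind this same signature. */
static int aes256_encrypt(const unsigned char *key, const unsigned char *in,
                          unsigned char *out, size_t len)
{ (void)key; for (size_t i = 0; i < len; i++) out[i] = in[i]; return 0; }

static int serpent_encrypt(const unsigned char *key, const unsigned char *in,
                           unsigned char *out, size_t len)
{ (void)key; for (size_t i = 0; i < len; i++) out[i] = in[i]; return 0; }

static const struct cipher ciphers[] = {
    { "aes256",  aes256_encrypt  },
    { "serpent", serpent_encrypt },
};

int main(void)
{
    /* At system-generation time one entry is picked (e.g. at random);
     * callers only ever see the common interface. */
    const struct cipher *chosen = &ciphers[0];
    printf("using %s\n", chosen->name);
    return 0;
}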

I do confess, though, that most systems I built this way I did by hand and only automated some algorithm choices/configuration. 😉

yesme February 24, 2014 12:05 PM

@Autolykos and @Return does not fix

We all know that C has some pitfalls.

However, the main problem here is Quality Control.

Clive Robinson February 24, 2014 8:08 PM

@ Knott Whittingley,

    … it looked like my first try disappeared.

In the past I’d have said “check it on the ‘new comments’ page”. However, just recently the following has appeared at the top of the page:

    Note: new comments may take a few minutes to appear on this page.

Which suggests something has recently put an increased load on the server.

Knott Whittingley February 24, 2014 9:40 PM

Clive,

Thanks, but it was a bit more subtle than that… I would have waited a few minutes, but it looked like my browser was simply flaking out, and I thought the action simply hadn’t been processed before it flaked.

It’s still good advice to wait a few minutes and see, though, and I should have done that.

Aspie February 24, 2014 11:46 PM

For interested parties in the UK, the “government” is launching the first (publicly admitted) patrol drone; the article is here.

At nearly £16m apiece – apparently the UK bought 54 of them – the hydraulics stuck under the price will help fund a few extra yachts. Drones in the US typically fetch around $8m each at the upper end. Unless these are gold-plated, I can’t see why the price is more than double.

Since the UK has only 20% of all the CCTV cameras on earth, it’s easy to see why the powers that be think that’s really not enough. Might as well stick a few in the sky as well. At least that way the footage can be accessed directly by Cheltenham and MI(n) rather than faffing around with that pesky legal stuff.

This shit is totally out of control here in the UK. I’m wondering if it’s possible to focus an EMP at a distance …

Autolykos February 25, 2014 2:31 AM

@Return does not fix:
It does if used correctly: ending your function with return 0; only after everything has completed without problems. If you absolutely need to initialize err beforehand, set it to an invalid value – it should never, ever be zero, and doesn’t need to be. Since you need a return at the end anyway, that (or nested ifs) is pretty much the only safe and sane way to do it IMHO.
But yeah, that piece of code is so staggeringly ugly and full of bad habits that I never expected to see anything like it in actual production code made by a skilled programmer. Even writing it at 3 o’clock on a Saturday morning after a long, long week is no excuse for this.
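
In other words, a sketch of the shape I mean (do_stuff()/do_more_stuff() as in the snippet above):

extern int do_stuff(void);
extern int do_more_stuff(void);

int foo(void)
{
    int err = -1;                 /* invalid until proven otherwise */
    if ((err = do_stuff()) != 0)
        return err;
    if ((err = do_more_stuff()) != 0)
        return err;
    /* more stuff… */
    return 0;                     /* the only place success is reported */
}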

Clive Robinson February 25, 2014 2:34 AM

@ Aspie,

    At least that way the footage can be accessed directly by Cheltenham and MI(n) rather than faffing around with that pesky legal stuff.

The two other places you were probably trying to remember are “Hanslope Park” and “Vauxhall Cross”.

I don’t know if you live in the UK or not, but last night I popped around to see someone and they had the TV on, which his missus was watching, on BBC1. On it there was a game show called “Pointless”; after the first round of questions they get contestants to say where they come from and what they do. So there is this pair of lads, one of whom said he was from Cheltenham and used to be a journalist but was now a civil servant. When the show host asked him what he did as a civil servant, he made the big mistake of saying “I’m not allowed to say”, at which point the co-host said “Have you noticed that whenever we have spies on they always go on podium four?” And the host and co-host carried on with banter about spies for a minute or so, and the camera showed the contestant’s face and he looked mortified…

I’m assuming that as he was once a journalist he’s now an analyst at GCHQ… And I wonder what sort of a reception he’s going to get in the office today.

As for,

    I’m wondering if it’s possible to focus an EMP at a distance …

The answer is much the same as it would be for a laser: yes, but you’d need an appropriately scaled collimator to handle the power and bandwidth (which for an EMP pulse is close to DC-to-daylight), which would make it very large… it probably would not fit in the GCHQ “doughnut”. Thus I’d stick to a HERF solution up above 30GHz, or a CO2 laser, of which the latter would be easier to source.

Aspie February 25, 2014 5:14 AM

@Clive
Hah. Thanks for that. A good morning guffaw – essential to the health, I feel.

Thing is, the dolt had tons of time, not just on set but beforehand, to make up something plausible. Perhaps he wanted to appear “mysterious” to big up an otherwise dull analysis job. Great to see Xander and Richard ripping the sh1t out of him.

Other locations: yeah, although Vauxhall Cross is hardly unrecognisable from its appearance in 007 films. Hanslope Park might be a repository for old-timey MI3 ops.

Do you mean a collimator? I’m also thinking of something akin to a waveguide for a cheap MASER – not looking to damage, just discourage. Even a reasonably powerful IR laser aimed directly at a camera’s CCD array can briefly overload and blank it out – I seem to recall this has been done “professionally” to temporarily blind cameras.

A CO2 laser might be overkill (in some ways, literally) – and they require tricky glass envelopes and Brewster windows at either end. (Misspent youth building lasers.)

Ben February 25, 2014 6:33 AM

@AC, @Autolykos, @Clive

Unforgivable, but not because of goto.

  1. Should have raised a warning: “unreachable code detected”.
  2. Should have set warnings-as-errors.
  3. Should have used a lint/StyleCop tool.

Autolykos February 25, 2014 7:34 AM

@Ben: Most compilers don’t raise “unreachable code” warnings anymore, even with -Wall or its equivalent. At least gcc and clang don’t, and I suspect MSVC doesn’t either. The option in gcc is still there, btw; it just doesn’t do anything.
But just sticking to a good and safe coding style, with or without a program enforcing it, already counts for a lot. I personally prefer K&R (for the reasons outlined in my post above), but that’s largely a matter of taste (and a source of holy wars), and most styles work quite well for preventing mistakes if applied consistently.

Nick P February 25, 2014 2:00 PM

How Covert Agents Infiltrate the Internet to Manipulate, Deceive, and Destroy Reputations

(aka a closer look at Skeptical’s employer and playbook 😛 )

https://firstlook.org/theintercept/2014/02/24/jtrig-manipulation/

Fun reading. I think the existence of online disinformation programs, sabotage of source material, etc. considerably weakens the opposition’s position in the Snowden debate. One side is trying to get documents to find the truth, or simply to argue their opinions. The other side, the govt side, is employing dirty tricks against opponents, their evidence, and their businesses to push them in a certain direction. One side is inherently more trustworthy, and the other should be put under a microscope by Congress.

It’s simple: you can’t trust people who are actively involved in disinformation and sabotage campaigns against US citizens without real oversight. They see lies as a necessary part of doing their business. They’ll just tell more lies if people are merely asking questions.

Nick P February 25, 2014 2:08 PM

@ Autolykos

How about starting with a coding standard like MISRA C, CERT C, JSF, etc.? They’re designed to prevent many problems. Exceptions can always be made if they’re necessary for the use case, legacy software, etc.

Anura February 25, 2014 2:53 PM

http://www.businessweek.com/articles/2014-02-21/neiman-marcus-hackers-set-off-60-000-alerts-while-bagging-credit-card-data

Ginger Reeder, a spokeswoman for Neiman Marcus, says the hackers were sophisticated, giving their software a name nearly identical to the company’s payment software, so any alerts would go unnoticed amid the deluge of data routinely reviewed by the company’s security team.

“These 60,000 entries, which occurred over a three-and-a-half month period, would have been on average around 1 percent or less of the daily entries on these endpoint protection logs, which have tens of thousands of entries every day,” Reeder says.

This is the problem with alerts becoming routine: if unusual activity is not distinguished from usual activity, you might as well not have monitoring at all.
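
The naming trick (malware labelled almost identically to the legitimate payment software) is exactly the pattern a monitoring pipeline could single out. A minimal sketch, with an invented allowlist and invented binary names: an exact allowlist match is fine, an unknown name is a warning, and a near-match to an allowlisted name is the loudest alert of all, since “looks like us but isn’t us” is a classic masquerading tactic.

    /* masquerade.c -- illustrative sketch, not a production tool.
     * Flags executable names that are close to, but not exactly,
     * an allowlisted name. Names here are invented for the example;
     * edit_distance() assumes short strings (< 64 chars).
     */
    #include <stdio.h>
    #include <string.h>

    #define MIN3(a,b,c) ((a)<(b) ? ((a)<(c)?(a):(c)) : ((b)<(c)?(b):(c)))

    /* Levenshtein edit distance between two short strings. */
    static size_t edit_distance(const char *s, const char *t)
    {
        size_t m = strlen(s), n = strlen(t), d[64][64];
        for (size_t i = 0; i <= m; i++) d[i][0] = i;
        for (size_t j = 0; j <= n; j++) d[0][j] = j;
        for (size_t i = 1; i <= m; i++)
            for (size_t j = 1; j <= n; j++) {
                size_t cost = (s[i-1] == t[j-1]) ? 0 : 1;
                d[i][j] = MIN3(d[i-1][j] + 1, d[i][j-1] + 1,
                               d[i-1][j-1] + cost);
            }
        return d[m][n];
    }

    int main(void)
    {
        const char *allow[] = { "payproc.exe", "posagent.exe" };
        const char *seen[]  = { "payproc.exe", "payproc1.exe", "evil.exe" };

        for (size_t i = 0; i < sizeof seen / sizeof *seen; i++) {
            int exact = 0;
            size_t best = (size_t)-1;
            for (size_t j = 0; j < sizeof allow / sizeof *allow; j++) {
                if (strcmp(seen[i], allow[j]) == 0) exact = 1;
                size_t dst = edit_distance(seen[i], allow[j]);
                if (dst < best) best = dst;
            }
            if (exact)
                printf("%-14s ok (allowlisted)\n", seen[i]);
            else if (best <= 2)
                printf("%-14s ALERT: near-match to allowlist "
                       "(distance %zu), likely masquerading\n",
                       seen[i], best);
            else
                printf("%-14s warn: unknown binary\n", seen[i]);
        }
        return 0;
    }

The particular heuristic matters less than the principle: 60,000 near-identical entries only drown an analyst if the pipeline has no notion of which entries deserve to stand out.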

Skeptical February 25, 2014 6:28 PM

@Nick P: Hah! If I were intent on manipulation the last thing I would do is engage in open discussion. And if I were employed by any of the organizations at issue I certainly wouldn’t be commenting about these matters on a blog.

The Intercept article is mostly sensationalist paranoia that coasts on some very thin speculation. Essentially it describes documents that seem to speculatively rehearse the usual bag of “dirty tricks” for causing problems within an organization or for particular persons. Big deal. Parts of intelligence services are paid to do that type of thing.

What would be big news is if GCHQ or NSA were using such tactics against political organizations in the US or Britain.

But any evidence of that is completely lacking from the article. Instead we’re left with this bit of breathless prose:

Claims that government agencies are infiltrating online communities and engaging in “false flag operations” to discredit targets are often dismissed as conspiracy theories, but these documents leave no doubt they are doing precisely that.

Of course the idea that governments infiltrate terrorist sites and chat rooms isn’t “conspiracy theory” territory at all. It’s expected. So Greenwald must be referring to the idea that governments are infiltrating sites like Huffington Post, or the comments section of The Guardian, or something, to do these things.

It’s vintage Greenwald. Open with sensationalist implications, and then force the reader to walk through agitated prose to finally discover, at the other side, that the underlying facts are a lot less interesting.

Perhaps there will be a follow-up piece. New documents reveal military considering ways of killing people.

In any event, if Greenwald could write an article that respected the reader enough to weigh the facts and possible analyses on his own, without the loaded phrases and the strident tone of a legal brief, I think he’d be producing much higher quality stuff.

To Skeptical February 26, 2014 4:29 AM

Well, the police have been using such things in Britain. There have been several cases of police spies infiltrating non-violent political protest groups. Search for “Peter Francis”, one of the police officers who blew the whistle. There was at least a second one who exposed these things, IIRC. The police went from denial, to claiming it was unauthorized, to claiming it was only an exception, to eventually admitting it was done repeatedly.

The police gave the spies the identities of dead children, got some to have sex with protesters (or let them do so anyway), got the spies to wilfully lie in court, and got some to do criminal damage during protests (well, I guess agents provocateurs are a bit commonplace, but still).

name.withheld.for.obvious.reasons February 26, 2014 7:44 AM

WOT, but necessary to divulge.

After reading the Army’s field manual FM 3-38, it is necessary to address the “theory/doctrine” that underlies the operational behavior it describes. Organizations and individuals responsible for systems management, and those with an interest in information technology, must educate themselves about the implications and risks of a DoD-based operational standard. This is quite problematic: the definitions, authorities, and statutes (largely missing) that would allow this activity and operational discretion are insufficiently established, and that constitutes an abuse of governmental authority. My understanding is that this is a product of PPD 20, issued from the office of the President.

From the manual, Chapter 3, page 3-1, section 3-2, describing ‘Functions of Cyberspace Operations’:
Cyberspace superiority is the degree of dominance in cyberspace by one force that permits the secure, reliable conduct of operations by that force, and its related land, air, maritime, and space forces at a given time and place without prohibitive interference by an adversary (JP 1-02). Such interference is possible because large portions of cyberspace are not under the control of friendly forces. Cyberspace superiority establishes conditions describing friendly force freedom of action while denying this same freedom of action to enemy and adversary actors. Ultimately, Army forces conduct CO to create and achieve effects in support of the commander’s objectives and desired end state.

This is disturbing:

  1. The determination of enemy/adversary: Julian Assange would probably be on the list…
  2. “In or through cyberspace”, not described in this section, is defined as problematic for the DoD, since cyberspace is not under the control of the DoD or friendly forces.
  3. Lacking a geographic boundary, this manual makes cyber warfare operationally legitimate within the United States, including attacking hackers with operational discretion up to and including the use of lethal force.

I’m afraid there are other components of this manual that are just as disturbing, and this gives justification (if one follows the trail of vagary) to NSA collection.

Mike February 26, 2014 10:23 AM

@Skeptical

You say: Hah! If I were intent on manipulation the last thing I would do is engage in open discussion. And if I were employed by any of the organizations at issue I certainly wouldn’t be commenting about these matters on a blog.

That came as a bit of a shock – surely quite the opposite would be true, particularly in relation to the matter being discussed/speculated about?

I mean, irrespective of whether this sort of thing actually goes on, I don’t think there’s any question that if someone were intent on manipulation then they most definitely would engage in open discussion and comment on such matters on a blog like this.

Isn’t that kind of mind-numbingly obvious? Maybe I’m misunderstanding you?

I don’t think I share the views of some people here regarding your intentions, but I’m starting to wonder.

Your remarks could easily be interpreted as someone grappling with a ‘podium four’ problem with one hand tied behind their back.

Skeptical February 26, 2014 12:16 PM

@ToSkeptical: Well, the police have been using such things in Britain. There have been several cases of police spies infiltrating non-violent political protest groups. Search for “Peter Francis”, one of the police officers who blew the whistle. There was at least a second one who exposed these things, IIRC. The police went from denial, to claiming it was unauthorized, to claiming it was only an exception, to eventually admitting it was done repeatedly.

Sure, I can think of instances where undercover police officers acted as agents provocateurs. And that’s unacceptable behavior.

But the documents covered in the Intercept article seem to concern much more thorough manipulation of a group or organization, such as planting false but embarrassing (or perhaps simply divisive) material on a member’s social media page. If these kinds of things were being done to political groups, I think this would be a huge scandal and a serious problem.

But I didn’t see any evidence that the tactics discussed were being used on inappropriate targets. If there were such evidence, then this would be, in my view, an important story.

@Mike: I mean, irrespective of whether this sort of thing actually goes on, I don’t think there’s any question that if someone were intent on manipulation then they most definitely would engage in open discussion and comment on such matters on a blog like this.

It depends on the type of manipulation we’re talking about.

I do think that in-depth discussions can be a good vehicle for getting a better idea of the truth about something, or simply grasping a different perspective, if participants really want to do so. But it’s time-expensive and very limited in impact, and who knows what will emerge at the end of the discussion. It wouldn’t be my choice as a vehicle for manipulating opinion on a subject. And indeed, one rarely sees the people who want to do so engaging in such discussions.

Let me put it another way. An open discussion, ideally, reveals the source code behind different claims and beliefs. If it’s manipulative at all, it’s manipulative in the sense that showing the source code for a program is manipulative of your beliefs about the program. After seeing the code (let’s assume we also know that the code compiles properly into the program at issue), your beliefs might change; the program may seem more trustworthy, less trustworthy, more reliable, less so, etc.

But somehow I don’t think that’s what the people interested in “manipulation” really have in mind. 🙂

To Skeptical February 26, 2014 2:50 PM

When you say “this would be a huge scandal and a serious problem”, do you really mean “would be”, or do you mean “is” ?

One example, from the case I was talking about:

http://news.yahoo.com/uks-cameron-calls-inquiry-police-smears-183015536.html

This kind of thing apparently went on for something on the order of a decade.

And another link about undercover spies burrowing into the lives of peaceful campaigners:

http://www.globalpost.com/dispatch/news/regions/europe/united-kingdom/130711/britain-police-spying-scandal

Mike February 26, 2014 2:50 PM

@Skeptical: OK, I think I understand where you’re coming from now, though I think initial openness in discussion does not necessarily preclude manipulative intent. I have encountered a number of people in my time (in the context of business) with manipulative agendas who made a very convincing and advantageous initial pretence of engaging in open discussion, but achieved their goals by simply failing to respond and/or shutting down when difficulties or inconsistencies eventually arose in their arguments. Granted, that is not a sustainable strategy.

If I were seeking to influence opinion in a forum like this I certainly wouldn’t start out all-guns-blazing/name-calling/sloganizing. Many of the people here, myself included (and perhaps yourself also), are of a highly nerdy disposition and don’t respond well to all-guns-blazing. Softly-softly would surely be the way to go.

However, it occurs to me that our nerdiness probably also excludes us from being considered of any wider social significance, and thus means we are rather unlikely to be anywhere near the top of the list of worthwhile targets for manipulation anyway. The reality is that we’re hardly pivotal societal opinion formers here, right? No one ever listens to nerds, unless they want their computer/gadget/network fixing, or at least that’s what I’m told repeatedly by my normal friends. All those alleged sock-puppet tax dollars/pounds are surely being lavished on manipulating the really important opinion formers out there, such as those who frequent the comment section of the Guardian perhaps 😉

Clive Robinson February 26, 2014 4:18 PM

@ scared,

    What exactly is the improved security in this?

The article is light on technical details, but if I understand what they are saying, the idea is that you have some kind of interface to a remote computer to control its keyboard and mouse inputs; the screen output is then displayed at the remote point to a video camera, and it’s the output of the video camera you see.

At first sight it appears to be a good idea, in that there is no way for malicious code to get back… but appearances can be deceiving…

A number of years ago somebody at the UK’s Cambridge Computing Labs linked up with a startup called, if my old brain remembers correctly, Chronos Technology, to build an “out-of-band” authenticator which worked by the camera in a smartphone looking at a square of dots which acted as data input… I rather abruptly shot the idea down, as the bandwidth of the video signal was enormous and exploitable.

The startup did not believe it was a problem; I believed it was. As it turns out, in essence there is very little difference between their custom grid of coloured spots and a QR code. So I was right and they were wrong: meaningful data / code could cross via the video image alone.

So we actually know from QR codes that data can be sent across video signals… so this system is potentially broken by this fact…
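
To put rough numbers on it: a single version-40 QR code carries close to 3 KB in byte mode at low error correction, so even a screen-to-camera link refreshing a few codes per second leaks on the order of tens of kilobytes per second. A minimal sketch using the libqrencode library (the payload and the ASCII rendering are invented for illustration; a real exfiltrator would render frame after frame):

    /* qrleak.c -- illustrative sketch: any screen-to-camera link is a
     * data channel. Encodes a payload as a QR code and prints it as
     * ASCII art. Build (assuming libqrencode is installed):
     *   cc qrleak.c -lqrencode
     */
    #include <stdio.h>
    #include <qrencode.h>

    int main(void)
    {
        const char *payload = "frame 0001: secret bytes go here";

        /* version 0 = smallest version that fits; low error correction
         * maximises capacity per frame. */
        QRcode *qr = QRcode_encodeString(payload, 0, QR_ECLEVEL_L,
                                         QR_MODE_8, 1);
        if (qr == NULL) {
            fputs("QR encode failed\n", stderr);
            return 1;
        }

        /* Bit 0 of each byte in qr->data is the module colour. */
        for (int y = 0; y < qr->width; y++) {
            for (int x = 0; x < qr->width; x++) {
                fputs((qr->data[y * qr->width + x] & 1) ? "##" : "  ",
                      stdout);
            }
            putchar('\n');
        }

        QRcode_free(qr);
        return 0;
    }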


rj07thomas February 27, 2014 8:11 AM

@AC, @Anura, @Clive Robinson: I’ve only recently had to ditch a desktop I built that was based on an AOpen AX-4GE Max motherboard. The motherboard failed, which was a pity, tho’ it was 10 years old or so, so not really that surprising.

One of the features it boasted was a DieHard BIOS: not a tiny Bruce Willis on the motherboard, simply a write-protected copy of the BIOS on another chip. This was, as @AC was looking for, controlled by a simple jumper: position A selected the normal BIOS, position B the DieHard chip. I think I used it once for some reason, but I was never aware of getting a flash ROM infection. It looks like ASUS has taken over motherboards from AOpen now, but I can’t see whether they still sell boards with these chips on.

Benni February 27, 2014 10:16 AM

Ever heard of the GCHQ program Optic Nerve?
Newest from the Guardian:
http://www.theguardian.com/world/2014/feb/27/gchq-nsa-webcam-images-internet-yahoo

The program “Optic Nerve” run by GCHQ covertly saved webcam images from 1.8 million users in six months.

Sexually explicit webcam material proved to be a particular problem for GCHQ, as one document delicately put it: “Unfortunately … it would appear that a surprising number of people use webcam conversations to show intimate parts of their body to the other person. it appears sometimes to be used for broadcasting pornography.” 

Benni February 27, 2014 4:27 PM

The interesting point about this webcam business is that it must be connected to the recent slides where GCHQ says it would use sexually explicit material to discredit company employees and Taliban members:
https://www.eff.org/document/07252014-nbc-gchq-honey-trap-cyber-attack-2
https://www.eff.org/document/20140224-intercept-training-covert-online-operations
https://www.eff.org/document/20140218-intercept-gchq-sigdev
https://www.eff.org/document/07022014-nbc-gchq-honey-trap-cyber-attack
So they get their sexually explicit material by setting up a honey trap with a prostitute and a webcam chat, and then they give it to the target’s Taliban colleagues, or they blackmail a software engineer, telling him they will give the pictures to his wife if he does not introduce a certain line of code into a crypto API…

Clive Robinson February 28, 2014 2:43 AM

@ Nick P,

I don’t know if you’ve seen this,

http://www.washingtonpost.com/blogs/wonkblog/wp/2014/02/27/the-incredible-stock-picking-ability-of-sec-employees/

Basically, SEC employees appear to be “insider trading”, judging by their “sell patterns”… but the “official reason” it’s all OK is a hoot.

It also raises a “side channel” question: if you could get timely access to information on SEC employee sales, you could use it to your very considerable advantage…

name.withheld.for.obvious.reasons February 28, 2014 9:14 AM

Wonder how SCOTUS is reacting to being spied upon by the public–tit-for-tat I say. Besides, what’s there to see in the failed institution?

Nick P February 28, 2014 12:53 PM

@ Clive Robinson

That’s some good work by the researchers. I’m entirely unsurprised as there is plenty of financial corruption in US govt. That they are doing trades that SEC should know about, yet ignores, is surprising. I would have thought only top people at SEC could get away with that.

The thing about our system, Clive, is that there are rules about what corruption is permissible and what isn’t. For instance, taking a bribe to pass a law is a punishable offense. However, taking campaign contributions and then passing a law that benefits those contributors is effectively legal, and standard operating procedure for US lawmakers. The corruption that’s allowed also usually involves top elected officials and business, not middle-level people.

So, that SEC isn’t acting on the information is unusual unless the people making these trades are high up. In that case, SEC ignoring the information would be entirely typical. 🙂
