Comments

Dan January 18, 2013 8:27 AM

Great commentary. Not to mention that the FBI will have a hard time preventing the use of open source software that circumvents any mandated wiretap capabilities anyway – so all they’d be doing is providing (yet) another potential exploit opportunity in the commercial products they can legislate, as well as driving manufacturers out of the USA.

Robin Wilton January 18, 2013 9:05 AM

In reply to Dan’s comment about open source s/w: right… and the same logic applied to attempts to weaken commercially available DES products in the 90s by insisting on partial key escrow. It was obvious that the measure could be simply bypassed. Eventually it died.

Derpmaster January 18, 2013 10:27 AM

Just wait until they outlaw owning a box and the only legal computer is a dumb terminal running high-fee, licensed cloud software that checks for thoughtcrime every 15 minutes.

RH January 18, 2013 10:29 AM

@Derpmaster: just wait until said dumb terminal is embedded in the skulls of every voting citizen in the nation.

FBI-internal-Snafu January 18, 2013 12:29 PM

The base issue is that the FBI would rather legislate back doors than overhaul HR hiring handicaps that were baked in during Hoover’s era.

As is, if a prospective hacker hire is in any way honest about ever using illegal or semi-legal performance-enhancing or recreational drugs, FBI bobby-socks-era rules prevent the hire. From my own experience, that means their pool of top-notch hires is effectively zero. The pride of those obsessive enough to acquire those skills is typically too high for them to seriously bother lying about what they took to stay awake for 30 hours… particularly for the currently very questionable honor of an FBI job.

That ties into another FBI problem. Their workaround to their HR issues has cost them whatever goodwill their LEA role might otherwise have earned them with right-minded members of the hacking scene. That workaround is to not-quite-hire hackers by indefinitely suspending charges against those they manage to spot (either by luck, or because the hackers in question are second-rate). The FBI then uses the threat of those charges to force those they find into working for subsistence wages as contractor-captives.

I suspect part of the FBI’s problem with bungling many cases arises from the fact that those captives are hardly inspired to do their best work.

dragonfrog January 18, 2013 1:35 PM

@Dan “the FBI will have a hard time preventing the use of open source software that circumvents any mandated wiretap capabilities anyway”

They don’t need to prevent it – it will languish unused just fine without the FBI’s help.

I use TextSecure on my phone – it’s a fantastic open source text message encryption program built on the open OTR encryption specification. I know exactly one other user of this software – my wife, who uses it because I pestered her into it. With absolutely every other person I communicate with, TextSecure provides only encrypted storage of text messages at my end – they’re sent in the clear, and stored in the clear on the other person’s phone.

Ben January 18, 2013 3:18 PM

Most people don’t particularly care about encryption and don’t seem to see the harm in the government reading their messages. Their wife or their boss reading them, sure – but even pre-WW2 encryption would stop most of them.

The adoption of encryption in a non-government, non-trivial sense was always driven primarily by businesses that wanted to protect their secrets. Even there, encrypted email tends to be a licensed product attached to specific people for specific purposes rather than the norm. The relaxation of regulations on encryption, and its more widespread adoption, was tied to the expansion of the digital economy.

And, I suspect, encryption isn’t going to become more acceptable until we can demonstrate to companies that they face persistent threats to all unsecured communications. It’s gonna have to start costing people cold hard cash before they’re really interested.

Chrys January 18, 2013 3:34 PM

Does anyone else enjoy the irony of the U.S. Government’s right hand (Congress) intoning ominously about the “threat” of China’s government secretly getting Huawei to place backdoors in its equipment for spying, while the left hand (FBI, NSA, etc.) pounds the table, demanding such capabilities be given to them by legal force?

martinr January 18, 2013 4:17 PM

“The sad truth is that no one knows how to build secure software for the real world.”

That is not true. Building secure software is actually quite achievable. The key is educating both the software architects and the software developers/programmers. If you start out with a crappy design and a crappy implementation, you’ll quickly and persistently waste significant resources on software patch logistics. Designing and programming with a very low error rate is possible, e.g. with bounty schemes like this
http://en.wikipedia.org/wiki/Knuth_reward_check
and it is practical in commercial software development.

Management doesn’t like this, because it slows down the initial shipment of new features (the “Version 1.0 must sell, Version 2.0 must work” kind of software). Management also often prefers hire-and-fire to paying some respect to employees who need a fair amount of training and who will be harder to replace. So what you often see are attempts to make the “million monkey approach to software development” more feasible: “secure libraries”, ASLR, automated code scanning tools, etc., used as a panacea, rather than using code review and code scanning to gauge developer skill levels and plan training accordingly.

Changing a piece of code just because a code scanner complains is monkey business. Rewriting a piece of code so that it becomes easily readable, efficient and robust – to the point where the code scanner’s findings are 99% false positives – is what happens as a by-product of educated engineering.

MingoV January 18, 2013 4:51 PM

The FBI refuses to release statistics on what proportion of its wire taps provided the primary evidence that led to arrests and convictions (for serious crimes, not crap such as lying to a federal officer). My suspicion is that it is lower than 1 in 500. The FBI is no longer a federal law enforcement agency. It now is a domestic spying agency.

Clive Robinson January 19, 2013 6:39 AM

@ Chrys,

Does anyone else enjoy the irony of the U.S. Government’s…

Yes, but then their spin is “We say we don’t trust their spies, whereas ours are God-fearing…”. There is an old joke that the seal in the floor of the entrance hall of the CIA, which says “In God We Trust”, is incomplete, because the second line is “All others we check”.

@ Martinr,

Building secure software is actually quite achievable.

Err that depends on your definition of “secure” and “achievable”.

Yes, it is very easy to build software that is marginally more “secure” than the typical norm we see today in most commercial applications (the reasons for which you have partially identified). However, I’m not aware of any commercial application that does not have defects/bugs which are potential attack vectors; likewise, I’ve yet to see anything other than trivial code that does not have defects in some form (usually not dealing correctly with input data or exceptions). And we know from MS’s endeavors that taking existing code and re-working it to be more secure is usually harder than starting from scratch.

The simple fact is that the cost of making code secure is too resource-intensive. Let’s say it costs X to get an application 90% secure (by whatever measure). To get it to 99% costs 10X, to 99.9% 100X, and so on; we have known this about bugs since the 1970s, if not considerably earlier. Back then there were reasonable excuses of hard resource limits due to cost, with a byte of core store costing 10-50 cents and similar for storage – it’s why we had the Y2K issue. These days, however, hard resources are not particularly an issue, even in very low cost embedded devices.
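
To spell out the arithmetic in those figures (a back-of-envelope fit, not an established law): if each additional “nine” of security multiplies the cost by ten, then for a defect-removal fraction p,

```latex
% Fit to the figures above: C(0.9) = X, C(0.99) = 10X, C(0.999) = 100X
C(p) = X \cdot \frac{0.1}{1 - p}, \qquad C(p) \to \infty \ \text{as} \ p \to 1
```

On this model a “100% bug free” application has unbounded cost, which is exactly the point.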

The upshot, though, is that the level of security we get is actually based not on what is possible theoretically, or for that matter practically, but on a normal distribution of development costs.

But there is no such thing as absolute security; it does not exist, and we know beyond doubt it’s not possible. From a simple perspective, we only know a small fraction of the ways a system can be deficient. We also know that even though known defects might at first appear not to be exploitable as attack vectors, the human mind is inventive, and as time goes on somebody eventually finds a way to use them as vectors or as parts of vectors.

Derpmaster January 19, 2013 4:09 PM

Any backdoor will be found by criminals, curious hackers and Chinese spy agencies. Deliberately sabotaging your own infrastructure like the telecoms is like handing a foreign spy agency the keys to your country.

Jake January 20, 2013 9:45 AM

This is really “Guilty until proven innocent, and you’re still guilty.”

It’s like saying your local police department requires a master key for all locks sold in the city.

Absurd reasoning. If backdoors are placed in these devices, some people need to be locked up, and there need to be recalls on the infected products!

John Campbell January 20, 2013 2:32 PM

Doesn’t it always come down to the one truly key question underpinning a civilization?

“Who do you trust?”

As I recall, James Burke commented, about the collapse of the original Roman Empire (not the Holy Roman Empire): “Better the barbarian you didn’t know than the tax collector you did.”

Joe Bob January 21, 2013 7:16 AM

“The FBI refuses to release statistics on what proportion of its wire taps provided the primary evidence that led to arrests and convictions (for serious crimes, not crap such as lying to a federal officer). My suspicion is that it is lower than 1 in 500. The FBI is no longer a federal law enforcement agency. It now is a domestic spying agency.”

Hasn’t it always been? They do an incredible job at solving some types of cases, and even the old, super corrupt FBI did a good job at that.

But there has been a big downside to the agency once termed “America’s Gestapo”…

Today is Martin Luther King Jr. Day… FBI, MLK… yeah, they used to wiretap Martin Luther King Jr. illegally. They wiretapped a lot of people illegally. They extorted presidents and senators routinely.

Nobody ever paid for those crimes.

What happened to that FBI?

It is clearly still operational, and still getting away with very serious crimes, as this article speaks of.

Those guys – the dirty cops – they are always organized and they always work with the good cops.

There is money and power in the theft of information, and this is something the FBI has always capitalized on. It is no different than in any other country, however – though I would hope some free nations truly are without such corruption.

We did see a lot of cop-corruption busts in the 70s, and improvements… but when you have these guys still barking for unfair advantages on the global stage… you know all is not right in Kansas.

Joe Bob January 21, 2013 7:27 AM

“Any backdoor will be found by criminals, curious hackers and Chinese spy agencies. Deliberately sabotaging your own infrastructure like the telecoms is like handing a foreign spy agency the keys to your country.”

They don’t care – not the sort that would ask to put backdoors in all software made.

That sort of police officer is the same sort who longs for the nation they are policing to become a police state so they can rule as first citizens.

That sort of cop is usually placated by the bribery of power, though they are directly related to the dirty cop on the street. Both wear the uniform and the badge, both are fakes, Satan in a cop uniform.

martinr January 21, 2013 8:59 AM

@ Clive,

Building secure software is actually quite achievable.

Err that depends on your definition of “secure” and “achievable”.

The simple fact is that the cost of making code secure is too resource-intensive. Let’s say it costs X to get an application 90% secure (by whatever measure). To get it to 99% costs 10X, to 99.9% 100X, and so on; we have known this about bugs since the 1970s, if not considerably earlier.

You don’t seem to have ever encountered high quality code from a skilled programmer.

Widespread misinformation of this kind is the primary reason why we have so much bad code in the first place, and why we still have it today. Its motivation appears to be to uphold the myth that the “million monkey approach to software development” is feasible.

Vulnerabilities are bugs in the software that can be exploited, or bugs in the underlying design that can be exploited. If you look past the monkey-business proponents, you might find approaches like “Fagan inspection” that materially improve code and spec quality with much less effort than you allege. The inspection result can also be used to gauge code quality and programmer skill.

When dealing with code that is the result of monkey business, Fagan inspection works remarkably well and provides a significant ROI.

But when you encounter a really good programmer, he will not only produce more production code per unit time than comparable code monkeys, but you will also find few, if any, defects at all during Fagan inspection.

The key to achieving this is learning what kinds of defects one is likely to produce, adopting a coding style that either avoids those errors or makes them obvious (immediate compiler errors), and learning which particular tests, code reviews and debugging will find the defects that a compiler cannot see.
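
As a minimal C sketch of that kind of style (the specific idioms here are my illustration of the principle, not a prescription from this thread):

```c
#include <stdio.h>

int main(void) {
    int retries = 3;

    /* Error-prone style: a dropped '=' still compiles and silently
     * turns the test into an assignment:
     *     if (retries = 0) { ... }     <- a bug, not a comparison   */

    /* Defensive style: with the constant on the left, the same typo
     * ("0 = retries") is an immediate compiler error.               */
    if (0 == retries) {
        puts("out of retries");
    }

    /* const-qualifying values that must never change turns any later
     * accidental write into a compile-time diagnostic.              */
    const int max_retries = 3;
    /* max_retries = 5;     <- rejected by the compiler              */

    printf("%d of %d retries left\n", retries, max_retries);
    return 0;
}
```

The common thread is arranging the code so the compiler, rather than a late-night code review, catches the mistake.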

For a number of defects I encounter in software from others, it is fairly obvious that the developer who produced it never actually single-stepped through his implementation in a debugger prior to shipment; otherwise he would have seen the bug.

Clive Robinson January 21, 2013 12:38 PM

@ Martinr,

You don’t seem to have ever encountered high quality code from a skilled programmer

I will repeat what I said, but this time emphasise the part you obviously missed:

The simple fact is that the cost of making code secure is too resource-intensive. Let’s say it costs X to get an application 90% secure (by whatever measure). To get it to 99% costs 10X, to 99.9% 100X, and so on; we have known this about bugs since the 1970s, if not considerably earlier.

That is, when you compare apples with apples, not grapefruit, you get this exponential result.

So if I take your best-of-breed programmer and measure his code, I will find defects, be they due to him or to the specification etc.; that is just the way it is with complex projects.

That said, the cost of finding and sorting the bugs in any given programmer’s code goes up effectively exponentially, irrespective of who they are or what methodology is used.

Even if it’s just due to the time involved in finding the bugs, the cost rises as each successively more difficult bug is found, and no matter how hard and long you look, the code never hits 100% bug-free. It might get close, but it won’t make it all the way.

And that is without taking into account as-yet-unknown classes of defects, which, when they are found, will need to be checked for in all existing software.

This exponential effect is, by the way, well known to those practicing in the domains of physics and physical engineering. Many code cutters argue otherwise, but scientifically conducted tests tend to confirm what previous scientific tests have already found; the results do not swing toward gut hunches just because those who hold them wish it to be that way.

If you want to have a smile about this sort of thing, go and look up Function Point Analysis.

Nick P January 21, 2013 1:39 PM

@ martinr

Let me take a stab at this tangent.

“You don’t seem to have ever encountered high quality code from a skilled programmer.”

It’s so rare it almost proves his point for him. Even the Linux kernel, with its low defect count, comes with vulnerabilities. The majority of code seems to fare far worse. The coding should also be done in a way that an assurance argument can be made for it. Most code isn’t. Things can get better: Microsoft’s SDL, for example, dramatically improved the security of their new software. It was also too expensive for most companies to adopt.

It’s good that you mention Fagan inspections. I’ve promoted that old technique many times on this blog. Cleanroom improved on it in the areas of maintenance, predictability, and certified bug counts for warrantied code. (Also interesting: in Cleanroom, developers never run their own code, yet it usually works.) TSP/PSP, Predictable by Construction and Praxis’s Correct by Construction are all empirically proven to greatly improve quality. More methods and successes appear each day.

The problem I see with your arguments is that you mix quality and security. They’re NOT the same thing. Certain quality issues do lead to security issues, and methodologies that reduce those quality issues will reduce security issues. You and I are on the same page on that part: high quality code makes software more resistant to attack. Of course, if that’s all it took for “secure” software, then the Orange Book would have said: this system must use Fagan inspection with the correct set of “defect” classifications and proven use of it. It actually required a little more….

The essence of security is that unauthorized behavior won’t occur. That implies the system will always be in one of a known set of acceptable states. Security issues happen at levels from hardware to the app itself. One must also factor in risks such as malicious developers, repository control, distribution, initialization, configuration, and maintenance of secure state. You can be the best coder in the world. However, if you get any of the rest wrong the code quality will simply be something the attacker enjoys observing as he toys with his new machine. 😉
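
To make “a known set of acceptable states” concrete, here is a toy C sketch of the idea (my generic illustration, not a design from any system mentioned here): an explicit state machine that refuses every transition it was not designed for, instead of limping on in an undefined state.

```c
#include <stdio.h>
#include <stdlib.h>

typedef enum { S_IDLE, S_AUTHED, S_CLOSED } state_t;
typedef enum { E_LOGIN_OK, E_LOGOUT } event_t;

/* Every (state, event) pair the design allows is listed explicitly;
 * anything else falls through to the fail-safe default below.      */
static state_t transition(state_t s, event_t e) {
    if (s == S_IDLE   && e == E_LOGIN_OK) return S_AUTHED;
    if (s == S_AUTHED && e == E_LOGOUT)   return S_CLOSED;

    /* Unauthorized behavior: refuse to continue rather than guess
     * at a state the design never defined.                         */
    fprintf(stderr, "illegal transition: state=%d event=%d\n", s, e);
    abort();
}

int main(void) {
    state_t s = S_IDLE;
    s = transition(s, E_LOGIN_OK);   /* allowed: IDLE -> AUTHED     */
    s = transition(s, E_LOGOUT);     /* allowed: AUTHED -> CLOSED   */
    /* transition(s, E_LOGIN_OK);       would abort: not designed   */
    puts("session closed cleanly");
    return 0;
}
```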

Here’s an old (partial) set of steps toward trustworthy software that I wrote on this blog, mainly focused on preventing subversion. I left out a few steps at the end to make it pertain more to this conversation.

“1. Requirements for a deliverable must be unambiguous and formal.
2. Every high-level design element must correspond to one or more requirements and this should be shown in documentation.
3. The security policy must be unambiguous, compatible with requirements, and be embedded in the design/implementation. Correspondence must be shown.
4. The Low Level implementation modules must correspond with high level design elements, at least one each.
5. The source code must implement the low level design with provably low defect process and avoid risky constructs/libraries.
6. The object code must be shown to correspond to the source code and no security-critical functionality lost during optimizations.
(DO-178B Level A requires this & there are tools to help.)
7. At least one trustworthy, independent evaluator must evaluate the claims and sign the source code to detect later modifications by the developers or repository compromise. This should also be done for updates.”

This was just the software. The TCB it runs on and all libraries it trusts must be secure. If they aren’t, they must be isolated from the main application in a way that contains failures. That’s hard. The app must be securely configured, sanitize all input, use easily parsed protocols/storage, exist only in predefined states, preferably be written in a safe language, and have a fail-safe crash strategy, with logging, for unforeseen errors.
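
A small C sketch of the “sanitize all input, fail safe” point (my generic illustration of the principle, not code from any system discussed here): parse an untrusted numeric string with full error checking and reject anything malformed, instead of trusting it.

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Parse an untrusted decimal string into a TCP port (0-65535).
 * Any malformed, trailing-junk, or out-of-range input returns -1,
 * forcing the caller to handle failure explicitly (fail closed).  */
static long parse_port(const char *input) {
    char *end = NULL;
    errno = 0;
    long v = strtol(input, &end, 10);
    if (errno != 0 || end == input || *end != '\0') return -1; /* junk  */
    if (v < 0 || v > 65535)                         return -1; /* range */
    return v;
}

int main(void) {
    const char *samples[] = { "8080", "99999", "80; rm -rf /", "" };
    for (int i = 0; i < 4; i++) {
        long p = parse_port(samples[i]);
        if (p < 0)
            printf("rejected: \"%s\"\n", samples[i]);
        else
            printf("accepted port %ld\n", p);
    }
    return 0;
}
```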

The high assurance security evaluations often required more. They wanted mathematical specification of requirements, security claims and design. The highest assurance systems wanted mathematical proof of security, correctness, and/or a general assurance argument. They required loop-free layering (rare today), modularity (in style), a strong focus on interfaces (in style), easily analysed implementation constructs (uncommon), extensive testing (in style), pen testing by pro attackers (uncommon to rare today), covert storage/timing channel mitigation (almost nonexistent, although attacks exist), repo software (in style for good devs), physical security of repo + artifacts (uncommon), and independent evaluation of all of this by a trusted, qualified 3rd party (almost nonexistent).

Note that this was security in the Orange Book days. This was what it took to call a software+system combination secure on what were basically time-sharing machines, dumb terminals, and simplified desktops/networks. Things have gotten more complicated and risky since then, although issues are similar. A few follow…

  1. Hardware
    a. Attacks on Intel SMM
    b. Attacks on TXT
    c. Malware in wild using processor errata (per Kaspersky)
    d. DNS subverted because software ignored cosmic ray bit flips
    e. DMA hardware (firewire attack)
    f. Overprivileged or hard to control hardware (e.g. USB HID)
    g. Peripherals’ firmware is programmable and easier to attack
  2. Mainstream Operating Systems
    a. Huge amounts of kernel code. Kernel bugs followed.
    b. Huge amounts of trusted code that modifies OS state.
    c. So bloated that ports to embedded devices are bragworthy.
    d. Quality increased for many over time, but still tons of vulnerabilities.
    e. Still plenty of issues with interfaces and legacy libraries.
    f. They almost totally ignore covert channels.
  3. Middleware
    a. Middleware quality varies considerably.
    b. Documentation and actual behavior are often inconsistent.
    c. Many famous middleware are unnecessarily complicated.
    d. Combining secure code with insecure middleware often = insecure app
  4. Protocols
    a. Technically a form of middleware, but I give special treatment.
    b. Most common protocols designed pre-WWW and have inherent problems.
    c. Many companies ignore superior alternatives to preserve legacy.
    d. Hardcoded protocols eventually have issues and can’t be replaced.
    e. Complex protocols are hard to implement, yet used anyway.
    f. Many attempts at secure protocols are subject to fall-back attacks.
  5. Subversion
    a. Malicious developer allows compromise via clever, small change
    (See Myers on NFS subversion; obfuscated C contest; easter eggs)
    b. App compromised during build process.
    c. App compromised between user and developers.
    c1. Binary modified after build.
    c2. Modified during transmission.
    c3. Search results lead to backdoored versions.
    d. App compromised during installation by misconfig or malice.
    e. TCB compromised, then malware subverts app.
    f. Interactions of various software used to compromise one of them.

This is not a 100% comprehensive post on the issues; I’ve left out the most esoteric stuff, like EMSEC. However, making a secure application involves ensuring the elimination of vulnerabilities across the entire lifecycle. It’s not as easy as a Fagan inspection or a brilliant coder. The more effective assurance arguments require a great deal of specialised expertise, time, and money. There are also many tradeoffs that one must make. The quote below gets to the bottom of why.

“If you look for a one-word synopsis of computer design philosophy, it was and is SHARING. In the security realm, the one word synopsis is SEPARATION: keeping the bad guys away from the good guys’ stuff. [important part] So today, making a computer secure requires imposing a “separation paradigm” on top of an architecture built to share. That is tough! Even when partly successful, the residual problem is going to be covert channels.”
-We Need Assurance, Brian Snow, NSA Technical Director

Old guard tried for decades to build secure systems/software. Certain simple, somewhat specialized systems went unbroken and seem secure. Others were just very hard to attack, limited damage and recovered well. Old principles strained due to issues like DMA and inherent difficulty of securing networked/web environment. The end result, which I promote, is that we can use proven methods to increase quality and assurance of software. We can’t claim the software is “absolutely secure,” as Clive noted. Anyone claiming an impenetrable or bug-free offering should be looked upon with great skepticism.

However, increasing assurance across the board eliminates the low hanging fruit current malware authors enjoy, stops the majority of attackers, reduces overall losses, and helps to gradually reduce everyone’s overall risk over time. Certain companies are taking the lead and producing very robust software. I hope more follow.

examplesPlease January 22, 2013 10:59 AM

@Nick P “Old guard tried for decades to build secure systems/software. […] Certain companies are taking the lead and producing very robust software.”

Can you please take the time to give as complete a list as possible of examples of such past and present projects? (I could name OpenBSD, djbdns rewarding $1000 for bugs, other projects by Daniel J. Bernstein, TeX rewarding 2^BugReportNumber cents for each bug, but I don’t even know if these could be on your list.)

@Nick P “1. Hardware […] 2. Mainstream Operating Systems”

These are mostly local holes; I would call a product with only such bugs a 100% secure system.

@Nick P “3. Middleware”

Can you please define Middleware?

John Campbell January 22, 2013 11:47 AM

@Joe Bob: In your rant about the FBI – regardless of how true it may be – no matter how good a system you have, no matter how competent the checks and balances built into it, ALL systems have the same weakness:

They are operated by Human Beings.

It probably doesn’t help that human beings are all made of meat, either.

Nick P January 22, 2013 2:09 PM

@ examplesPlease

Reflections of 30 year history of key infosec research
http://www.csl.sri.com/~neumann/ieee31.pdf

That paper does a good job on the history. It lists the names of many exemplar secure systems and key pieces of research. Here are examples: BLACKER, SCOMP, KSOS, LOCK, Boeing SNS (still exists), GEMSOS (still exists), XTS-400 STOP OS (still exists), SeaView DBMS, MLS LAN Concept, KeyKOS, VAX Security Kernel, etc. The biggest takeaway, other than cool projects to Google and learn from, is that you continually see this pattern: people would claim more INFOSEC than they could deliver; they failed to get it; they learned important lessons; and the next project ignores those lessons, forgets them, barely tries, or (under 1%) successfully applies them in real software.

Modern examples of the 1% are Native Client, Bodacion Hydra firewall, Secure64’s SourceT, djbdns to a degree, and EAL5-7 vendors’ processes. The MULTOS CA achieved ITSEC E6 using Praxis’ Correct by Construction. Rockwell-Collins AAMP7G processor is secure hardware designed with another process. The Verisoft Project verified a processor, microkernel, base OS, compiler and some apps for functional correctness using formal methods. CompCert and seL4 were formally verified. CapDesk, E Language, and DARPA WebBrowser used capability based security. And so on. I bet most of their development methods look nothing like what you see in the corporate world.

Bell’s (of Bell-Lapadula model) Addendum
http://selfless-security.offthisweek.com/papers/Bell-LBA.pdf

Bell’s paper traces, start to finish, the key parts of high assurance evolution and the loss of those capabilities. He blames the government for killing the market. He explains, mathematically and informally, why commercial software is provably insecure. All of his key worries have proven true theoretically, in experiments, and in production practice. These are not mere opinions.

“I could name OpenBSD, djbdns rewarding $1000 for bugs, other projects by Daniel J. Bernstein, TeX rewarding 2^BugReportNumber cents for each bug, but I don’t even know if these could be on your list”

Rewarding for bugs doesn’t count. Companies put up rewards for bugs all the time, but most are ignored by talented black hats. Bug and vulnerability metrics count. Processes and techniques to minimize them count. By those measures, OpenBSD and djbdns are well-designed and probably have low bug counts. They’re also somewhat obscure: the TLAs and the best attackers mostly ignore them. Additionally, they’re not made with high assurance methods, so we know they’ll have problems. That said, I’ve posted DJB’s lessons-learned paper on qmail here repeatedly. It’s worth reading.

“Can you please define Middleware?”

The technical definition is software that lets other software work together. Here are some technologies that fall under my usage of the term: XML, HTTP, WebSockets, SOAP, CORBA, RPCs, messaging systems, certain crypto, etc. I also mentioned important libraries; this might include problematic things like compression, image processing, AV processing, string processing, etc. Each of these has had major issues, and many haven’t been able to pass a high assurance evaluation due to complexity and too many covert channels.

“These are mostly local holes; I would call a product with only such bugs a 100% secure system.”

That statement is indefensible. I would almost think you’re trolling me with it. You’d seriously trust an application running on an OS that’s wide open? A poorly configured Windows NT system, maybe? MS-DOS, with no memory protection and poor concurrency? That every security standard requires a baseline of effort for OS and hardware protection indicates a consensus that they matter plenty.

You need to understand the concept of the Trusted Computing Base (TCB). That’s every piece of software an app depends on to maintain its security properties. If any piece fails, the entire system may fail, because security is only as good as its weakest link. Additionally, we must protect what hackers attack frequently and easily; operating systems, protocols and middleware fall into that category. Hence, an assurance argument that doesn’t include them is a false security claim.

Retrospective on VAX A1-class Security Kernel
http://www.cse.psu.edu/~tjaeger/cse543-f06/papers/vax_vmm.pdf

Much modern stuff is behind paywalls and I can’t legally share it. However, this old paper’s methods still go beyond what modern software does and illustrate what needs to be done. They address hardware, design, secure implementation, covert channels, etc. They had some snags because it was their first A1 project, of about 5 total in history. The parts you need to look at closely are how its design is totally layered, and the “VI. Assurance” section. Modern “secure virtualization” mostly doesn’t meet this level of assurance. Average code on a mainstream OS is even less trustworthy.

http://c59951.r51.cf2.rackcdn.com/5085-1255-perrine.pdf
Tom Perrine on KSOS. He has nice points about design, assurance and the modern state of things. He also mentions something I noticed a while back: IT companies keep “inventing” technologies or solutions that are 10-30 years old. He lists examples. In some ways, modern stuff is still far behind what systems in his day could claim.

(MULTICS was nearly immune to buffer overflows, for example.)
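
For anyone unfamiliar with the bug class being referenced, here is a generic C sketch of the classic overflow and its bounded alternative. (MULTICS’ actual resistance came from hardware segmentation and PL/I string handling, not from anything shown here; this is just the C-era failure mode it avoided.)

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    const char *untrusted = "much-longer-than-eight-characters";
    char buf[8];

    /* Classic overflow: strcpy has no idea how big buf is and will
     * happily write past its end, corrupting adjacent memory:
     *     strcpy(buf, untrusted);     <- undefined behavior          */

    /* Bounded copy: snprintf never writes more than sizeof buf bytes,
     * always NUL-terminates, and reports truncation via its result.  */
    int n = snprintf(buf, sizeof buf, "%s", untrusted);
    if (n >= (int)sizeof buf)
        fprintf(stderr, "input truncated (%d bytes offered)\n", n);

    printf("stored: \"%s\"\n", buf);
    return 0;
}
```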

So: good design, bottom-up security, proper layers/interfaces, EAL6-7 assurance activities, low-defect implementation, minimal complexity, no trust of external input, and independent evaluation by a top notch red team. That’s the minimum for software that independent parties can claim is secure, and I believe it a bit. It’s also the govt standard. It’s also hard enough that the US Govt often ignores it and just buys stuff like Windows. (shrugs)

examplesPlease January 25, 2013 5:00 AM

@Nick P: “I would almost think you’re trolling me with it.”

No, it was serious. I will trust a computer which is only running processes (and services and an OS) free of holes in the layers you numbered 3, 4 and 5, provided the enemy does not have physical access.

Well, the enemy could exploit a hole in the firmware of the ethernet card, which opens direct access to memory… fu..ing hardware.

examplesPlease January 25, 2013 5:03 AM

Thank you @Nick P for your detailed answer. I have started digging into the lists, looking for open source examples. If I find some, I will report back here.

Paranoid Android January 27, 2013 1:47 PM

Warning about a “digital Pearl Harbor” is one thing. Making sure that it will happen (probably with some dark intentions) is a very different one.

Failpoint January 27, 2013 3:02 PM

I like Nick P’s breakdown @martinr, but it could be summed up like this: can you develop secure software and protocols on a crappy operating system? (How absurd is “View Hidden Devices” in the Winbloze device manager?) Microsoft has continuously added abstraction layer on top of abstraction layer, giving “security software” producers a job. Also, yes: failed ancient internet protocols and a history of inadequate router firmware security.

The FBI issue is tongue-in-cheek… 50% valid, 50% media hype and alarmism. If you read the connecting story of the teen email bomb threat, nothing special was required beyond standard procedure in order to prosecute (a bad example). The FBI has, since its inception, been the whipped pitbull of Washington back-office politics: pointless dossiers on people outspoken against privacy invasion and freedom violations, and Republicans throwing their hands up when agents dig through a Dem’s filing cabinet, exposing how the agency is the monkey-in-the-middle.

I believe the real issue is the lack of access to wiretapping records, from the courtroom to the concerned citizen, even after a closed case. There is no citizen’s ubiqtorate / checks-and-balances here. If you target the gov’t policy and not the technology per se, you can see the flaw in the system. We have to use independent media channels to shame the gov’t into understanding that in most cases, fighting fire with fire is unethical, creates distrust, and is unnecessary. Somebody define “rogue state” for me?

My points on the issue:
– Amendment 6 describes evidence disclosure and pre-trial discovery as transferred to state law. Not educating the defendant on this right is a great way to black-bag a case.
– Like corporate America, the gov’t knows that the average defendant will never have the money to counter-sue for rights violations.
– This goes beyond the obvious criminal threat, which just so happens to be the only example the gov’t uses to validate its actions. Such is the case with subpoenas for access to social networking accounts… all of that work to try to mount character evidence against a lack of provable action or physical evidence. Did the teen who threatened to bomb his school have a closet full of det cord and explosives? Hidden details. Judges don’t get to “just make an example” of someone.
– I live in a state where the death-threat charge carries only one year and/or a $1,000 fine. That is a legal placeholder – in scientific terms, a zero. DUIs are considered more serious. A prime example of a contradiction to what the gov’t pushes post-9/11. Apply this non-updating of law to what the gov’t is trying to do… which is play catch-up without having to write articulated law to govern agency action.
– This gov’t just tried to cause a nuclear meltdown in Iran? How irresponsible and childish. Go to war or get off the pot. This is also a country that hires blackhats when they should be incarcerated. What kind of message does that send? Apparently, making enemies is big business.
– One last example is the gov’t going after college kids using BitTorrent on campus. Going after the little guy doesn’t solve the problem. All they (U.S. and ICC) have to do is shut down the DNS servers that front the torrent metasearch websites. This country hemorrhages millions every year in lost media sales, and that means lost tax revenue. Again, a policy issue. Nailing script kiddies to the wall seems petty in light of this. Example after example of the gov’t not getting the bigger picture, which is finding a permanent solution that doesn’t involve sacrificing anyone’s rights.

This makes me sick:

Under a ruling this month by the 9th U.S. Circuit Court of Appeals, such surveillance — which does not capture the content of the communications — can be conducted without a wiretap warrant, because internet users have no “reasonable expectation of privacy” in the data when using the internet.
[http://www.wired.com/politics/law/news/2007/07/fbi_spyware]

How can you not be scared? The grim future is in drone wars, shotguns that pierce Dragon Skin body armor, and Lockheed buying a D-Wave quantum computing system. Information warfare makes nuclear weapons seem either possible or pointless; I cannot tell right now.

JohnP January 27, 2013 3:12 PM

Last time I checked, the FBI didn’t write or pass laws in the USA. Congress does, which is just as scary.

Failpoint January 27, 2013 7:00 PM

@JohnP

dogs and handlers sniff each other’s butts and crap on infosec’s front lawn. [place more bathroom humor here]

Or: when does good policy start? Like *nix users waiting for Windows 33.5 for Workgroups… humanity could perish before Microsoft gets it. There is no cut-to-the-chase. These people make jobs for themselves, and I am waiting for software firewalls for data phones to crop up like a bad punchline in a horrible action flick.
