Comments

Clive Robinson November 29, 2016 7:59 AM

One interesting point from the BoingBoing piece,

    [T]he ad promises that their botnet is a significant improvement on the earlier Mirai infections, equipped with IP-address spoofing features that make it harder for the botnet’s victims to block the incoming traffic.

The most common form of “IP-address spoofing” for evading address-based blocking happens at the attack source. That is, you put false, frequently changing source addresses in the datagram’s header fields.

This type of behaviour would be fairly simple to stop dead in its tracks if the immediately upstream routers at the customer and ISP level “sanity checked” those fields and sent offending datagrams to /dev/null rather than forwarding them.

The fact that a number of them don’t speaks volumes about their business model or expertise…
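
A minimal sketch of the kind of source-address “sanity check” meant here (in effect BCP 38-style ingress filtering at the first hop); the prefixes and interface names are purely illustrative, not any real ISP’s configuration:

```python
# Sketch of per-customer ingress filtering: drop any datagram whose
# source address is not in the prefix assigned to the interface it
# arrived on. Prefixes and interface names are illustrative only.
import ipaddress

ASSIGNED_PREFIXES = {
    "cust-eth0": ipaddress.ip_network("203.0.113.0/29"),
    "cust-eth1": ipaddress.ip_network("198.51.100.16/29"),
}

def should_forward(ingress_if: str, src_ip: str) -> bool:
    """Forward only if the source address belongs to the prefix
    delegated to the interface the packet arrived on."""
    prefix = ASSIGNED_PREFIXES.get(ingress_if)
    if prefix is None:
        return False                      # unknown interface: drop
    return ipaddress.ip_address(src_ip) in prefix

assert should_forward("cust-eth0", "203.0.113.5")   # legitimate source
assert not should_forward("cust-eth0", "8.8.8.8")   # spoofed -> /dev/null
```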

keiner November 29, 2016 8:04 AM

…although I think these frequent “typos” are on purpose, to stay out of the line of search-bot hits 😉

Z.Lozinski November 29, 2016 9:01 AM

Looks like someone already did. Deutsche Telekom’s DSL network in Germany was hit by a targeted attack 27-28 Nov 2016.

There is a notice from the Federal Office of Information Security:

https://www.bsi.bund.de/DE/Presse/Pressemitteilungen/Presse2016/Angriff_Router_28112016.html

Cyber attacks on Telekom: BSI calls for the implementation of appropriate protection measures
Location: Bonn

Date: 11/28/2016

On November 27 and 28, 2016, over 900,000 Deutsche Telekom customer connections were affected by Internet and telephony failures. The Federal Office for Information Security (BSI) is in constant exchange with Deutsche Telekom to analyze this incident.

The BSI attributes this failure to a worldwide attack on selected remote-management ports of DSL routers, carried out in order to infect the attacked devices with malicious software. These attacks have also been registered in the government network protected by the BSI, but they remained without effect thanks to effective protective measures. The National Cyber Defense Center is currently coordinating the reaction of the federal authorities under the leadership of the BSI.

“The report on the situation of IT security in Germany, presented on 9 November, highlighted the dangers of hacker attacks; critical infrastructure in particular must now act,” said BSI President Arne Schönbohm.

There is also a notice from DT to customers, which recommends powering devices off and on (which will force a software update).

keiner November 29, 2016 9:31 AM

@Z Lo

Nope, that was an attempt by a botnet, maybe Mirai, to take over about 900,000 proprietary routers (Speedport plastic crap). It attacked some “service port” (aka the NSA/GCHQ/BND backdoor), but the malware was too badly written to take over the hardware silently and instead messed up the whole network.

PS: I now go out and burn some US flags, before I have to go to jail for that! 😀

Rob November 29, 2016 11:08 AM

Clive Robinson,

How would you go about separating the legitimate users from the illegitimate ones trying to access the target website?

ab praeceptis November 29, 2016 12:36 PM

Z.Lozinski

TR-069? Who would have thought that one day other criminals (than the ignorant greedy bastards in telecoms/ISPs) would abuse that bloody open backdoor, too? I’m sooooo surprised.

What really surprised me is that there seems to not have been a note from KGB or at least some fancybear signature.

Maybe it’s about time to create laws that make those providers pay between $10 and $50 per affected user. Plus a 3-strikes rule: “If you fuck your customers 3 times, you’re out of business and in court”.

r November 29, 2016 12:39 PM

ab,

Is the reason you’re so upset about the delay in the election and everything else that you have a hot date with a stock tip set for december or what?

You’re so bitter.

Maybe that’s why Donald and everybody are upset about the delay, what do you all have riding on the next 90 days?

Ted November 29, 2016 12:51 PM

To add to @Z.Lozinski’s and others’ comments…

Johannes Ullrich of the SANS Institute says that the Mirai botnet variant recently affecting the modem/routers of Deutsche Telekom customers incorporates a new exploit that takes advantage of vulnerabilities in the TR-069 protocol, a protocol that ISPs use to remotely configure modems and which communicates over port 7547.

According to his research, a proof-of-concept was shown to be viable earlier this month on Eir’s D1000 routers. Eir is Ireland’s largest ISP. He says that the D1000 routers are Zyxel-built, and notes that most ISPs purchase off-the-shelf modems and rebrand them. Dr. Ullrich suggests that the exploit may take advantage of a shared library, but he does not have an exact list of affected devices yet.

He reports that the botnet is scanning the internet every 5 to 10 minutes, so if you connect a vulnerable modem, expect it to be affected in about 10 minutes. Consistent with Z.Lozinski’s earlier comment, Dr. Ullrich says the exploit can be cleared by rebooting your modem, and he also provides a Deutsche Telekom link for a firmware update.

Tweet with 5:56 minute podcast link: https://twitter.com/johullrich/status/803392797927739392
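
For anyone wanting to check their own kit, here is a small, hedged sketch of testing whether TCP port 7547 (the TR-069/CWMP port mentioned above) answers on a given address. The address shown is a placeholder; a meaningful exposure test has to be run against your router’s public IP from outside your own network:

```python
# Quick check of whether TCP port 7547 answers on a given address.
# Run it against your router's public IP from outside your own network
# (e.g. from a remote host) for a meaningful result.
import socket

def port_open(host: str, port: int = 7547, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    target = "192.0.2.1"   # placeholder: replace with your router's public IP
    print(f"port 7547 open on {target}: {port_open(target)}")
```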

ab praeceptis November 29, 2016 1:07 PM

r

As you keep doing that: keep your obtrusive “worries” to yourself. I’m neither your friend, nor your foe, nor your psycho patient.

You are, of course, free to feel whatever you please but this blog isn’t about “how r feels about me (or anyone else)”

When you have approached me in a reasonable way concerning technical matters, I have always been polite and friendly. That should be good enough.

Clive Robinson November 29, 2016 6:00 PM

@ Rob,

How would you go about separating the legitimate users from the illegitimate ones trying to access the target website?

It’s not an easy problem to solve at the website as in most cases there is no way to tell if the basic packets are legitimate or not, at that point in the network.

Likewise, if the attacker’s computer is not sending out false addresses, incorrect protocols, malformed datagram formats, etc., it’s not easy to tell no matter where on the network you instrument. Which is one of the original reasons for bastion hosts (firewalls) that understood not just the network-level protocols but in some cases the application protocols as well (via wrappers).

Luckily most DoS attacks do involve the sending of false addresses, broken protocols, the wrong sort of traffic or abnormal volumes of traffic. These behaviours can be fairly easily recognised by instrumentation just one or two hops away from the attacking host (i.e. in the ISP’s network).

Which suggests that ISPs are not doing such instrumentation for some reason, which also means they are not up with “best practice” in this regard.

At the end of the day, most “network overloading” DoS attacks can only happen if the attacking computer is allowed to carry out the attack by sending out a lot of abnormal traffic. If that abnormal traffic were throttled or blocked by the ISP, then the DoS and DDoS issues would be much smaller, if not cease to be an issue altogether. Likewise some protocols need to be changed to stop amplification attacks. The real question is why ISPs are not doing this, and that is a whole different discussion which will drag in such things as “net neutrality” and QoS etc., which can have “sacred cow” status.
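
As a rough illustration of the kind of per-source throttling an ISP could do one or two hops from the attacking host, here is a token-bucket sketch; the rates are arbitrary examples, not recommended values:

```python
# Per-source token bucket: traffic beyond a sustained rate (plus a
# small burst allowance) is dropped rather than forwarded.
import time
from collections import defaultdict

RATE_PPS = 200          # sustained packets/second allowed per source (example)
BURST = 400             # bucket size, i.e. allowed burst (example)

_buckets = defaultdict(lambda: {"tokens": BURST, "ts": time.monotonic()})

def allow(src_ip: str) -> bool:
    """Return True if a packet from src_ip should be forwarded."""
    b = _buckets[src_ip]
    now = time.monotonic()
    b["tokens"] = min(BURST, b["tokens"] + (now - b["ts"]) * RATE_PPS)
    b["ts"] = now
    if b["tokens"] >= 1:
        b["tokens"] -= 1
        return True      # forward normally
    return False         # over the limit: throttle or drop
```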

Rhys November 29, 2016 7:43 PM

If Amazon got into this game, I’m sure we could drive the price down. Or maybe we could repurpose the botnet to do climate change calculations. What does it cost at the moment to rent that sort of capacity on AWS for 2 weeks?

Drone November 29, 2016 9:03 PM

So the point of this post was to point out just how long it takes garbage scows like bOinGbOinG and Slashdot to regurgitate what they scrape off the Web?

Just Because You Can... November 29, 2016 11:35 PM

You can rent a 400,000-computer Mirai botnet and DDoS anyone you like.

You can also probably buy a gun or a bow and arrow and shoot someone. Consequences matter. So do choices of how to cover cybersecurity issues.

keiner November 30, 2016 5:48 AM

…and the host of this blog is giving a keynote on security at a Telekom event in Frankfurt a day after these events. And the blog is down now and then. Strange coincidences these days…

Z.Lozinski November 30, 2016 7:44 AM

@ab praeceptis,

TR-069. A modern telecom network has millions of network nodes, and that is before we count the millions of devices the customers have. Remote management is essential to keep the cost of running the network down. The industry rule of thumb is that it costs USD 300-600 for an engineer to do a site visit.

Best practice is that you put a lot of effort into securing the management ports on all devices: interior nodes and customer devices. We can all think of ways to do this. Whitelist access to all management ports. Block all incoming traffic to the network destined for any management port on the network. If it isn’t coming from the telco/ISP’s own OAM network, drop it. (The Operations, Administration and Maintenance network is a dedicated internal network in a telco used for all management functions. The OAM network, or “management plane”, is separate from customer IP traffic, “the data plane”.)
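
A toy sketch of that filter rule (drop anything aimed at a management port unless it comes from the OAM network); the OAM prefix and the port list here are placeholders, not anyone’s real configuration:

```python
# Edge-filter sketch: traffic to a management port is accepted only if
# it originates inside the (hypothetical) OAM network.
import ipaddress

OAM_NET = ipaddress.ip_network("10.100.0.0/16")   # hypothetical OAM prefix
MANAGEMENT_PORTS = {22, 23, 161, 7547}            # ssh, telnet, snmp, tr-069

def accept(src_ip: str, dst_port: int) -> bool:
    """Drop traffic to management ports unless it comes from the OAM network."""
    if dst_port in MANAGEMENT_PORTS:
        return ipaddress.ip_address(src_ip) in OAM_NET
    return True

assert not accept("198.51.100.7", 7547)   # Internet host probing TR-069: drop
assert accept("10.100.3.2", 7547)         # ACS inside the OAM network: allow
```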

@Clive,

You’re right that we need to encourage the industry to do more. Telcos and access ISPs are starting to see these attacks directed at their infrastructure. There is a real question of how the cost of dealing with these attacks should be apportioned. A few years ago (2007-2008) my experience was that “clean pipes” services were not accepted in the market. Heavyweight firewalls with the capacity to deal with 10G and 40G circuits are not cheap. Think tens of millions of dollars. This takes us back to the discussion a while back about national-level traffic protection and the Great British Firewall.

keiner November 30, 2016 8:51 AM

@Z.Lo.

“..Block all incoming traffic to the network destined for any management port on the network.”

So apparently Telekom missed the VERY basics of network security? Bruce should give ’em a free lecture while he is at it in Fränkfort…

Clive Robinson November 30, 2016 9:21 AM

@ Z.Lozinski,

Heavyweight firewalls with the capacity to deal with 10G and 40G circuits are not cheap.

No they are not, but also they are not a necessity for many things.

If you view an ISP from above, they have customers downstream and a hostile, unauthenticated network upstream. In between they have their own network with known characteristics. They should not accept anything from upstream that uses their “management ports”; likewise they should configure their downstream systems not to accept anything that initiates a “management port” command. Nor should they allow datagrams with known-to-be-false source IP and/or port information to be accepted into their own network from either upstream or downstream. Quite simple routers can do this, and it would stop a lot of the DoS attacks at the source of the attack. Likewise there is traffic to various infrastructure services that the ISP should provide to its customers, not allow some unauthenticated third-party upstream service provider to abuse.

I appreciate that not all of it can be done with cost-effective routers and firewalls, but quite a bit can be. Just getting the ISPs to “cull some of the sacred cows” and do those bits would be a very significant start.

Unfortunately, as I’ve mentioned before, there is the legislative problem of “common carrier” status. The ISPs do not want to risk that status in any way because of the legal protections it provides them. Legislative bodies likewise do not want to change things, for other reasons, so we have a form of status quo that suits nobody other than the cyber-criminals… As was once noted, “Legislation that is not well thought out carries the risk of unintended consequences”, but likewise so does “Inaction in the face of marked change”.

The Internet with its “all you can eat, fixed costs only” model is a major game changer, as it destroys many of the assumptions of the “common carrier” model that long predates it. It also turns upside down basic economic ideas built on the underlying assumptions of a “distance cost metric” and a “duplication cost metric”. When these are zero cost for cyber-criminals, there are no tangible physical-world limits that give us a “localisation limit” or “force multiplier limit”, hence an attacker can become “an army of one” from not just “any location” but “as many locations as they wish”.

It’s this “zero cost to attackers” that is the big game changer, and we need to change it. Thus the minimum change is to harden the infrastructure uniformly and thus put the cost of this into “fixed costs” for all ISPs. Unfortunately, in a “free market” “race for the bottom”, the only way to do this is with legislation.

You will hear people scream about these increased costs and how governments must not “stifle the market”, but this is an obvious misdirection, because virtually all governments are using the “fear of terrorism” to vastly inflate an ISP’s fixed costs, for much lesser gain, with the enforced retention of customer meta-data.

I can understand why Bruce sometimes despairs of the situation and thus “promotes the use of sensible legislation”. But I can also see why others rail against legislation, because of the generally two-faced behaviour of those providing the legislation to the legislators and trying to cover their actions by invoking the “unforeseen consequences” excuse.

How we find our way out of this Byzantine labyrinth of twisty little hidden agendas and empire building I honestly do not know. But to do nothing will only make things worse, not just currently but when people finally do have to make changes. Because they will be big changes, just like the Patriot Act, made as a knee-jerk reaction without meaningful analysis, that others have been patiently waiting for to put in their own little last-minute, highly self-beneficial fixes…

ab praeceptis November 30, 2016 9:41 AM

Z.Lozinski

Not really. I’m not at liberty to speak about some details, but no, 10G or 40G firewalls are not in the tens of millions. In fact, I happen to know of 20G-in and 20G-out firewalls that are below $100,000. And I’m all but certain (I’d need to look it up) that today the same would be true for 40+40 firewalls, too.

Moreover, one needs to look at what a firewall is. Quite a lot of the functionality normal users expect in a firewall is, in fact, already on the edge routers.

Also, the problems of fibre operators are quite different from, say, those of hosters. The latter have some demands that simply don’t exist for the former.

As for TR-069: that’s a f*cking pain in the neck and a gaping open hole, and that has been known since about the dawn of light (in fibers *g).

Nowadays even stupid plastic CPE easily has the power needed to at least run something SSL/SSH based. Some of them, MIPS chips and whatnot, even have some crypto support (Montgomery ladders, for example).

So: no, the problem is not that ISPs need some easy remote-config capability; the problem is that they sh*t on millions and millions of customers.

Furthermore, that crowd has been well known for decades to be on the very poor side of security.

Nick P November 30, 2016 4:17 PM

@ ab

I’m actually interested in what it costs for a 10G box capable of inspecting every packet. Like the base for a protocol guard for IP, TCP, and HTTP/email/DNS/etc. I did find a 100G one a year or two ago that uses Tilera chips inside. I didn’t bother to ask for a price as I didn’t think they’d reply haha.

I was hoping 10G prices had come down due to 100G deployments, though. Also, that Achronix FPGA I post about on occasion does 40G and 100G. I’ve heard its kits are around $10-20k.

Z.Lozinski November 30, 2016 4:50 PM

@Clive,

I agree, but how do we change it?

The economics have been obvious (if not well understood) since 1997, when The Economist’s Frances Cairncross wrote The Death of Distance. The last 19 years have been about this argument playing out with carriers, regulators and markets.

So what is the right balance between the common carrier argument and the argument that those with at-scale infrastructure are best placed to detect the Bad Guys and stop them?

I would argue that the regulators need either to place a duty on those in a position to fix this (e.g. the large scale access ISPs), or give some forbearance to those who show they are trying to solve the problem.

Z.Lozinski November 30, 2016 5:03 PM

@ab praeceptis,

I suspect we are probably both bound by NDAs on the actual costs to prevent incidents, so we must agree (politely) to differ.

The real point, on which I hope we agree, is that the large-scale infrastructure providers do need to take some responsibility, and that the regulators must give them a break when they help to solve the problems. (And put pressure on them if they are not helping resolve the problems.)

Where I will disagree with you is that I believe large-scale operators today are putting significant effort into trying to defeat attacks like this one. I understand that they are not always successful.

@Bruce,

So, for Bruce. Do we need to move to a model like ICAO Annex 13 (the International Civil Aviation Organization’s rules on the investigation of aviation accidents and incidents) for the investigation of security incidents affecting critical infrastructure?

“The sole objective of the investigation of an accident or incident shall be the prevention of accidents and incidents. It is not the purpose of this activity to apportion blame or liability.”

Do we need to start treating IT/OT and critical national infrastructure incidents in a way that prevents recurrences? And who should sponsor this? The National Academies?

Sancho_P November 30, 2016 5:55 PM

@Clive Robinson (@Z.Lozinski)

Nah, we don’t have to follow the babble about “net neutrality” and so on.
That’s a red herring.
Also I see the “common carrier” status as a smokescreen; the issue isn’t the content but the metadata.

It’s not my webcam making the attack, it’s my ISP.
My webcam may send 120 stupid requests per second from my account’s dynamic IP; any (esp. DNS) server in the Net will not even notice.
However, my ISP is going to amplify that nonsense to probably >120k requests per second, and that is at attack level.

@Rob

“How would you go about separating the legitimate users from the illegitimate ones trying to access the target website?”

I’m not sure if I understood your question, but:

When my ISP identifies my router and assigns it a public IP address,
AND
my router then sends packets with a different source IP,
that may be a hint?

  • They don’t have to go into the packet content to see what’s going on.
    – They don’t do it because it doesn’t pay to do so.
  • It doesn’t pay because our sovereigns are busy stuffing money into their pockets.

All these discussions will be obsolete (= too late) once the big wireless providers catch up with the speed and needs of IoT.

@ab praeceptis

My home router is provided by Movistar, and TR-069 was disabled by default. There was also an IP range defined to accept TR-069 from, if enabled, but I have changed that info.
Once I had (WiFi) problems and called them by phone. The guy told me to reset the router, and I noticed it receiving a firmware update. Afterwards TR-069 was still disabled and the port closed, with the nonsense of accepting from that source IP range, if enabled, still there. There might be a time window after a reset for (TR-069?) actions without the need for anything to be always open.

Clive Robinson November 30, 2016 6:44 PM

@ Z.Lozinski,

I agree, but how do we change it?

That as they say is the question…

In all honesty I don’t actually know. Not because I don’t have ideas on it, but my “thinking hinky” only goes so far. Thus my horizon for “unintended consequences” is limited to my experience, and as an engineer by training and disposition I lack the serious “corkscrew” mentality that can hide a single word in half a thousand pages of regulations, one that unravels them like a pulled thread on a jumper or stockings.

Thus my initial suggestion is to “start the conversation” in an open way, with the things we do know on the technical side, to try to define the problem and potential solutions.

One thing we do know is that it is far easier to stop the problem at or shortly after the attack source rather than at the target. Secondly, detecting an attack is in general easier the less target-specific it is, or the lower it is in the networking stack.

But we also know that detecting specific target application related attack vectors is usually not required in DoS attacks due to the shaping and volume of the traffic generated by the source(s).

This suggests as a technical solution that directing detected low-level protocol errors to /dev/null close to the source is a good idea, and that QoS-type throttling of unexpectedly high volumes of traffic might likewise be beneficial. As would looking for simple signatures, like rapidly starting TCP handshakes but not finishing them, that would not be expected in normal usage.
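
As an illustration of that last signature, a sketch that counts half-open TCP handshakes per source and flags a source once too many are outstanding; the threshold is an arbitrary example:

```python
# Track half-open connections (SYN seen, handshake never completed)
# per source address and flag sources that accumulate too many.
from collections import defaultdict

HALF_OPEN_LIMIT = 100                 # arbitrary example threshold
half_open = defaultdict(set)          # src_ip -> {(dst_ip, dst_port, seq), ...}

def on_syn(src, dst, port, seq):
    """Called when a SYN from src is observed."""
    half_open[src].add((dst, port, seq))
    if len(half_open[src]) > HALF_OPEN_LIMIT:
        quench(src)

def on_handshake_complete(src, dst, port, seq):
    """Called when the final ACK of the handshake is observed."""
    half_open[src].discard((dst, port, seq))

def quench(src):
    # Hook for the actual response: rate-limit or drop further SYNs from src.
    print(f"possible SYN flood from {src}; throttling")
```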

Which leaves us with high-level or application-specific attacks. They are too high-level and target-specific to be detected in a general sense, thus the target itself is the best detector of that type of attack. However, in most cases source-end rather than target-end solutions are preferred, to reduce collateral damage.

Care needs to be exercised when looking at possible distant-end solutions, and you have to be able to think hinky, or like an attacker, to spot problems with the ideas. To see why this is necessary, the following scenario should give a good indication.

One apparently simple idea for a distant end solution, would be for the target to use some form of feedback to the first hop after the attack source, to “rate limit” or “quench” the attack traffic at source. There are existing mechanisms that could be pressed into service to do this.

However, when you think hinky you can see such a simple quench mechanism could be used as the basis of a different style of DoS attack. Or worse, it could be self-defeating: as an attacker you could send short bursts from many locations that do not trigger the local “attack detection” but do trigger the target to rapidly send out quench messages, which then get detected as unusual traffic patterns one or two hops up from the target, causing an “attack detection” node to divert the target’s legitimate quench messages to /dev/null…

When a potential technical solution is defined it can be tested and if sufficiently robust it can be put forward as a potential solution.

And this is where it starts getting the legislators involved. Whilst it is simple to legislate the collection of meta-data, because it is in effect a passive recording activity, legislating for an active prevention activity is very likely to have edge cases where sneaky lawsmiths can hide their little tricks.

Thus hidden agendas can be brought forth, which is just one reason the “net neutrality” argument makes an apparently logical solution. Only it does not, because of the advantage it gives to attackers…

Thus you need many people to consider how they would gain advantage from any legislative proposal, to try to prevent it triggering what might appear as an “unexpected consequence” that provides direct or indirect benefit to some of the parties involved.

ab praeceptis November 30, 2016 6:45 PM

Nick P

Depends pretty much on what “inspecting” means. If it means looking for more than a couple of simple payload checks, or for non-one-shot patterns (i.e. combined multi-pattern rules), it gets hairy.
Leaving out special (and very expensive) solutions and looking at halfway standard ones, what you want then is SerDes capacity solidly bigger than your target, a multicore processor, a regex engine, RISC-based if at all possible, a reasonable amount of cache, and not necessarily much but fast memory.

My personal favourites for that kind of job and range (n × 10G, with single-digit n) are Freescale PowerPC-based boards. Usually they even have quite useful (non-toy-demo) eval boards, too. As for the price range I can’t help a lot, because we don’t use off-the-shelf products, but I would think that your $10k-20k frame is plenty. I wouldn’t be surprised to find an off-the-shelf 2×10G + 2×10G box at about or even lower than $10k (unless, of course, you want to buy a big name, too).

For higher ranges I’d think about multiple of those boards in a PCIe variant running on another one acting as the mainboard. But usually (at least in my experience and world view) it’s a better approach anyway to spread the workload rather than using “raw power monsters”. Among other niceties, that approach gives you some resilience, too.

But that’s actually somewhat religious. Others might bet on FPGAs. What I personally would not even consider is Tileras or the like.

ab praeceptis November 30, 2016 6:50 PM

Z.Lozinski

Oh, of course, we’ll keep it polite and even friendly.

As for “trying”… maybe some. My experience tells me that customer security is pretty low in the priorities at most corps. What I see in security efforts is usually targeting their own infrastructure.

Nick P November 30, 2016 7:12 PM

@ ab praeceptis

Hmm. Interesting. I recall running into Freescale’s interesting PPC boards when studying separation kernels. INTEGRITY was put on a bunch including QorIQ processors. They seemed neat. I’ll keep them in mind.

“Depends pretty much on what “inspecting” means”

Analyzing headers to make sure they’re sane. Optionally overwriting some of them for covert-channel mitigation. Detecting DDoS at transport/link layers. Basic stuff that would be useful for a small ISP or end user, with 1Gbps lines showing up cheaper than ever these days. A first line of defense that mainly filters out the riff-raff. Optionally per-packet authentication for port-knocking. Many of these networking-oriented CPUs have ASIC offloading for basic TCP/IP and crypto, so that part probably won’t have much effect on performance. Just the validation checks + monitoring.
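
A toy example of the header sanity checks meant here: a few cheap tests on a raw IPv4 header before anything deeper looks at the packet. Real line-rate guards would do this in hardware; this only shows the kind of checks intended:

```python
# Cheap plausibility checks on a raw IPv4 header (first 20 bytes).
import struct

def ipv4_header_sane(pkt: bytes) -> bool:
    """Return True if the packet's IPv4 header passes basic sanity checks."""
    if len(pkt) < 20:
        return False
    ver_ihl, _tos, total_len, _ident, flags_frag, ttl, _proto, _csum, _src, _dst = \
        struct.unpack("!BBHHHBBH4s4s", pkt[:20])
    version, ihl = ver_ihl >> 4, (ver_ihl & 0x0F) * 4
    if version != 4 or ihl < 20 or ihl > len(pkt):
        return False
    if total_len < ihl:                # total length can't be shorter than header
        return False
    if flags_frag & 0x8000:            # reserved flag bit must be zero
        return False
    if ttl == 0:
        return False
    return True
```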

Andy December 3, 2016 3:56 PM

The router one hop upstream is the key. If you can access that, you don’t need the router’s golden keys, just MAC spoofing to bypass NAT. But software has to take the lead: lock that down, then the rest can be followed upstream with privileges.

Knockpock December 9, 2016 9:11 AM

A large number of important nodes have had root access with the same password for a long time. Not many admins seem to know what the heck a domain or host is, and that’s about the depth of knowledge. Faked resume, read the manual, ran occasional updates, and they think nothing went wrong.

Many managers don’t know a good deal from a bad deal, and think a complex password that gets changed sucks, because (in their minds) how will they have access if the admin leaves? The number of large servers in call centers containing credit card details with worse-than-terrible passwords is ridiculous.

I’ve had more trouble guessing the passwords of public users’ systems they want fixed than the passwords on some of these servers, and the managers love handing out the root pass, as they don’t know any better and couldn’t care less. It gets worse, but…

anyhoot December 9, 2016 9:19 AM

Some routers supplied by providers only update their firmware when the provider remotely accesses your router, on request. Get one of those, throw it in the bin and get a real router. Yes, they are more expensive, but you can update the firmware yourself with something other than what it came with.

If the manufacturer even bothers to update it after years of requests, it’s already out of date, so throw it away, melt it, burn it, destroy it. You could factory-reset it and leave it on the street for some poor sucker; they probably have an open, unsecured, discoverable wireless network anyway, with their name as the network name and likely the default password. Windows 10 likes to share your wireless password with your “friends”.
