Greg September 26, 2016 4:41 PM

It’s actually up for me if I use Tor and keep choosing “New Tor Circuit for this Site” and reloading it that way… Yes, he’s under attack, but you can get to it that way, since you effectively keep accessing it from different random places around the world… (imagine that: Tor use making it “better”?)

Another thing you have to do is make sure you have cleared your DNS cache (the procedure differs depending on which OS you’re using).
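For reference, the usual commands (assuming reasonably current OS versions; the resolver service names vary by distribution and release):

```shell
# Windows (elevated command prompt):
ipconfig /flushdns

# macOS (10.10.4 and later):
sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder

# Linux with systemd-resolved:
sudo systemd-resolve --flush-caches

# Linux with nscd:
sudo service nscd restart
```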

See his Twitter feed for the latest info.

Eric September 26, 2016 5:22 PM

I am thinking that he is starting to step on some big toes here, and the people behind it aren’t happy.

I am hoping that they can come up with better countermeasures. These kinds of attacks are pretty juvenile, and they mainly serve to get people ticked off.

Grauhut September 26, 2016 5:48 PM

If Google gives up and a really big, bad three-letter agency steps in, or five of them, then we will know it was the biggest PR-spam cyber offensive ever!

Google is now at least the biggest honeypot ever… 😉

Null September 26, 2016 5:49 PM

I believe he is only honey-potting. Even kids are good at hacking; they always fall behind the psychology.

Corp_Store_c54way_4.4.8.8 September 26, 2016 6:37 PM

Regarding Krebs on Security (KoS):

Thank you, Brian: besides risking your life, livelihood and limb to help make for a better internet and world, thanks for your timely monthly Patch Tuesday posts (a great resource for those still using Windows).

Thanks for helping me trash Flash and move from Reader to Sumatra PDF and mupdf (with help from openbsd and Nick P., too).

And a shout-out to Akamai. For years I have accessed KoS through Tor without those annoying Cloudflare Craptchas. Thanks Akamai for supporting KoS on a pro-bono basis or as a gift.

Curious September 26, 2016 11:58 PM

Please correct me if I am wrong, but is the bad thing about DDoS’ing, in a technical sense, that basically a single computer ends up multiplying incoming traffic for some other computer beyond its capability to deal with it?

If one ignores the technical issues that might/will enable/enhance DDoS attacks, how is the industry planning to deal with such a traffic jam?

Is DDoS’ing not just a technical problem to be solved?
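The “multiplying” intuition is right for the common reflection/amplification case: the attacker sends small requests with the victim’s address forged as the source to open UDP services, which then send much larger replies to the victim. A back-of-the-envelope sketch (the amplification factors are rough published estimates, purely illustrative):

```python
# Reflection/amplification back-of-the-envelope calculation.
# The attacker sends small queries with the victim's spoofed source
# address to open UDP services; the services answer the victim with
# much larger replies, multiplying the attacker's own bandwidth.

def amplified_traffic_mbps(attacker_mbps: float, amplification: float) -> float:
    """Traffic arriving at the victim, in Mbps."""
    return attacker_mbps * amplification

# Rough, commonly published amplification factors (illustrative only).
factors = {"DNS (open resolver)": 28, "NTP monlist": 556, "SSDP": 30}

for proto, f in factors.items():
    gbps = amplified_traffic_mbps(100, f) / 1000
    print(f"{proto}: 100 Mbps of spoofed requests -> {gbps:.1f} Gbps at the victim")
```

So a single well-connected machine can project tens of Gbps at a target without ever receiving the replies itself; a botnet multiplies that again by the number of bots.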

65535 September 27, 2016 12:56 AM

It looks like Krebs is back on line.

Krebs seems to indicate there is a huge conflict of interest: the cloud providers and ISPs both protect and profit from the DDoS-for-hire site owners, while also protecting and profiting from their customers’ success or misfortune. The cloud providers and ISPs are playing both sides of the game. It is a hard thing to stop.

I am suspicious of nation states that know about these botnets yet do nothing to stop them from harming their citizens. It feels like these nation states would love to use the booter services of the botnets for their own ends [a cyber weapon, although blunt, is still a cyber weapon] – but this is just a guess.

Also, I would guess that nation-state actors have super ‘Shodan’ devices that can check and re-check each and every device on the IPv4 and IPv6 networks. The nation-state actors must be seeing what is happening to Krebs and evaluating how to leverage the damage to their advantage.

Clive Robinson September 27, 2016 1:20 AM

@ Bruce,

You write,

In fact, the site is down as I post this.

Having given the site link in the previous sentence, +1 for making me smile.

That aside, DDoSing has been a problem for some time, yet we appear to be at a loss as to how to deal with it. Further, it appears that few have mentioned it may not be possible to solve it, and what that might mean.

Whilst there have been suggestions of ways to limit DDoSing, some at least appear to be a case of “The cure is worse than the disease”.

This particular attack shows that the “passive defence” of just having “more bandwidth” is never really going to be a solution, even if the “attack amplification” techniques are somehow magically removed overnight.

For instance the seemingly obvious idea of “kill it at source”, whilst sounding logical, is itself likely to become an attack vector via any signalling mechanism you would use to implement it. This is simply due to the Internet not having a reliable distributed trust mechanism, as the problems with the proliferation of CAs have shown.

Likewise most other ideas that people have thought of have the seeds of being a worse problem within them.

Unfortunately, at the end of the day a DDoS requires very little to become a significant problem, because all that is needed is a single bug that can be turned into an attack vector and an appropriate payload to provide the attack packets.

This means that anyone with a little luck and skill, from a curious teen in their bedroom through to a state-level attacker, can do this, as it falls neatly into the “Army of One” issue.

Attribution would be nearly impossible because you would have to consider if the “single bug” was accidental or deliberate.

That is, to play a “Red Flag Gambit”, let us assume that a major manufacturing nation of IoT or network appliances decided to “plan ahead” for cyber-war. Having heard the rantings of their most likely enemy about “kill switches” and “pulling the plug”, they decide to “infiltrate an army”, putting the bug and the ability to run a payload in all export devices they manufacture. Thus the delightful situation that the potential enemy pays for their own defeat. Which is an interesting variation on “Economic Warfare”.

Whilst we are never likely to know who launched this current attack, or for that matter who was actually the real target –unless the perpetrator is daft enough to brag about it– there are several lessons in it that I suspect a number of people are going to be banging their heads against the wall over.

The first lesson is that the sort of “Cyber-Commands” talked about in the US etc. are in effect paper tigers. The second is that concentrating your efforts on “Cyber-attack” not “Cyber-defence” is so unbalanced it is untenable as even a short-term strategy. Thirdly, and especially so if the network on which you have based a large part of your economy is in fact populated by devices from a potential enemy state who thinks more in the long term than the short term…

However this idea is far from new; I first read about the technological army within, in one of Isaac Asimov’s Foundation series books. More recently Futurama had an episode where the manufacturer of all the robots, in a fit of pique or boredom, threw the “kill switch” she had secretly built into all her robots, thus the robots “revolted” and mayhem followed.

Grauhut September 27, 2016 4:49 AM

@Clive: “The cure is worse than the disease”

Right, asymmetric routing and UDP are both valid techniques. It would be difficult to replace them in the short term.

The only way to stop this is to force botnet drone owners to clean their crap. But this means even more surveillance if you want to send them a ticket.

z September 27, 2016 9:05 AM

@ 65535

” It feels like these nation states would love to use the booter services of the bot nets for their own ends [a cyber weapon although blunt is still a cyber weapon] – but this is just a guess.”

Russia has used botnets for political purposes. In 2008, DDoS attacks on Georgia and Azerbaijan showed a link with the Russian Business Network.

It is speculated that the Russian gov keeps the RBN around for such activities. Makes a great cover and they don’t have to invest in the infrastructure since they can just rent it.

War Geek September 27, 2016 9:12 AM

BrianK mentions BCP58 in his article, but that’s really just a description of the problem and requirements for some potential solutions.

A real technical fix is something more along the lines of RFC 5635:

Remote Triggered Black Hole Filtering with Unicast Reverse Path Forwarding (uRPF)

Pretty sure the core devices (i.e. Juniper) already support this, so the question really comes down to:

  1. When will the edge devices start to support and get configured with this or a related mechanism?

  2. When will the modern equivalent of the Usenet Death Penalty be applied to the ISPs who willfully refuse to implement it, along with other self-policing for detected hostile traffic?
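For readers unfamiliar with RTBH, the destination-based variant from RFC 5635 looks roughly like this in Cisco-style configuration (addresses are documentation-range placeholders; the source-based variant adds uRPF so the *source* of attack traffic is dropped instead of the victim):

```
! On every edge router: statically route the discard prefix to Null0
ip route 192.0.2.1 255.255.255.255 Null0
!
! On the trigger router: inject the victim /32 with the discard next-hop,
! tagged so the route-map picks it up and BGP carries it to all edges
ip route 203.0.113.50 255.255.255.255 192.0.2.1 tag 66
!
router bgp 64500
 redistribute static route-map RTBH
!
route-map RTBH permit 10
 match tag 66
 set ip next-hop 192.0.2.1
 set community no-export
```

Note the trade-off: destination-based RTBH completes the DoS against the victim’s address in order to protect everything else on the network.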

Azure September 27, 2016 9:41 AM

“2. When will the modern equivalent of the UseNet Death Penalty be applied with the ISPs who willfully refuse to implement it and also other self policing for detected hostile traffic.”

When it stops being profitable to transmit DDoS attacks… If I’m a seller of bandwidth, and someone is using “too much bandwidth” to attack someone, I’ve just hit the jackpot! That’s the crux of the problem, and why it can’t be “solved”…

Sancho_P September 27, 2016 9:53 AM

Now I don’t know what the Krebs attack really was but I concur with @65535:
The problem isn’t new, it is there because they want it to be there.

We could discuss who is “they”, but for sure it’s not Jane & John Public.

The issue is likely twofold: one part is human intention (@Curious: no, there’s not just a technical solution for a human disorder), the other part is technical.
(Yes, a bug could replace the human part, but this is very unlikely; however, the solution should include bug mitigation.)

I’m convinced there would be good and bad solutions for the technical part, if they want.

My proposal would be to look at the ISPs first. Their clients (the bots) are protected by “lawful” EULAs and too-big-to-fail business (the Bill G. saga).
The p0wned end device + user is innocent per law.

But the ISPs, taking money for their service, do not flag / limit what clearly violates their T&C, namely hammering one and the same target IP for minutes / hours / days (*) from one dynamic IP address.
They are doing the harm.

(*) If someone wants that for test purposes it may be a paid, time- and scope-limited option (= business opportunity).

[@War Geek: The BCP58 and RFC 5635 references (thanks for linking) are above my head but sound pretty good. The only thing is I’d want to identify and cure the p0wned device, at least in civilized countries, to prevent other harm to the user.]

… Oh, just forgot to mention Russia. They’ve invented the Internet and all that crap, only to harm peaceful USA and capitalism.
They are working day and night to destroy poor little America, let’s nuke them, NOW!

Grauhut September 27, 2016 1:34 PM

@War Geek: Not so easy…

“As a matter of policy, operators SHOULD NOT accept source-based RTBH
announcements from their peers or customers, they should only be
installed by local or attack management systems within their
administrative domain.”

  • SMB has no BGP Router, just a dumb cable/dsl router…
  • SMB has his own small BGP router, something <= mx5 class…

RFC 5635 does not reflect reality.

tyr September 27, 2016 5:22 PM


This seems to have been built into the design. Once it was decided that the packets should move without any impediments, the monster was created. The Net was made to solve another problem that seemed more pressing at the time: how do you continue to communicate after a lot of your network is smoking holes of radioactive glass? No one ever dreamed that they would let it scale up to become the planet’s communication system by extending the network into every place.

DDoS is easy to fix: redesign the Net as a real communication system without the survivability requirements built in.

That implies the loss of a few things, like net neutrality and unimpeded traffic across society’s boundaries. I’m sure the usual suspects (RIAA, MPAA, TLAs, and LEOs) will be glad to help you make their special needs paramount as well.

As long as sabre-rattling nitwits abound, I prefer the ancient Net with all of its ugly warts to the sleek utopian visions featured in the news.

Cory continues his crusade against DRM, and no one here should continue to think DRM is a good idea, from the safety standpoint alone. That has never stopped Luddites from trying to put us all back in the dark ages because tech has problems.

So is the cure worse than the problem? Once you get beyond the personal / site inconvenience, I think the cure will be orders of magnitude worse.

Clive Robinson September 27, 2016 9:06 PM

@ tyr,

Once you get beyond the personal / site inconvenience, I think the cure will be orders of magnitude worse.

Yes, the repression of privacy etc. most certainly will be; it’s been obvious to me that we’ve been moving into LEO “We have the right to know, all resistance will be crushed” territory for years (just read the ACPO reports; it makes “going dark” fears look mild).

But there is another side to this than LEO/Gov paranoia writ large: we appear to be incapable of secure and efficient designs. To stop DDoSing we will come in with a bad design with bad trust etc., so all that will happen is that the existing DDoSing methods will be partially solved by tools that will themselves become DoSing tools, etc.

Even going to “a real communication network” will only shift the DoS problem not fix it, and in the process give significant power to others who will abuse it.

Thus my view is that to fix problems like DDoSing we need to dig down to where the problem really is, which is crappy OSs and applications providing the attack vectors that the DDoSers have been using to build their botnets. Lest anyone think I’m singling out certain well-known companies and organisations, I’m not, because most commercial closed source as well as open source suffers from the layer nine and up[1] problems being reflected down into the lower layers, resulting in Johnny not just being unable to encrypt, but unable to act or code securely either. It’s that realisation, back a quarter of a century or so ago, that made me keep “my systems” away from others, and keep other systems I used for work etc. isolated where possible, and otherwise locked down as tight as I was allowed.

Thus, with the majority of systems insecure, we live in a “low hanging fruit world”, where the only thing stopping most systems getting raped and plundered by the ravening hordes of attackers is too few attackers in a very target-rich environment. Chance is what determines where the eyes of the attackers fall, and the choice is rich enough that attackers can pick the lower-hanging fruit to their hearts’ desire at their leisure. Which unfortunately gives the likes of the layer nines and tens overconfidence in their abilities, especially with respect to the time element. They see each day they think they are not successfully attacked as the measure of their security, not as being one day closer on the probability curve to finding they have been attacked.

This undervaluing of attack risk by management and above is exacerbated by attackers over-estimating target value, thus putting more effort into methods of attack than will pay back. Which means there is a quite wide reality gap in risk / reward assessment that needs to be closed.

However it is that gap that gives us the botnets by which DDoSing can thrive, mainly due to those attackers who have developed exploits being unable to cash in on them any other way. If OS and app security improved, the supply of bots would decrease, thus the price would rise, and DDoSing would become scarcer due to the cost/risk involved…

No doubt Ross J. Anderson and his team over at Cambridge Labs have looked at or know who has looked at the figures for this and drawn up papers for the Security Economics research domain.

[1] The ISO OSI seven-layer model only covered a part of the computing stack. So unofficially it has been augmented with “physical” layers below and “human” layers above, with the layers above currently being approximately: users at eight, management at nine, organisations at ten, and others that cover the external political etc. layers of society still getting shaken out by sociologists etc.

Rollo September 27, 2016 9:44 PM

This is a really comprehensive report from Protonmail about the DDoS attacks they experienced, and exactly what solutions they implemented to mitigate them. They were taken down by a state-level adversary with a 59 Gbps attack and then successfully mitigated successive ones after the above implementation. It was expensive, but the funds were quickly obtained via crowdfunding.

It’s well worth reading. They go into a lot of detail and explain practical solutions

Incidentally, I was hoping they’d have a report by now on the new Swiss surveillance laws. They have discussed the laws and their implications a few times on their blog previously. Maybe soon.

TomS. September 27, 2016 9:51 PM

I believe the correct reference is BCP 38 [1], “Network Ingress Filtering: Defeating Denial of Service Attacks which employ IP Source Address Spoofing”. The page linked below is maintained by contributors to NANOG, the North American Network Operators Group.

DDOS attacks and mitigation have been well explored on the mailing list.

It is ingress filtering for the provider; for end organizations it is egress filtering.

Brian’s post mentioned both TCP and GRE attack traffic. I suspect disabling PPTP VPN pass-through on many consumer gateways could reduce the GRE volume. For the TCP, not giving a device a gateway if it doesn’t need to communicate off-LAN is a simple starting point.
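As a sketch of what such egress filtering looks like at an end organization’s border (hypothetical prefix 198.51.100.0/24 from the documentation range; real deployments use whatever ACL idiom the border device supports):

```
! Cisco-style egress ACL on the border router: only let packets leave
! if their source address belongs to our own assigned prefix, so
! compromised internal hosts cannot emit spoofed-source attack traffic
ip access-list extended EGRESS-ANTISPOOF
 permit ip 198.51.100.0 0.0.0.255 any
 deny   ip any any log
!
interface GigabitEthernet0/1
 description Uplink to ISP
 ip access-group EGRESS-ANTISPOOF out
```

This doesn’t stop a compromised host flooding from its real address, but it kills the spoofed-source reflection traffic that BCP 38 targets.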

Non-expert network operators & IoT experimenters might want to look at some of the IoT proxy / segment firewalls-in-a-box that have come to market. Some of them look like a WiFi repeater and proxy-in-a-box tied to some cloud update service.

@Clive’s hypothetical is decidedly plausible and considerably unnerving. Gee, Thanks.
[1] http:/

Whack All The Moles September 27, 2016 11:50 PM


That aside DDoSing has been a problem for some time, yet we appear to be at a loss as to how to deal with it.

In times gone past, I recall targets going to torrents or many mirrors (WikiLeaks in the early days). ISPs often interfere with this sort of mitigation. I mean, what if Krebs had a million fans who added a million hosts to a round-robin mirrored Apache server? Wouldn’t that work?



The p0wned end device + user is innocent per law.

Wouldn’t it also work to simply treat the p0wned end device + user as guilty per law… after reasonable efforts at notification and remediation by the ISP to the relevant subscriber? Obviously if your IoT toaster gets p0wned and participates in a botnet, and your ISP or the target of the botnet notifies you, and you immediately disconnect the device from the network, you ought not be considered guilty per law. If however they notify you, let’s say twice even, and you ignore the notice and allow the device to continue operating, then it seems like you ought to be considered guilty per law. Actually I’ve always kind of operated under the assumption that that was how any reasonable judge and jury would look at things.

Drone September 28, 2016 1:44 AM

Where is the U.S. Government in this? M.I.A. it seems to me – that’s where.

If a bad person was continuously bashing on your house with a baseball bat for days at a time trapping you inside against your will, you bet Law Enforcement would put an end to it! Well, that is essentially what is happening to Brian Krebs and his existence online, and it seems the U.S. Government is doing nothing to help him.

TJ September 28, 2016 2:56 AM

“Big toes”? It’s DDoS, not code execution or cipher attacks. People who started using computers less than a decade ago and can’t use IDA or do DE make the news for doing advanced reflection attacks almost bi-monthly.

Scariest case: Spammer or botnet owner with big capital

Either way: wait 72-100 hours; either Cloudflare or someone else will mitigate, or they’ll get tired of wasting money on a blog of someone who basically spins stuff he pulls from Russian HTTP forums and public US court docs.

It’s pretty sad when someone on the tier of Krebs in the security and information field can make you spend money on countermeasures.

Clive Robinson September 28, 2016 3:19 AM

@ Drone,

Where is the U.S. Government in this? M.I.A. it seems to me – that’s where.

I think you answered that one yourself over on the neural net thread, with,

You pay for the silicon version once up-front and it works continuously without complaint. On the other hand, you pay repeatedly for the biological version, it gets sick, complains repeatedly, joins corrupt labor unions, and sues you when it doesn’t get what it wants.

And maybe those left did not have enough time to push the start button 😉

Grauhut September 28, 2016 4:27 AM

@Tom S.: “Network Ingress Filtering: Defeating Denial of Service Attacks which employ IP Source Address Spoofing”

It’s not that easy. AFAIK a lot of traffic out there is routed asymmetrically.

Assume you are routing your nets with two upstream providers; it’s ok if you load-balance or do least-cost routing and receive requests via upstream 1 and answer via upstream 2. This works fine since a router is not a stateful firewall.

Ingress filtering would break these use cases and bind traffic to a provider based on IPs.
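This trade-off is why uRPF comes in two modes: strict mode drops packets whose source address doesn’t match the best return route via the receiving interface (which breaks exactly the asymmetric setup described above), while loose mode only drops sources with no route at all, tolerating asymmetry at the cost of catching less spoofing. Roughly, in Cisco-style configuration (interface names hypothetical):

```
! Strict mode: only safe on interfaces where routing is symmetric
interface GigabitEthernet0/0
 ip verify unicast source reachable-via rx

! Loose mode: tolerates asymmetric routing, still drops sources
! that have no route in the table at all (e.g. unallocated space)
interface GigabitEthernet0/1
 ip verify unicast source reachable-via any
```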

65535 September 28, 2016 5:32 AM

@ z

“It is speculated that the Russian gov keeps the RBN around for such activities…”

Interesting observation but what about all of the rest of the Nation State Actors?

@ Sancho_P

“But the ISPs, taking money for their service, do not flag / limit what’s clearly written in their T&C, this is nailing down one and the same target IP for minutes / hours / days (*) from one dynamic IP address. They are doing the harm.
(*) If someone wants that for test purpose it may be a payed, time and scope limited option (= business opportunity).”

Good observation.

@ gordo

The Level 3 research suggests most of the DDoS traffic is UDP. How would one stop a UDP flood like the one Krebs on Security suffered?

Should TCP be mandatory for DNS? Should ISPs monitor huge UDP sprays from certain devices? Should the ISPs, CDNs and cloud providers be responsible for enabling UDP sprays?
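Mandatory TCP for DNS seems unlikely, but authoritative name servers can already blunt their usefulness as UDP reflectors with Response Rate Limiting. A minimal sketch for a BIND 9.10+ named.conf (the numbers are illustrative, not tuned recommendations):

```
// named.conf fragment enabling Response Rate Limiting (RRL):
// identical responses to the same client netblock are capped,
// so a spoofed-source query flood yields far fewer amplified replies.
options {
    rate-limit {
        responses-per-second 10;  // cap on identical answers per second
        window 5;                 // measurement window in seconds
    };
};
```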

This list only includes commercial services in the market and does not include companies who have their own CDNs, like Netflix, Google, Microsoft, Apple, Twitch, Facebook etc. If you think your company should be added to any of these lists, see the bottom of the post for instructions.

[list of CDN and ISPs intermixing]

“Vendors In The CDN Ecosystem

Alcatel Lucent (carrier platform)
Allot Communications (traffic management)
ARA Networks (traffic management)
Blue Coat (transparent caching)
Broadpeak (carrier platform)
BTI Systems (traffic management)
Cedexis (traffic management)
Cisco (carrier platform)
Conversant (carrier platform)
Conviva (analytics)
DeepField (analytics)
Edgeware (carrier platform)
Ericsson (carrier platform)
Fortinet (traffic management)
Hibernia Networks
Huawei (carrier platform/transparent caching)
Instart Logic
Jetstream (licensed CDN)
Juniper (transparent caching)
Level 3
Limelight Networks
Microsoft (Windows Azure)
Mirror Image
OnApp (traffic management)
PeerApp (transparent caching)
Qwilt (transparent caching)
Revsw (mobile CDN)
Solbox (licensed CDN)
Swiftserve (licensed CDN)
Tata Communications
Verizon EdgeCast
Vidscale (carrier platform)

“We hear a lot about telcos and carriers in the CDN market, but the vast majority of them have built out CDNs for their own internal use and are not selling it as a commercial CDN service. So it’s not accurate to say that they all compete with traditional service based CDNs. There are a few exceptions like Level 3, Verizon (EdgeCast), Comcast and Tata who offer commercial CDN services and compete against other commercial CDNs, but most telco and carrier based commercial CDN services are based off of reselling a traditional CDN, for example AT&T reselling Akamai. This telco/carrier list is far from being complete and many more still need to be added.” -streamingmedia

It seems we are back to the problem of said providers playing both sides of the fence – Taking money from both bot net kingpins and their customers.

War Geek September 28, 2016 7:42 AM


I stand corrected on BCP 38 (not 58) being the reference. I’d read through it when I saw it at Brian’s blog, but his blog was offline when Bruce brought this up.

However.. the SMB point is a moot distraction. These RFCs were never about CPE.

Nor does the RFC require that black-holing necessarily work across peers and customers; if implemented within the ISP to enforce subnet assignments, it’d probably stop 90+% of the problem. Something similar was actually considered on AS701 in a more crude fashion, with scripts drilling the config-fetches for the networks we allowed to be advertised to us (in the ACLs) being proposed as the way to enforce traffic sourcing. And that was many years ago.

Really not rocket science. I think the mobile Internet niche case was the biggest obstacle, but even if the filtering skips the relatively few physical interfaces where that occurs, this would greatly limit the available attack bandwidth to just those few remaining interfaces.

Offhand, the people who shouted the most against trying to implement this were the customers who advertised ‘bullet proof’ hosting (i.e. The Problem).

CallMeLateForSupper September 28, 2016 10:53 AM

At 1148 EDT -> KrebsOnSecurity[dot]com:
“403 Forbidden”

And the DDoS saga continues?

Sancho_P September 28, 2016 3:32 PM

@Whack All The Moles
”… then it seems like you ought to be considered guilty per law.”

Only, those times are gone already.
Capitalism (responsibility, guilt for your sold product) ended with big business.
Hence my reference to the Bill G. saga: money makes law, law makes money.

Sad times?
No, good times!

Nowadays, whatever you buy, you don’t own it (see EULA /T&C).
You may have a license (not a right, see …) to use it; you are not allowed to look inside, let alone reverse engineer it (see …); you are not allowed to deactivate / block parts of it (for “security” reasons); you don’t know who actually controls it; while the vendor / producer / big B. enjoys impunity.
That’s the deal: You pay, you are p0wned.

Sure, they would like to treat the end user as guilty – but I doubt the powers will find servile judges in case their wives had to disconnect and trash their toaster or fridge, probably twice.
Our politicos are known for shortsighted decisions.
Yet only a fool would underestimate the power of thousands of angry housewives/men and district judges.
They will have to burn another straw man,
– or don’t move until someone else does.

For the IoT, our “future”, see:

tyr September 28, 2016 7:11 PM

@Sancho P.

At least we’ll see a new cliché for future mystery stories: the toaster did it, since few folks have a butler these days.

gordo September 28, 2016 7:59 PM

@ 65535

I don’t have an answer. As minds brighter than mine have commented on this thread, to paraphrase, when resilience eats itself….

Nevertheless, it may be interesting to see how this maker and their cohorts fare:

General Electric Company’s Greatest Hope Could Be Its Greatest Threat (GE)
GE stock could be at risk, as its vaunted Internet of Things is already attacking
By Dana Blankenhorn, InvestorPlace Contributor | Sep 26, 2016, 11:18 am EDT

“so far they’re only building models for what cyber security in this new age will look like.”

Grauhut September 29, 2016 2:20 AM

@War Geek: A DDoS is an anomaly, but it’s an anomaly that is mostly only measurable and calculable at the CPE or provider uplink port. On the backbone the numbers are foggy.

Are there provider routers with integrated “local or attack management systems” on a per-customer-port basis? Or CPEs?

node September 30, 2016 12:24 AM

The 3 largest ISPs in Australia are very fond of CDNs and have a majority of the user base. If a problem emerges with any critical part of the network then a majority of users immediately can’t access the internet in any location affected. With parts of the network currently being upgraded with a mix of technologies any problems are greatly exacerbated.

DNS over UDP is helpful when network problems affect DNS over TCP, which is a pretty common problem with the current state of the network. Many connections are routed through other states and the two major routes overseas are on opposite sides of the country. The network is majorly vulnerable not just at these two choke points, but is also afflicted by a lack of alternative routing paths from many remote areas. Most connections are mainly directed through single cables following the one major highway connecting communities, but some towns and cities do have another alternative cable or two connecting them to other communities which allows another route.

A more resilient network would need a greater number of major routing paths and co-operation between providers to freely share CDN resources. This would benefit providers and customers alike, bring down data costs, and make the network less vulnerable to attack and faults. Presently large parts of the network can be crippled easily with a well targeted attack or natural disaster.

A storm shut down power across South Australia yesterday, which has disrupted communications across the country. Most people in my town (and a very large number of other postcodes too) have been complaining of connection problems for weeks, and many locals have likely been unable to connect to anything for the last couple of days, with probably some more days to come, and probably only a handful of people with the knowledge to work around it. Unless content is hosted or mirrored locally in Western Australia, a large number of connections are routed through South Australia to Melbourne in Victoria, and then Sydney in New South Wales and the east coast overseas gateway, although there is now also an alternative route into Asia from Western Australia which was installed a few years ago.

A centralized network like Australia’s also lends itself to being monopolized by a small number of dominant players, which has increasingly driven up data and bandwidth costs, despite the huge outlay of capital for new infrastructure by taxpayers. Australia’s NBN network upgrade was last costed at AU$60 billion, yet much of this cost will have to be written off to avoid bankrupting many of the small providers operating in Australia. The NBN will provide less than a 50 Mbps FTTN or wireless connection for the vast majority of users upon completion, with some of the highest data costs in the world. Uploads are also included in Australian data charges on anything but the slowest connection plans.

clone September 30, 2016 1:26 AM

Not exactly value for money you would say?

I know your frustration. I used to monitor systems hosted by my internet provider and report systems that remained compromised and were actively scanning for other vulnerable systems to compromise. After an ever-increasing number of systems were left in their compromised state, I gave up. Systems were still infected with the same malware a year later, although the network admin had reportedly contacted the owners and told them to have their systems cleaned. The malware on these systems could generally be removed by the most basic of free products without trouble, but the systems remained connected.

Large Internet Service Providers obviously prefer raking in the money over suspending services to those who refuse to address compromised systems, clean or patch them, or take them offline.

CallMeLateForSupper September 30, 2016 10:39 AM

It could have been much worse for B.K.

French Internet hosting service OVH has been fighting a DDoS assault nearly twice the size of the Krebs attack, with traffic coming from as many as 145,000 webcams and DVRs at a time.

“OVH suffers 1.1Tbps DDoS attack”

These attacks leverage tens or hundreds of thousands of available little soldiers (“smart”, internet-connected devices). Previously I characterized this as “bloody brilliant”, quoting a line from a movie. Just as bloody brilliant, turning a “smart” device into a little soldier is possible because its owner is clueless or lazy or both.
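Dividing the reported numbers out shows how little each “little soldier” has to contribute (rough arithmetic on the reported figures only):

```python
# Per-bot arithmetic on the reported OVH numbers: ~1.1 Tbps total
# from ~145,000 compromised cameras and DVRs.

def per_device_mbps(total_gbps: float, devices: int) -> float:
    """Average upstream bandwidth each bot must contribute, in Mbps."""
    return total_gbps * 1000 / devices

print(f"{per_device_mbps(1100, 145_000):.1f} Mbps per device")  # ~7.6 Mbps
```

Under 8 Mbps of upstream per device is well within reach of a camera on an ordinary broadband line, which is why nobody notices their own gadget soldiering.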

A Nonny Bunny October 1, 2016 2:58 PM

Wouldn’t, in many cases, a distributed service be a possible solution to a distributed denial-of-service attack?
If blogs were cached and redistributed by a significant fraction of visitors, it’d be very hard to DDoS them: a) you’d have to identify all the redistributors, and b) divide your DDoS ‘bandwidth’ between them.
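The intuition can be put in numbers: if the attacker’s total bandwidth is split evenly across N independent mirrors, each able to absorb C, the service stays up once the per-mirror share falls below C. A toy model (ignoring the real problems of keeping mirrors synchronized and discoverable; the 620 Gbps figure is illustrative):

```python
# Toy survivability model for a mirrored site under DDoS: the attack's
# total bandwidth is divided across N mirrors, each with its own capacity.

def mirrors_survive(attack_gbps: float, mirrors: int, capacity_gbps: float) -> bool:
    """True if each mirror's share of the attack stays within its capacity."""
    return attack_gbps / mirrors < capacity_gbps

# An illustrative 620 Gbps attack vs. 1,000 volunteer mirrors at 1 Gbps each:
print(mirrors_survive(620, 1000, 1.0))  # True (0.62 Gbps per mirror)
# The same attack vs. a single origin on a 10 Gbps pipe:
print(mirrors_survive(620, 1, 10.0))    # False
```

Of course a smart attacker would concentrate on a few mirrors rather than spread evenly, so the real requirement is that mirrors are too numerous and anonymous to enumerate.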

CallMeLateForSupper October 2, 2016 7:43 AM

@A Nonny Bunny
“If blogs were cached and redistributed by a significant fraction of visitors, it’d be very hard to DDoS them.”

How could anyone get to a DDoS’d site to cache it? They couldn’t. The situation would become static during the DDoS: the “significant fraction of visitors” holding aging information, unable to cache new data while the site is DOWN.

A Nonny Bunny October 8, 2016 4:19 PM


How could anyone get to a DDos’d site to cache it?

As long as you have a central website as a single point of failure, that’s a problem. But surely it must be possible to decentralize a site entirely. With BitTorrent you can find a file using a distributed hash table without needing any central tracker. Something similar might be done for a website.

The situation would become static during DDoS

I wonder how hard it would be to modify something like BitTorrent to allow updates (securely). Though you could publish an updated version easily enough (and maybe announce it on something like a blockchain).

Sancho_P October 8, 2016 4:35 PM

@A Nonny Bunny

I think this is what is done by DDoS protection, in some form.
However, it keeps the Net busy, esp. if both sides have strong pipes,
holds the status until another (different) DDoS starts,
until everything suddenly breaks down / comes up / breaks down …
