New Rowhammer Technique

Rowhammer is an attack technique involving accessing — that’s “hammering” — rows of bits in memory, millions of times per second, with the intent of causing bits in neighboring rows to flip. This is a hardware fault-injection attack, and the result can be all sorts of mayhem.

Well, there is a new enhancement:

All previous Rowhammer attacks have hammered rows with uniform patterns, such as single-sided, double-sided, or n-sided. In all three cases, these “aggressor” rows — meaning those that cause bitflips in nearby “victim” rows — are accessed the same number of times.

Research published on Monday presented a new Rowhammer technique. It uses non-uniform patterns that access two or more aggressor rows with different frequencies. The result: all 40 of the randomly selected DIMMs in a test pool experienced bitflips, up from 13 out of 42 chips tested in previous work from the same researchers.


The non-uniform patterns work against Target Row Refresh. Abbreviated as TRR, the mitigation works differently from vendor to vendor but generally tracks the number of times a row is accessed and recharges neighboring victim rows when there are signs of abuse. The neutering of this defense puts further pressure on chipmakers to mitigate a class of attacks that many people thought more recent types of memory chips were resistant to.
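As a rough intuition for the difference between the two kinds of access schedule, here is a toy sketch in Python. The row addresses, weights, and counts are illustrative only, not taken from the paper:

```python
# Toy sketch of uniform vs. non-uniform hammering schedules.
# Row addresses, weights, and counts are made-up illustrative values.

def uniform_pattern(aggressors, total_accesses):
    """Classic n-sided hammering: every aggressor row is hit equally often."""
    per_row = total_accesses // len(aggressors)
    return {row: per_row for row in aggressors}

def non_uniform_pattern(aggressors, weights, total_accesses):
    """Frequency-weighted hammering: each aggressor row gets a different
    share of the accesses, undermining a TRR scheme that assumes abusive
    rows are all hit roughly the same number of times."""
    total_weight = sum(weights)
    return {row: total_accesses * w // total_weight
            for row, w in zip(aggressors, weights)}

uniform_pattern([0x100, 0x102], 1_000_000)
non_uniform_pattern([0x100, 0x102, 0x104], [5, 3, 1], 900_000)
```

With weights `[5, 3, 1]`, the first aggressor row is hammered five times as often as the third, which is the kind of skew the new research exploits.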

Posted on November 19, 2021 at 8:31 AM • 20 Comments


Memoria Callas November 19, 2021 10:35 AM

Kind of like making glass goblets sing by running dampened fingers around the rims

Or like shattering goblets by singing a resonant note to them 🙂

Clive Robinson November 19, 2021 12:28 PM

@ ALL,

The old technique hit everything at the same time, so the energy came from all sides at the same rate. Thus the energy gradient is uniform (and fairly predictable to both attacker and defender).

If you think of each pulse as being of constant energy, then a different frequency gives you a different energy delivery rate.

This enables you to create an energy gradient at will. Which has significant advantages for the attacker: although it is predictable to the attacker, it’s not predictable to the defender.

Think of it, if you will, as a board on which sand is dropped, each grain representing a single RowHammer hit. If you use a constant pattern, then the sand builds up into a more or less flat landscape… If you, as the attacker, use different frequencies, you are in effect “sculpting the landscape”, and so, as with rain run-off, can tailor where the energy flows and at what rate…
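The sand-on-a-board analogy can be sketched in a few lines of Python. This is a toy model; the leak fraction and hit counts are made-up numbers, not real device physics:

```python
# Toy model of the sand analogy: each hammer hit deposits a unit of
# "charge" on its row, with a fraction leaking to each neighbouring row.
# The leak fraction and hit counts are illustrative, not measured values.

def charge_landscape(n_rows, hits_per_row, leak=0.2):
    charge = [0.0] * n_rows
    for row, hits in hits_per_row.items():
        charge[row] += hits * (1 - 2 * leak)   # what stays on the hammered row
        if row > 0:
            charge[row - 1] += hits * leak     # leak to the left neighbour
        if row + 1 < n_rows:
            charge[row + 1] += hits * leak     # leak to the right neighbour
    return charge

flat = charge_landscape(8, {2: 100, 4: 100})      # uniform: a flat plateau
sculpted = charge_landscape(8, {2: 100, 4: 300})  # non-uniform: a peak at row 4
```

With equal hit counts the two aggressor rows end up at the same level; with different counts the attacker sculpts a peak wherever they want it.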

Sumadelet November 19, 2021 12:33 PM

I used to think that ECC would be a simple mitigation (and that this might encourage more people to use ECC), but it turns out it is not a 100% cure.

Vrije Universiteit Amsterdam, VUSec
ECCploit: ECC Memory Vulnerable to Rowhammer Attacks After All


I would still like ECC to be more prevalent, though.

Ted November 19, 2021 1:18 PM


Do you know how this research is related to similar research done at the Swiss university ETH Zurich?

(It looks like many researchers in this group are also affiliated with ETH Zurich.)

This second group provided a 1:30 minute video on their technical paper titled: “A Deeper Look Into RowHammer’s Sensitivities”

Clive Robinson November 19, 2021 5:20 PM


What sort of ECC?

Back in the 1970s-80s, ECC was occasionally done as a very separate part of the memory subsystem, often split between the CPU board and the memory board, and only in minis and mainframes that used 16-bit or wider memory. The 8-bit-wide memory used with single-chip CPUs very rarely had anything. Even parity checking was rare, except in certain remote telemetry, aerospace, space, and similar applications where cosmic-ray bit flipping was another known hazard.

In the late 80s, memory modules on 16/32-bit “PCs” started to become common, and bit flipping was known to be a problem. Though the industry kept its mouth closed about the actual inherent unreliability of DRAM, encouraging engineers to think it was cosmic radiation or poor circuit implementation.

So ECC started to appear on memory modules.

But space and, more importantly, “track distance” became an overriding factor, so ECC got moved “on chip”, which from the point of view of RowHammer and similar attacks was the equivalent of putting all the eggs in one string bag…

If you want to know more about why ECC on chip is not such a good idea, then have a look at,

Dave November 20, 2021 1:01 AM

I write code for high-radiation environments, which means random bitflips are the order of the day. Most of the time no-one even notices, but this is one of the few times when I get to sit back and look smug.

The big picture though is that it’s never going to affect rad-hard code because it relies on the attacker running their attack code unfettered on your hardware. Yet another case of “if you want to be secure, don’t use the cloud”.

Ted November 20, 2021 6:49 AM


Re: Writing Radiation-Hardened Code

Is that a specialized skill? Is it much more time-consuming?

Clive Robinson November 20, 2021 6:58 AM

@ Dave,

Yet another case of “if you want to be secure, don’t use the cloud”.

Whilst I would be one of the first to agree with you on that, there is another issue that unfortunately occupies much of my thinking and that is communications and the failings of encryption and signing of code.

Historically there is the old joke about secure computers in that,

“The only truly secure computer is one with no leads connected, embedded in a huge concrete block, and dropped to the unreachable bottom of the Marianas Trench.”

But that was then; the bottom of the Marianas Trench is now reachable…

“So what is considered secure today is unlikely to be so in the near future, due to the advancement of knowledge.”

A lesson all who are interested in security should take to heart.

But the real unstated reason that computer was secure was not where it physically and “unreachably” was, but that it was also completely useless, as it had no “communications”.

No matter how we slice or dice it, all computers that have utility have to communicate in some way, and that is both a fundamental and primary security issue.

Even apparently “output only” computers have inputs, be it power, signals, or both. They also often use communications protocols on the outputs that have “error control” of some form. Which is actually an input signal in its own right, one that can reach right back through systems to the other inputs[1] and so backwards into what are considered “isolated”, “protected”, or even “gapped” systems.

But there is another need for communications one which nags at satellite payload developers all the time. Which is the issue of,

“All code has defects.”

The simple fact is that it does not matter at what level of the computing stack you are: there will be unspecified states that can cause issues[2].

Most often the problem originates at the specification or standards level, but becomes an issue at the protocol or lower implementation level. There are only two basic things you can do about it:

1, Live with the problem by mitigation.
2, Upgrade the software to resolve the problem.

For obvious reasons the “live with it” (mitigate at my end) option is undesirable, so the “upgrade” by “patch or replace” is preferred. Which means you have to have communications by which a user unknown at design time can access the system to run their own software.

Of “patch” or “replace”, patching carries less “system risk”, which is not the same as “security risk”.

Unfortunately, to allow patching you end up with a system that can allow a RowHammer-style attack (though until recent times unlikely due to bandwidth restrictions, something that is unfortunately changing as shared high-bandwidth comms become the norm).

So to reduce the chance that a future unknown user is malicious, we have two crypto solutions,

1, Encrypted files.
2, Signed files.

As has recently been seen yet again with Intel CPU chips, both of these can be bypassed if one or more “secret keys” embedded in the device come into a user’s hands.

Clearly neither encryption nor code signing, alone or together, is sufficiently secure.

Not something you want to really think about with “Military Platforms in Space” going increasingly kinetic[3].

[1] For example, if you jam a network connection and input from the keyboard keeps happening, the computer keeps trying to output data to the network. But as it cannot be sent, the network subsystem just buffers it. However, when the network output buffer is full, an error signal goes back, and unless trapped and dealt with, other buffers keep filling until eventually the computer either drops all further input or tells the keyboard to stop sending. The same applies if the input comes from an old serial-line TTY terminal (which nearly half a century ago I used to bash away on quite literally, as they were electro-mechanical “Keyboard Send and Receive” (KSR) or punch-tape “Automatic Send and Receive” (ASR) teletype devices).

[2] Down at the lower levels of the computing stack you have logic gates. Amongst these are data storage elements called “latches”, which suffer from “soft errors” caused by metastability at their inputs. Look up the NOR or NAND gate “Set Reset” (SR) latches, or clocked D-types, and “metastability”.

[3] The recent Russian destruction of one of their old satellites that caused issues for the International Space Station and presumably the Chinese Tiangong space station currently undergoing construction. Some assume Russia did it as a warning to both the Chinese and US about space use. The US Military are finally sort of acknowledging this danger and are now holding an annual “Hack-a-Sat” competition,

Clive Robinson November 20, 2021 7:10 AM

@ Ted,

Is that a specialized skill? Is it much more time-consuming?

If you are doing it properly the answers are “Yes and Yes”.

One approach being used is “fault tolerance”, for example running multiple systems in parallel.

Normally we would think of this as three or more entirely separate systems (as NASA used to talk about).

However, these days there are “on chip” solutions. Cosmic rays are not very big and their effects can be limited to very small areas. Thus multi-CPU, multi-ROM, multi-RAM designs can be, and are, available as “System on a Chip” (SoC) devices. Then there are FPGAs, which allow you to have parallel systems used in what are “voting” circuits.
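The “voting” idea mentioned above can be sketched as a bitwise two-of-three majority function. This is a minimal illustration of triple modular redundancy in Python; real rad-hard voters are implemented in hardware:

```python
# Triple modular redundancy (TMR) sketch: a per-bit 2-of-3 majority vote
# across three redundant copies masks a single-copy bitflip.

def tmr_vote(a, b, c):
    """Per-bit majority of three redundant words."""
    return (a & b) | (a & c) | (b & c)

stored = 0b1011_0010
upset = stored ^ 0b0000_1000          # a cosmic ray flips one bit in one copy
recovered = tmr_vote(stored, upset, stored)  # the flip is outvoted
```

A flip in any single copy is outvoted by the other two; only a simultaneous upset in two copies at the same bit position gets through, which is why the redundant units are kept physically separate.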

But you need to build up appropriate skill sets in other areas as well.

Ted November 20, 2021 7:30 AM


Cosmic rays are not very big and their effects can be limited to very small areas.

I guess that is certainly a type of radiation isn’t it? For some reason I had been thinking more along the lines of radiation from medical equipment.

I heard something about how people were kind of freaked out about microwaves and cancer. But apparently microwaves are non-ionizing radiation. Where things like UV light, X-rays, and gamma rays are ionizing.

If I remember correctly, ionizing radiation can break chemical bonds, and can lead to things like cancer down the road. But microwaves just cause polar molecules, like water, to spin very fast, which generates heat.

I wonder if different types, or levels, of radiation require different coding approaches.

Have you ever tried to scarf down something straight out of the microwave after a long heat time? Good luck tasting anything else for about a week.

Clive Robinson November 20, 2021 6:08 PM

@ Ted,

Where things like UV light, X-rays, and gamma rays are ionizing.

Not all UV or sub-atomic particles are considered ionizing. But yes, the higher the frequency, the more damage is done.

The boundary between ionizing and non-ionizing radiation is broad and ill-defined, and it corresponds to the mid part of the ultraviolet range. The reason is that the electrons of different molecules and atoms get knocked off (ionized) at different energies, generally between 10-30 eV.

Very roughly, when you look at the periodic table the first ionization energy increases as you move from left to right, and decreases as you move from top to bottom. That is, the removal of an electron from an isolated atom is hardest for the noble gases and easiest for the alkali metals.

As an aside, the various ionization energies correspond to the strengths of the chemical bonds between atoms in molecules. Whilst it is easy to give some rules of thumb, such as how positively charged the nucleus is, the number of electron shells, or the electrons in a shell, getting accurate values is somewhat more complicated.

It is easier to calculate the energy of a photon which increases with frequency or smaller wavelength (f = C/λ) and corresponds to energy according to,

E=hC/λ or E=hf

C is the speed of light
E is Energy
f is the frequency
h is Planck’s constant
λ is wavelength.

To convert energy in electron volts E(eV) to energy in joules E(J), multiply by the elementary charge in coulombs:

E(J) = E(eV) × 1.6022e-19

Whilst 10 eV might appear a tiny amount of energy, you have to remember that is per photon: effectively “very small”, but nonetheless quantized[1].
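A quick numeric check of these formulas, with constants rounded to four significant figures:

```python
# Check E = h*c/lambda and the eV conversion above (CODATA values, rounded).
h = 6.626e-34    # Planck's constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.6022e-19  # joules per electron volt

def photon_energy_eV(wavelength_m):
    """Photon energy E = h*c/lambda, converted from joules to electron volts."""
    return h * c / wavelength_m / eV

# Mid-UV light at 100 nm comes out at roughly 12.4 eV per photon,
# squarely in the 10-30 eV band where ionization begins.
uv = photon_energy_eV(100e-9)
```

Visible light at 500 nm, by contrast, carries only about 2.5 eV per photon, which is why it does not ionize.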

Normally the breaking of a chemical bond does not give rise to a photon of ionizing-radiation energies, unlike the breaking of nuclear bonds.

[1] As these recent papers both note we rarely talk about the size of a photon…

You might also note that they fundamentally disagree with each other… So caution is advised (that is they are probably both wrong in some way).

Ted November 20, 2021 7:19 PM


Re: Ionizing radiation

Sir, you should teach. My summary responses to your thoughts: good point, interesting, yes true, and hmm yes.

Just to add to what you said, there are some energy-related equations that make you use joules instead of kilojoules. And it’s like, really? On top of all this you want us to multiply everything by 1000?! Argh! I think it was those darn physicists who preferred joules and made it the SI unit.

As you mentioned, “first ionization energy,” or the energy required to pop off an atom’s outermost electron, starts low on the left side of the periodic table and increases as you go right.

One of the most memorable images I have seen is of a piece of sodium sculpted into the shape of a bath tub duckie. Talk about a ghastly toy!

I found this from a Nature article:

Computer simulations revealed that sodium atoms at the surface of a small cluster each lose an electron within picoseconds. The positively charged ions rapidly repel each other, causing the explosion, while the protruding metal spikes generate new surface area that drives the reaction.

I’m sure you already know this, so then in essence, I am very much agreeing with you.

Thank you for all your very thought-provoking research. I have not done much study of photons. I hope, in the case of the researchers, that two wrongs do make a right 🙂

6449-225 November 23, 2021 8:50 AM

Faculty of Can You Hear the Shape Of, Department of Memory

Reference [1]

Course: Teh Internetz is a Series of Tubes, Part Deux
Professor: T. Stevens

If you have series of connecting tubes of the right cross-sectional shape (perhaps not circular but oval), playing a somewhat forceful jet of water against them (from the outside) and running the jet along the length of each segment of tubing will result in a rough musical ringing.

Can you figure out the configuration of the tubes just from the sound, that is, can you hear the shape of a series of tubes ?

Likewise, by refining the Rowhammer method, can you cause any desired aggregate of memory states to occur ?


Clive Robinson November 23, 2021 5:00 PM

@ 6449-225, ALL,

Likewise, by refining the Rowhammer method, can you cause any desired aggregate of memory states to occur ?

Short answer:

In theory yes, with some limitations; in practice we are getting there slowly, but the current state of play favours the attacker, not the defender, and it’s difficult to see how it could be otherwise.

Long answer,

What you quote being described by the pipes and jets of water is not really any different from X-ray crystallography, if you think about it. But is it actually relevant?

Read on 😉

RowHammer is about moving charge across the top of, or tunneling through, an insulator. The old way of doing it was kind of like “raising the tide to float all boats” in a chosen locality.

The downside of the old way is not just that it is relatively easy to detect; it’s also relatively easy to predict. Because the charge build-up is relatively “flat”, it is relatively easy to counter: measure two points and assume that, if they are equal, the points in between are in effect at the same amplitude/height. Thus you need very few points of measurement.

This new method is more like dropping individual grains of sand such that you build up different amplitudes/heights in different places. Thus the charge amplitude forms an approximation to a series of normal growth-distribution curves, and ends up like a map of foothills, with height measurements not as contour lines but as individual points at the grid intersections.

Whilst not a “one way function”, it is certainly a lot easier to calculate in one direction (the effect at every point from the cause) and a heck of a lot harder in the other direction (the cause from the effect at very, very few points), due to the limited ability of the defender to measure[1] and the relative ease of calculation for the attacker (not that it is that easy).

The problem for manufacturers is that whatever they do to counteract RowHammer “robs real estate”, so the overall useful bit density per given area on a chip drops…

So the game so far has been to do just enough to stop one attack method, but not actually solve the underlying problem and stop all attacks, no matter how novel they may be.

Expect to hear more in the future; this is likely to turn into one of those “gifts that keep on giving”…

[1] Think of it like trying to map a large area of foothills in the dark: your ability to extract any meaning depends on the smoothing function of the terrain and the number of points of measurement. If the points are too few, and thus too far apart, you miss a lot of edges and valleys. From basic signal analysis, if you have three points in a line you have two measurement periods, and the maximum frequency you can measure is one cycle across those two periods. To get the true amplitude, the phase of the highest frequency has to be such that the maximum and minimum align with the “sampling” measurement points. If there is a phase shift, then the sampling reads lower points on the curve, reaching zero at one phase offset. To avoid this you use IQ quadrature phase measurement at each point, and the amplitude is given as A = sqrt(I^2 + Q^2). It gets way more complicated as you move from one dimension to two or more.
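The quadrature recovery described in [1] can be sanity-checked numerically. This is a one-dimensional toy with ideal, noise-free samples; the variable names are illustrative:

```python
import math

# A single in-phase sample of a sinusoid under-reads when the phase is
# shifted, but the IQ (in-phase/quadrature) pair recovers the true
# amplitude as A = sqrt(I^2 + Q^2), regardless of phase offset.

def iq_amplitude(amplitude, phase):
    i = amplitude * math.cos(phase)  # in-phase sample
    q = amplitude * math.sin(phase)  # quadrature sample
    return math.sqrt(i ** 2 + q ** 2)

worst_case_i = 1.0 * math.cos(math.pi / 2)   # lone I sample reads ~0.0
recovered = iq_amplitude(1.0, math.pi / 2)   # IQ pair recovers 1.0
```

At a 90-degree phase offset the lone in-phase sample reads essentially zero, while the IQ pair still returns the true unit amplitude.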

6449-225 November 25, 2021 12:50 PM

@ Clive Robinson

… is it actualy relevant?…

Perhaps it isn’t. The thought was perhaps the Rowhammer material context is analogous to an input or forcing signal being fed into a transducer, which responds and reacts to produce an output state and signal. Then if the situation is approximately linear, one could analyze the input, transducer, and output using frequency analysis (maybe temporal and spatial). As you point out, aliasing might be a problem. But probably it’s all too nonlinear for this to work that well.

ResearcherZero November 28, 2021 4:17 AM

@Clive Robinson

ECC only makes Rowhammer exploitation more difficult. There is a lot of marketing spin around modules, such as moving ECC on-chip, as you mentioned. Marketing material in general doesn’t make features very clear to consumers.

It will remain a viable attack for a long time as many devices cannot or will not be upgraded. The researchers had a 100% success rate on all the modules they tested.

Clive Robinson November 28, 2021 4:59 AM

@ ResearcherZero,

ECC only makes Rowhammer exploitation more difficult.

With the sort of ECC they are currently using, yes, which supports your conclusion:

It will remain a viable attack for a long time as many devices cannot or will not be upgraded.

However, there are other forms of what are, strictly speaking, “Forward Error Correction Codes” (FEC/FECC).

But they are either slow or have other disadvantages in current computer architectures.
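As one concrete example of a simple FEC scheme, a (7,4) Hamming code corrects any single bitflip in a 7-bit codeword. This is a toy illustration of the idea, not how DRAM ECC is actually organised:

```python
# (7,4) Hamming code: 4 data bits plus 3 parity bits, correcting any
# single flipped bit. Bit positions are 1-based in the classic layout
# [p1, p2, d0, p3, d1, d2, d3], with parity bits at positions 1, 2, 4.

def hamming74_encode(d0, d1, d2, d3):
    """Four data bits -> 7-bit codeword."""
    p1 = d0 ^ d1 ^ d3            # covers positions 1, 3, 5, 7
    p2 = d0 ^ d2 ^ d3            # covers positions 2, 3, 6, 7
    p3 = d1 ^ d2 ^ d3            # covers positions 4, 5, 6, 7
    return [p1, p2, d0, p3, d1, d2, d3]

def hamming74_correct(codeword):
    """Recompute the parity checks; the syndrome is the 1-based position
    of a single flipped bit (0 means no error). Flip it back."""
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1
    return c

word = hamming74_encode(1, 0, 1, 1)
hammered = list(word)
hammered[4] ^= 1                     # a single "RowHammer" style flip
fixed = hamming74_correct(hammered)  # back to the original codeword
```

The trade-off Clive mentions is visible even here: three extra bits per four data bits, plus the latency of recomputing the syndrome on every read.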

The question is thus: “What changes are acceptable for significantly improved security?”

For instance, we currently have multiple CPU cores on a single piece of silicon, but they share common buses to core RAM, so an attack on one CPU core can affect another CPU core’s execution in several ways, RowHammer being just one of them.

Think about what advantage you would get by having RAM split into multiple entirely separate parts. Whilst RowHammer would work within one piece of silicon, it would not be able to cross from one part to another.

Some years ago now, long before RowHammer got a name, I was pointing out the advantages of using small stripped-down CPUs, each with its own memory integral to the CPU; that is, think of all memory as being faster than L1 cache and only slightly slower than registers…

If each such unit ran only one process, then the likes of “reach around” attacks like RowHammer would still exist, but be entirely pointless to use, because your process, and only your process, would have access to the RAM you could reach with them…

ResearcherZero November 29, 2021 11:32 PM

@Clive Robinson

Better silicon design would increase security significantly. The adjustments it would take are both possible, and commercially viable. There would also be a significant government and enterprise market to take advantage of.

