Studying Network Incident Response Teams Using Anthropological Methods

This is an interesting paper: “An Anthropological Approach to Studying CSIRTs.” A researcher spent 15 months at a university’s SOC conducting “ethnographic fieldwork.” Right now it’s more about the methodology than any results, but I’ll bet the results will be fascinating.

And here’s some information about the project.

Posted on July 14, 2014 at 6:16 AM • 20 Comments

Comments

Bruce Schneier July 14, 2014 9:03 AM

“Again with the sensationalised headlines. I hereby cancel my subscription.”

I’ll be sure to give you a full refund.

Not Chuckles July 14, 2014 9:26 AM

Um, Chuckles, is you jokin'? B/c the headline is a description of the paper it's pointing to..

NobodySpecial July 14, 2014 10:01 AM

@Not chuckles – and here we have the major difficulty with an American security agency attempting to produce a system that can automatically detect sarcasm and irony.

Clive Robinson July 14, 2014 10:41 AM

On reading the information about the project (not the paper) I was struck with the feeling it was describing a 20,000ft view of Soft Systems Analysis…

paul July 14, 2014 10:59 AM

I wonder how much of the final product will be classified, since the intro says they’re going to try and use anthropological techniques to codify implicit knowledge (and then, one supposes, build tools that can work with what was formerly implicit).

This kind of anthropology, which started (more or less) with Suchman’s work at PARC back in the 70s, is a useful antidote to all the people who build “elegant” specs and architectures that ignore all the noise sources in human interaction. But it has a mixed record of results, depending in part on how willing the PHBs are to listen.

AlanS July 14, 2014 11:59 AM

The authors state: “The term ‘tacit knowledge’ means knowledge that cannot easily be put into words.” And they argue that the fieldwork allows them to make explicit what is implicit. Read Harry Collins (e.g. the TEA laser study in Changing Order) on the futility of trying to explicitly ‘represent’ tacit knowledge (i.e. encode it in rules, procedures, representations, etc. for the purposes of knowledge transfer) and Wittgenstein on language. I’m not sure the authors fully understand the significance of communities of practice.

There’s not much that is specifically anthropological or novel about this. Qualitative social scientists (not just anthropologists), philosophers and historians have been pondering tacit knowledge in the sciences and technology since at least the late 1960s. You’ll find extensive discussions of tacit knowledge in various schools of science studies that originated in the mid-1960s in the UK (e.g. the Edinburgh School and Harry Collins at the Bath School). And they themselves point further back to Kuhn, Polanyi, Fleck and others. Much of this work was concerned with communities of practice and insights taken from Wittgensteinian philosophy (see, for example, Barnes’ Wittgensteinian reading of T.S. Kuhn). Others associated with Edinburgh include Andy Pickering, Donald MacKenzie, Steve Shapin and Brian Wynne. In the US, individuals whose work intersects with the UK work in some way, and who have some common intellectual points of reference, include Lucy Suchman (mentioned by the authors and now in the UK), Jean Lave and Michael Lynch.

Daniel Merigoux July 14, 2014 2:58 PM

Thanks for tracing back the history of tacit knowledge, AlanS. I’ll add some context. Tacit knowledge appeared a decade or two after the failure of Logical Positivism to find a unified language for expressing every scientific truth (Bertrand Russell, the Vienna Circle…). The ideal of a pure rational language to express the totality of pure reason dates back to Aristotle’s creation of Logic and to the influence of the Pythagorean (and Platonist) religious and secret conception of mathematics.

In the ’70s, experimental studies in the social sciences showed that purely rational communication is, precisely, an ideal, not a fact (even “purely” mechanical communication suffers from some sort of interference), since human communication is generally driven by symbols of social authority. Unity of language is (at least partially) constructed, not given.
Contrary to many so-called post-modernists, I do not use this fact to diminish rationality, but to assert that everyone is responsible for their use of it, and should always work to improve objectivity.
The social scientists quoted by AlanS (I should add the French sociologist Pierre Bourdieu (died in 2002), and not only because he is the 2nd most cited author since 1998) showed that rationality and unity of language do not fall from Heaven, together with an unambiguous operator manual, like a destiny (not sciences, indeed, but many religions claim that).
Uses of rationality and language are social, not predetermined or ready-made; that is why it is legitimate to use the social sciences to study them scientifically. This applies to computing, programming and, obviously, security.
Just to mention recent whistle-blowers: they generally disagree with some use of a security technique, not with security techniques in general or in themselves, nor with the rational necessity of security as such. Whistle-blowers question the (lack of) rationality of a certain use, and they wouldn’t do it if reason and language were fixed once and for all.
There is not a single small, mechanical task that is not linked to some social use. The systematic denial of use, then of choice, then of responsibility, then of plurality, then of expression, then of diversity, then of freedom, is generally the very trademark of totalitarianisms.
And, as a social and language scientist, among the many noteworthy qualities I see in Bruce Schneier is his very acute social consciousness, a form of rationality rare in the tech milieu.

Curious July 15, 2014 4:59 AM

At the risk of having misunderstood the linked article, which I only glanced at, I balked at the notion of “tacit knowledge”.

I am inclined to think that embracing the notion of ‘tacit knowledge’ as being “knowledge” is the wrong way to go about things. Surely there must be some other way to try to explain cultural bias or some such.

I find it appalling that anyone, in our time, might think of ‘a priori knowledge’ as being a real thing, or even something one could understand as ‘knowledge’. To me, ‘tacit knowledge’ sounds a lot like insisting on there being some kind of ‘a priori knowledge’.

Having taken a quick look at this, I am inclined to say that it would not be so much ‘knowledge’ as ‘hearsay’, or more to the point, bs, given that “knowledge conversion” is the modus operandi.

Footnote 11(about “tacit knowledge”) refers to this downloadable paper (pdf): https://ai.wu.ac.at/~kaiser/birgit/Nonaka-Papers/tacit-knowledge-and-knowledge-conversion-2009.pdf

“(…)(1) tacit and explicit knowledge can be conceptually distinguished along a continuum.”

“(…)(2) knowledge conversion explains, theoretically and empirically, the interaction between tacit and explicit knowledge.”

Both points raise a red flag with me, and sound to me a lot like saying “I have an explanation for my other explanation”; it would be as bad as wanting to make a point about wanting to make some other point.

I am no professor of either philosophy or computer security, but I want to say that anyone hearing about Ludwig Josef Johann Wittgenstein (1889–1951), as mentioned earlier, might as well get to hear that there was the younger Wittgenstein and “the later Wittgenstein”, because those personas are supposedly very different (‘logical positivism/analytic philosophy’ vs., hm, embracing uncertainty or some such).

Curious July 15, 2014 5:26 AM

Btw, I am something of an agnostic (not to be confused with ‘agnosticism’). Just wanted you people to know. 🙂

John Smith July 15, 2014 11:53 AM

Admittedly my knowledge of British operational nomenclature is liable to be out of date by now, but does anyone else sniff either a JDF or Not Invented Here about this list?

Myself July 15, 2014 1:06 PM

@Curious, I do agree with you:
I find it appalling that anyone, in our time, might think of ‘a priori knowledge’ as being a real thing, or even something one could understand as ‘knowledge’. To me, ‘tacit knowledge’ sounds a lot like insisting on there being some kind of ‘a priori knowledge’.

Tacit and a priori knowledge are a conceptual couple. If one term is refuted, then the concepts it was built from may also be questioned.
I tend to think that a priori and tacit belong to a series of conceptual couples linked to an older opposition: pure versus profane knowledge.
Note that Rational and Divine knowledge may not be incompatible at all to many people, including scientists (and are still not for many Platonists, even if to me they are!).
Since this opposition is religious, it can be absolute.
Then you can have an absolute separation between a priori knowledge (which does not rely on the particular experience of some mortal, hence profane, being) and tacit knowledge.
To conceive knowledge as a continuum, as you suggest, breaks with this original pure/impure or profane/sacred absolutely separated couple, and affirms that the two are always related to some degree. And if there are no a priori languages, then formal languages are always linked to some profane mortal who uses them, who can be contradicted and is liable for his language acts (i.e., formal languages are not some pre-Babelian divine creation).
So, the main point I see here is that the use of computational languages can’t be attributed absolutely and exclusively to a machine, but always, to some degree, to some responsible human, with whom one can disagree. Disagree and become a whistle-blower if the use is illegal, albeit done under the cover of official power. One can be forced by official authority to do illegal things and has the right to refuse (war crimes in some places exist on that ground).
Then the question of a priori knowledge seems obvious to you, but it is not obvious at all. Google, for instance, has been sued many times because of the results of its search engine rankings. The company simply alleged that those results were (absolutely) objective. It’s like saying that its algorithm exists a priori. It is presented as pure in itself, but it is its use by profanes that profanes it. But it is proven that Google itself uses its algorithm to influence minds, isn’t it? Is this licit?
http://www.npr.org/blogs/alltechconsidered/2014/07/09/330003058/in-google-newsroom-brazil-defeat-is-not-a-headline
Bruce deals with, say, IT security. He may have chosen to specialize in that field because he became aware that it is central to understanding present society and acting in it. But does it mean he uses IT security illegally? This is the point. There is no absolute determination, or a priori relationship, between the person who produces a tool, the tool itself, and the way it is used. It always depends on some choice (which may depend on possibilities, like the techniques or ideas in use, characteristic of the social and historical context we live in).
Then, if there is an illicit relation, the relation has to be proven. For instance, you should not be guilty just because you used Tor if no one can prove you made an illegal use of it, unless you ban the use of the software itself. But then you have to prove it was built on purpose for illegal uses, like viruses.
In any case, proof cannot be given by any sort of a priori knowledge. And if true knowledge depends on use (the so-called second Wittgenstein, of the Philosophical Investigations, not the Tractatus one), then a priori knowledge and truth do not exist effectively.
(I do like Nonaka’s work. It seems Bourdieu’s work comports with Nonaka’s, but Bourdieu specifically offers to explain the social genesis of tacit knowledge, through the concept of habitus, and certainly shows that a priori/a posteriori is a false opposition, and refutes it, just as he claims some continuum between theory and practice, as theory itself is a social practice distinctive of scholars.)

gordo July 16, 2014 1:57 PM

Uncertainty being what it is, epistemology is a never-ending story with glimmers along the way. Test it out; innovate; collaborate.

Leaving aside the dances-with-wolves-like flavor and trust issues built into the research narrative, the research does show that the analysts were never asked what they could use to improve their work performance, or how that might be done. These folks are smart people, intimately involved with their day-to-day work, trying to connect the dots under hot-house conditions: no time to look back; we’re understaffed as it is.

If you’re able to run a forward-looking business you want your teams to ask: “What if we could…?” This is basic process improvement, if not R&D, and you don’t need ethnographers to do this work. The researchers have, however, revealed or resurfaced what appear to be core business model, if not industry ecosystem problems. Others have touched upon some of these earlier in this thread.

In addition to workplace employee participation and business model issues, another way to frame this may be as a kind of data control issue (and yeah, data control is power). “What’s my feed and what’s my need?” Again, others have touched upon this notion, this variation on the theme, earlier in this thread.

Put simply, the real skill gap may be one of industry leadership, due possibly to so-called market pressures, FUD, and misallocated resources, to name a few. I mean, loaded question that it is, and not to hijack the thread: should analysts even be seeing half of what comes across their desks?

gordo July 17, 2014 2:45 PM

A red-team lens on the knowledge-production question from Dr. Dan Geer and John Harthorne:

Quote:

While there is wisdom in that ancient English aphorism that “It is the poor carpenter what curses his tools,” in penetration testing the best carpenters make their own tools. These tools are part labor productivity for the penetration tester – and advancing labor productivity is ever the core supply-side defense of profit margins – and part complexity rigging. These bespoke tools are, if anything, the intellectual content of the penetration testing field, and the flux of these tools into the marketplace measures the stage of commoditized market development. Password crackers are a fine example – who would write one today now that first-rate crackers are available for so little money that all you are really paying for is a user interface? Network service inventory takers are just as fine an example – who would write one of these when the Internet is so full of them that over 10% of total Internet traffic is the sort of low-level scans these tools are built to do? In some sense, at the point at which an artist’s intuition moves beyond mere suspicion and s/he writes down (codes) what s/he knows in the form of a tool, the state of the art is advanced – not everywhere and at once, but in the sense that the future is already here, just unevenly distributed. It is the tools of the artist class that define the state of their art, even if they will not show them to you. (section 3 Time Line and Drivers, para. 3)

Geer, D., & Harthorne, J. (2002, December 12). Penetration testing: A duet. Keynote, Annual Computer Security Applications Conference (ACSAC), Las Vegas, Nevada.

http://geer.tinho.net/acsac.final.02xii.pdf

Buck July 17, 2014 9:20 PM

@gordo

Great quote! I love how that comment could have been equally on-topic if it instead was posted on the latest GCHQ thread… 🙂

AlanS July 18, 2014 5:47 PM

@Curious
The people I referenced were primarily concerned with the philosophy of the later Wittgenstein of the Philosophical Investigations, which deals with the problems of language and meaning.

Following up on Daniel’s comments on Bourdieu see Outline of a Theory of Practice, especially discussion of Wittgenstein on what it means to follow a rule in section 1 and habitus in section 2.

Add Hubert Dreyfus to the list. Dreyfus might be of more interest to some of the people on this blog because he deals with computers and artificial intelligence in some of his best-known works. He wrote a famous text called What Computers Can’t Do (1972). (See also Harry Collins’ book Artificial Experts: Social Knowledge and Intelligent Machines.)

The authors of the text cited by Bruce aren’t making “what is tacit explicit”. There is no pure representational medium that reflects “reality”; meaning exists in the context of social relations/practices. What they are doing is getting designers, developers, etc. to live in and experience the practical, lived world of others (the so-called ‘end-users’, if you will). The “making explicit” is merely their way of talking about the language they use as part of their own practical actions within a community of designers/developers, and that language itself is dependent on tacit understandings. For a better understanding of all this, see Gilbert Ryle’s “The Thinking of Thoughts: What is ‘Le Penseur’ Doing?” (Ryle was one of the British ‘ordinary language’ philosophers who took a Wittgensteinian approach to language). Ideas from the latter are taken up in Clifford Geertz’s “Thick Description: Toward an Interpretive Theory of Culture”.

In Dreyfus and Rabinow’s book on Michel Foucault, Beyond Structuralism and Hermeneutics, they make the interesting observation that the natural or ‘normal’ sciences can mostly bracket the social practices that make them possible (as described by T.S. Kuhn in The Structure of Scientific Revolutions–Kuhn discusses practical problem solutions, ‘exemplars’, around which communities form to develop and extend). Foucault (who, unlike Kuhn, never mentioned Wittgenstein, as far as I know) mostly writes about the ‘abnormal’ sciences, which try to bracket the social practices on which they depend but can’t because they are internal to the objects they study. One of Foucault’s insights is that human sciences came into existence as part of what were the original police states and developed along with the rise of liberal and neoliberal societies, knowledge and power being two sides of the same coin that arise from a configuration of practices. The social/human sciences, the sciences that surveil and theorize ‘man’, if you will, are both resistance and oppression, destruction and creation. As Foucault wrote: “Man appears in his ambiguous position as object of knowledge and as a subject that knows: enslaved sovereign, observed spectator”.
