Seeing Like a Data Structure

Technology was once simply a tool—and a small one at that—used to amplify human intent and capacity. That was the story of the industrial revolution: we could control nature and build large, complex human societies, and the more we employed and mastered technology, the better things got. We don’t live in that world anymore. Not only has technology become entangled with the structure of society, but we also can no longer see the world around us without it. The separation is gone, and the control we thought we once had has revealed itself as a mirage. We’re in a transitional period of history right now.

We tell ourselves stories about technology and society every day. Those stories shape how we use and develop new technologies as well as the new stories and uses that will come with them. They determine who’s in charge, who benefits, who’s to blame, and what it all means.

Some people are excited about the emerging technologies poised to remake society. Others are hoping for us to see this as folly and adopt simpler, less tech-centric ways of living. And many feel that they have little understanding of what is happening and even less say in the matter.

But we never had total control of technology in the first place, nor is there a pretechnological golden age to which we can return. The truth is that our data-centric way of seeing the world isn’t serving us well. We need to tease out a third option. To do so, we first need to understand how we got here.

Abstraction

When we describe something as being abstract, we mean it is removed from reality: conceptual and not material, distant and not close-up. What happens when we live in a world built entirely of the abstract? A world in which we no longer care for the messy, contingent, nebulous, raw, and ambiguous reality that has defined humanity for most of our species’ existence? We are about to find out, as we begin to see the world through the lens of data structures.

More than two decades ago, in his 1998 book Seeing Like a State, political scientist and anthropologist James C. Scott explored what happens when governments, or those with authority, attempt and fail to “improve the human condition.” Scott found that to understand societies and ecosystems, government functionaries and their private sector equivalents reduced messy reality to idealized, abstracted, and quantified simplifications that made the mess more “legible” to them. With this legibility came the ability to assess and then impose new social, economic, and ecological arrangements from the top down: communities of people became taxable citizens, a tangled and primeval forest became a monoculture timber operation, and a convoluted premodern town became a regimented industrial city.

This kind of abstraction was seemingly necessary to create the world around us today. It is difficult to manage a large organization, let alone an interconnected global society of eight billion people, without some sort of structure and means to abstract away details. Facility with abstraction, and abstract reasoning, has enabled all sorts of advancements in science, technology, engineering, and math—the very fields we are constantly being told are in highest demand.

The map is not the territory, and no amount of intellectualization will make it so. Creating abstract representations by necessity leaves out important detail and context. Inevitably, as Scott cataloged, the use of large-scale abstractions fails, leaving leadership bewildered at the failure and ordinary people worse off. But our desire to abstract never went away, and technology, as always, serves to amplify intent and capacity. Now, we manifest this abstraction with software. Computing supercharges the creative and practical use of abstraction. This is what life is like when we see the world the way a data structure sees the world. These are the same tricks Scott documented. What has changed is their speed and their ubiquity.

Each year, more students flock to computer science, a field with some of the highest-paying, most sought-after jobs. Nearly every university’s curriculum immediately introduces these students to data structures. A data structure enables a programmer to organize data—about anything—in a way that is easy to understand and act upon in software: to sort, search, structure, organize, or combine that data. A course in data structures is exercise after exercise in building and manipulating abstractions, ones that are typically entirely divorced from the messy, context-laden, real-world data that those data structures will be used to store.
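To make that concrete, here is a minimal sketch of the kind of exercise such a course assigns (the record type and field names are our own invention, not any particular curriculum's): a real place reduced to a handful of typed fields, with everything that doesn't fit a field simply dropped.

```python
from dataclasses import dataclass

# A hypothetical record type of the kind a data-structures course teaches.
# Everything about the real cafe that doesn't fit a field is discarded.
@dataclass
class Cafe:
    name: str
    rating: float        # a neighborhood's history, reduced to one number
    menu: list[str]
    reservations: bool

cafes = [
    Cafe("Corner Diner", 4.2, ["coffee", "eggs"], False),
    Cafe("Thai Food Near Me", 4.8, ["pad thai"], True),
]

# The classic exercises -- sort, search, filter -- all operate on the
# abstraction, never on the messy place itself.
best_first = sorted(cafes, key=lambda c: c.rating, reverse=True)
bookable = [c for c in cafes if c.reservations]
```

Every operation here is easy precisely because the hard part, deciding what to leave out, happened before the first line of code ran.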

As students graduate, most join companies that demand these technical skills—universally seen as essential to computer science work—and that see themselves as “changing the world,” often with even grander ambitions than the prosaic aims of the state functionaries Scott cataloged.

Engineers are transforming data about the world around us into data structures, at massive scale. They then employ another computer science trick: indirection. This is the ability to break apart some sociotechnical process—to “disrupt”—and replace each of the now-broken pieces with abstractions that can interface with each other. These data structures and abstractions are then combined in software to take action on this view of reality, action that increasingly has a human and societal dimension.

Here’s an example. When the pandemic started and delivery orders skyrocketed, technologists saw an opportunity: ghost kitchens. No longer did the restaurant a customer was ordering from actually have to exist. All that mattered was that the online menu catered to customer desires. Once ordered, the food had to somehow get sourced, cooked, and packaged, sight unseen, and be delivered to the customer’s doorstep. Now, lots of places we order food from are subject to this abstraction and indirection, more like Amazon’s supply chain than a local diner of yore.
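The indirection behind a ghost kitchen can be sketched in a few lines (all names here are invented for illustration): the customer-facing “restaurant” is just an interface, and any fulfiller can be swapped in behind it without the customer noticing.

```python
from typing import Protocol

class Fulfiller(Protocol):
    """Anything that can turn an order into food -- the customer never sees which."""
    def prepare(self, item: str) -> str: ...

class LocalDiner:
    def prepare(self, item: str) -> str:
        return f"{item} cooked by the diner you think you ordered from"

class GhostKitchen:
    def prepare(self, item: str) -> str:
        return f"{item} assembled in a shared commissary, sight unseen"

def place_order(item: str, backend: Fulfiller) -> str:
    # The menu is the abstraction; the backend is interchangeable.
    return backend.prepare(item)

meal = place_order("pad thai", GhostKitchen())
```

From the order screen, both backends look identical; that interchangeability is the whole point of the disruption.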

Facebook sees its users like a data structure when it classifies us into ever more precise interest categories, so as to better sell our attention to advertisers. Spotify sees us like a data structure when it tries to play music it thinks we will like based on the likes of people who like some of the same music we like. TikTok users often exclaim and complain that its recommendations seem to uncannily tap into latent desires and interests, leading many to perform psychological self-diagnosis using their “For You” page.
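Spotify's actual system is far more elaborate, but the underlying idea—recommend what taste-overlapping listeners liked—fits in a toy sketch (the listener data below is made up):

```python
# Toy collaborative filtering: score unheard songs by how many
# taste-overlapping listeners liked them. (Invented example data.)
likes = {
    "ana":  {"song_a", "song_b", "song_c"},
    "ben":  {"song_b", "song_c", "song_d"},
    "cora": {"song_a", "song_e"},
}

def recommend(user: str) -> list[str]:
    mine = likes[user]
    scores: dict[str, int] = {}
    for other, theirs in likes.items():
        if other == user:
            continue
        overlap = len(mine & theirs)  # how similar are our tastes?
        for song in theirs - mine:    # songs they like that I haven't heard
            scores[song] = scores.get(song, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)
```

Note that the listener, like the song, exists here only as a set of likes: the data structure is doing the “seeing.”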

Data structures dominate our world and are a byproduct of the rational, modern era, but they are ushering in an age of chaos. We need to embrace and tame, but not extinguish, this chaos for a better world.

Machines

Historian of technology Lewis Mumford once wrote that clocks enabled the division of time, and that enabled the regimentation of society that made the industrial revolution possible. This transformation, once fully underway around the world in the 20th century, fundamentally changed the story of society. It shifted us away from a society centered around interpersonal dynamics and communal interactions to one that was systematic and institutional.

We used to take the world in and interpret it through human eyes. The world before the industrial revolution wasn’t one in which ordinary people interacted with large-scale institutions or socio-technical systems. It wasn’t possible for someone to be a “company man” before there was a corporate way of doing things that in theory depended only on rules, laws, methods, and principles, not on the vicissitudes of human behavior.

Since the beginning of the industrial revolution, workers and the natural world have been subject to abstraction. This involves the use of abstract reason over social preferences. Knowledge about the world was no longer in our heads but out in the world. So we got newspapers, instruction manuals, bylaws, and academic journals. And we should be clear: this was largely an improvement. The era of systems—of modernity—was an improvement on what came before. It’s better for society to have laws rather than rulers, better for us to lean on science than superstition. We can’t and shouldn’t go back.

The tools of reason enabled the “high modernists,” as Scott calls them, to envision a world shaped entirely by reason. But such reason was and is never free of personal biases. It always neglects the messiness of reality and the tacit and contextual knowledge and skill that is needed to cope with that mess—and this is where trouble began to arise.

Workers were and are treated as cogs in the industrial machine, filling a narrow role on an assembly line or performing a service job within narrow parameters. Nature is treated as a resource for human use, a near-infinite storehouse of materials and dumping ground for wastes. Even something as essential and grounding as farming is seen as mechanistic—“a farm is a factory in a remote area,” as one John Deere executive put it—where plants are machines that take in nitrogen, phosphorus, and potassium and produce barely edible dent corn. There’s even a popular myth that the eminent business theorist W. Edwards Deming said, “If you can’t measure it, you can’t manage it”—lending credence to the measurement-and-optimization mindset.

The abstractions nearly write themselves. Though, leaving nothing to chance, entrepreneurs and their funders have flocked to translating these precomputing abstractions for the age of data structures. This is happening in both seen and unseen ways. Uber and Lyft turned people into driving robots that follow algorithmic guidance from one place to another. Amazon made warehouse workers perform precisely defined tasks in concert with literal robots. Agtech companies turn farms into data structures to then optimize the application of fertilizer, irrigation water, pesticides, and herbicides.

Beyond simply dividing time, computation has enabled the division of information. This is embodied at the lowest levels—bits and packets of data flowing through the Internet—all the way up to the highest levels, where many jobs can be described as a set of information-processing tasks performed by one worker only to be passed along to another. But this sort of computing—just worn-out optimization techniques dating back to last century’s Taylorism—didn’t move us into the unstable world we’re in today. It was a different sort of computation that did that.

Computation

Today we’re in an era where computing not only abstracts our world but also defines our inner worlds: the very thoughts we have and the ways we communicate.

It is this abstracted reality that is presented to us when we open a map on our phones, search the Internet, or “engage” on social media. It is this constructed reality that shapes the decisions businesses make every day, governs financial markets, influences geopolitical strategy, and increasingly controls more of how global society functions. It is this synthesized reality we consume when the answers we seek about the world are the entire writings of humanity put into a blender and strained out by a large language model.

The first wave of this crested a decade ago, only to crash down on us. Back then, search engines represented de facto reality, and “just Google it” became a saying: whatever the search engine said was right. But in some sense that was a holdover from the previous “modern” era, with a large data structure—the search engine’s vast database—replacing some classic source of truth such as the news media or the government. We all hoped that with enough data, and algorithms to sift through it all, we could have a simple technological abstraction over the messiness of reality, with a coherent answer no matter what the question was.

As we move toward the future promised by some technologists, our human-based view of the world and that of the data structures embedded in our computing devices will converge. Why bother to make a product at all when you can just algorithmically generate thousands of “ghost products” in the hope that someone will buy one?

Scott’s critiques of datafication remain. We are becoming increasingly aware that things are continuous spectra, not discrete categories. Writing about the failure of contact tracing apps, activist Cory Doctorow said, “We can’t add, subtract, multiply or divide qualitative elements, so we just incinerate them, sweep up the dubious quantitative residue that remains, do math on that, and simply assert that nothing important was lost in the process.”
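Doctorow's “incineration” is easy to demonstrate. In this sketch (the record and its fields are invented), datafication keeps only what can be averaged and silently discards the rest:

```python
# A contact report as a person might give it, versus what survives datafication.
report = {
    "duration_minutes": 12,
    "distance_meters": 1.5,
    "context": "outdoors, windy, both wearing masks, talking over a fence",
}

def datafy(r: dict) -> dict:
    # Keep only the fields we can do math on; the qualitative residue is dropped.
    return {k: v for k, v in r.items() if isinstance(v, (int, float))}

quantified = datafy(report)  # the "context" field -- the part that mattered -- is gone
```

Nothing in the quantified record signals that anything was lost; the loss is invisible to everything downstream.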

A pair of augmented-reality glasses may no longer let us see the world unfiltered by data structures but instead dissect and categorize every experience. A person on the street is no longer an individual but a member of a subcategory of “person” as determined by an AI classifier. A street is no longer the place you grew up but an abstraction from a map. And a local cafe is no longer a community hangout but a data structure containing a menu, a list of reservation options, and a hundred 5-star ratings.

Whether as glasses we look through or simply as screens on our devices, reality will be augmented by the data structures that categorize the world around us. Just as search engines caused the rise of SEO, where writers tweak their writing to attract search engines rather than human readers, this augmented reality will result in its own optimizations. We may be seeing the first signs of this with “Thai Food Near Me” as the literal name of businesses that are trying to satisfy the search function of mapping apps. Soon, even the physical form of things in the world may be determined in a coevolution with technology, where the form of things in the real world, even a dish at a restaurant, is chosen by what will look best when seen through our technological filters. It’s a data layer on top of reality. And the problems get worse when the relative importance of the data and reality flip. Is it more important to make a restaurant’s food taste better, or just more Instagrammable?

People are already working to exploit the data structures and algorithms that govern our world. Amazon drivers hang smartphones in trees to trick the system. Songwriters put their catchy choruses near the beginning to exploit Spotify’s algorithms. And podcasters deliberately mispronounce words because people comment with corrections and those comments count as “engagement” to the algorithms.

These hacks are fundamentally about the breakdown of “the system.” (We’re not suggesting that there’s a single system that governs society but rather a mess of systems that interact and overlap in our lives and are more or less relevant in particular contexts.) Systems work according to rules, either ones made consciously by people or, increasingly, automatically determined by data structures and algorithms. But systems of rules are, by their nature, trying to create a map for a messy territory, and rules will always have loopholes that can be taken advantage of.

The challenge with previous generations of tech—and the engineers who built them—is that they got stuck in the rigidity of systems. That’s what the company man was all about: the processes of the company, of Taylorism, of the McKinsey Way, of Scrum software development, of effective altruism, and of so many more. These all promised certainty, control, optimality, correctness, and sometimes even virtue: all just manifestations of a rigid and “rational” way of thinking and solving problems. Making systems work in this way at a societal level has failed. This is what Scott was saying in his seminal book. It was always doomed to fail.

Fissures

Seeing like a state was all about “legibility.” But the world is too difficult to make legible today. That’s where data structures, algorithms, and AI come in: humans no longer need to manually create legibility. Nor do humans even need to consume what is made legible. Raw data about the world can be fed into new AI tools to create a semblance of legibility. We can then have yet more automated tools act upon this supposed representation of the world, soon with real-life consequences. We’re now delegating the process of creating legibility to technology. Along the way, we’ve made it approximate: legible to someone or something else but not to the person who actually is in charge.

Right now, we’re living through the last attempts at making those systems work, with a perhaps naive hope and a newfound belief in AI and the data science that fuels it. The hope is that, because we have better algorithms that can help us make sense of even more data, we can somehow succeed at making systems work where past societies have failed. But it’s not going to work because it’s the mode of thought that doesn’t work.

The power to see like a state was intoxicating for government planners, corporate efficiency experts, and adherents to high modernism in general. But modern technology lets us all see like a state. And with the advent of AI, we all have the power to act on that seeing.

AI is made up of data structures that enable a mapping from the messy multidimensional reality that we inhabit to categories and patterns that are useful in some way. Spotify may organize songs into clever new musical genres invented by its AI, but it’s still an effort to create legibility out of thin air. We’re sending verbose emails with AI tools that will just be summarized by another AI. These are all just concepts, whether they’re created by a human mind or by a data structure or AI tool. And while concepts help us understand reality, they aren’t reality itself.
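As a hedged illustration of that mapping (the numbers, features, and genre names are all invented), a classifier of this kind just measures distance from a messy, continuous point to a handful of fixed categories:

```python
# Each song reduced to two numbers (tempo, energy) -- already an abstraction --
# then snapped to the nearest invented "genre" centroid.
genres = {"chillwave": (80, 0.2), "hyperpop": (160, 0.9)}

def classify(tempo: float, energy: float) -> str:
    def dist(center: tuple[float, float]) -> float:
        t, e = center
        return (tempo - t) ** 2 + (energy - e) ** 2
    return min(genres, key=lambda g: dist(genres[g]))

label = classify(150, 0.8)  # a continuous spectrum, forced into a category
```

The categories feel discovered, but they were chosen in advance; the algorithm can only ever answer in their terms.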

The problem we face is at once simple to explain and fiendishly difficult to do something about. It’s the interplay of nebulosity and pattern, as scholar David Chapman puts it: reality is nebulous (messy), but to get on with our lives, we see patterns (make sense of it in context-dependent ways). Generally, we as people don’t have strict rules for how to make breakfast, and we don’t need the task explained to us when a friend asks us for a cup of coffee. But that’s not the case for a computer, or a robot, or even a corporate food service, which can’t navigate the intricacies and uncertainties of the real world with the flexibility we expect of a person. And at an even larger scale, our societal systems, whether we’re talking about laws and governments or just the ways our employers expect us to get our jobs done, don’t have that flexibility built into them. We’ve seen repeatedly how breaking corporate or government operations into thousands of disparate, rigid contracts ends in failure.

Decades ago, the cracks in these rational systems were only visible to a few, left for debate in the halls of universities, board rooms, and militaries. Now, nebulosity, complexity, and the breakdown of these systems are all around for everyone to see. When teenagers are training themselves to see the world the way social-media ranking algorithms do, and can notice a change in real time, that’s how we know that the cracks are pervasive.

The complexity of society today, and the failure of rigid systems to cope, is scary to many. Nobody’s in charge of, or could possibly even understand, all these complex technological systems that now run our global society. As scholar Brian Klaas puts it, “the cognitive shortcuts we use to survive are mismatched with the complex reality we now navigate.” For some, this threat demands dramatic action, such as replacing some big system we have—say, capitalism—with an alternative means of organizing society. For others, it demands throwing out all of modernity to go back to a mythical, simpler golden age: one with more human-scale systems of order and authority, which they imagine was somehow better. And yet others see the cracks in the system but hope that with more data and more tweaks, it can be repaired and our problems will be definitively solved.

However, it’s not this particular system that failed but rather the mode of society that depends on rigid systems to function. Replacing one rigid system with another won’t work. There’s certainly no golden age to return to. And simpler forms of society aren’t options for us at the scale of humanity today. So where does that leave us?

Tension

The ability to see like a data structure afforded us the technology we have today. But it was built for and within a set of societal systems—and stories—that can’t cope with nebulosity. Worse still is the transitional era we’ve entered, in which overwhelming complexity leads more and more people to believe in nothing. That way lies madness. Seeing is a choice, and we need to reclaim that choice. However, we need to see things and do things differently, and build sociotechnical systems that embody this difference.

This is best seen through a small example. In our jobs, many of us deal with interpersonal dynamics that sometimes overwhelm the rules. The rules are still there—those that the company operates by and laws that it follows—meaning there are limits to how those interpersonal dynamics can play out. But those rules are rigid and bureaucratic, and most of the time they are irrelevant to what you’re dealing with. People learn to work with and around the rules rather than follow them to the letter. Some of these might be deliberate hacks, ones that are known, and passed down, by an organization’s workers. A work-to-rule strike, or quiet quitting for that matter, is effective at slowing a company to a halt because work is never as routine as schedules, processes, leadership principles, or any other codified rules might allow management to believe.

The tension we face is that on an everyday basis, we want things to be simple and certain. But that means ignoring the messiness of reality. And when we delegate that simplicity and certainty to systems—either to institutions or increasingly to software—they feel impersonal and oppressive. People used to say that they felt like large institutions were treating them like a number. For decades, we have literally been numbers in government and corporate data structures.

Breakdown

As historian Jill Lepore wrote, we used to be in a world of mystery. Then we began to understand those mysteries and use science to turn them into facts. And then we quantified and operationalized those facts through numbers. We’re currently in a world of data—overwhelming, human-incomprehensible amounts of data—that we use to make predictions even though that data isn’t enough to fully grapple with the complexity of reality.

How do we move past this era of breakdown? It’s not by eschewing technology. We need our complex socio-technical systems. We need mental models to make sense of the complexities of our world. But we also need to understand and accept their inherent imperfections. We need to make sure we’re avoiding static and biased patterns—of the sort that a state functionary or a rigid algorithm might produce—while leaving room for the messiness inherent in human interactions. Chapman calls this balance “fluidity,” where society (and really, the tech we use every day) gives us the disparate things we need to be happy while also enabling the complex global society we have today.

These things can be at odds. As social animals, we need the feeling of belonging, like being part of a small tribe. However, at the same time, we have to “belong” in a technological, scientific, and institutional world of eight billion interconnected people. To feel connected to those around us, we need access to cultural creativity, whether it be art, music, literature, or forms of entertainment and engagement that have yet to be invented. But we also need to avoid being fragmented into nanogenres where we can’t share that creativity and cultural appreciation with others. We must be able to be who we are and choose who we associate with on an ever-changing basis while being able to play our parts to make society function and feel a sense of responsibility and accomplishment in doing so. And perhaps most importantly, we need the ability to make sense of the torrent of information that we encounter every day while accepting that it will never be fully coherent, nor does it need to be.

This isn’t meant to be idealistic or something for the distant future. It’s something we need now. How well civilization functions in the coming years depends upon making this a reality. On our present course, we face the nihilism that comes with information overload, careening from a world that a decade ago felt more or less orderly to one in which nothing has any clear meaning or trustworthiness. It’s in an environment like this that polarization, conspiracies, and misinformation thrive. This leads to a loss of societal trust. Our institutions and economic systems are based upon trust. We’ve seen what societies look like when trust disappears: ordinary social systems fail, and when they do work, they are more expensive, capricious, violent, and unfair.

The challenge for us is to think how we can create new ways of being and thinking that move us—and not just a few of us but everyone—to be able to at first cope, and then later thrive, in this world we’re in.

Fluidity

There’s no single solution. It’ll be a million little things, but they all will share the overall themes of resilience in the form of fluidity. Technology’s role in this is vital, helping us make tentative, contextual, partial sense of the complex world around us. When we take a snapshot of a bird—or listen to its song—with an app that identifies the species, it is helping us gain some limited understanding. When we use our phones to find a park, local restaurant, or even a gas station in an unfamiliar city, it is helping us make our way in a new environment. On vacation in France, one of us used our phone’s real-time translation feature to understand what our tour guide was saying. Think of how we use weather apps, fitness apps, or self-guided museum tour apps to improve our lives. We need more tools like this in every context to help us to understand nuance and context beyond the level we have time for in our busy lives.

It’s not enough to have software, AI or otherwise, interpret the world for us. What we need is the ability to seamlessly navigate all the different contexts in our life. Take, for instance, the problem of understanding whether something seen online is true. This was already tricky and is now fiendishly difficult what with the Internet, social media, and now generative AI all laden with plausible untruths. But what does “true” mean, anyway? It’s equally wrong to believe in a universal, singular, objective truth in all situations as to not know what to believe and hold everything to be equally false (or true). Both of these options give propagandists a leg up.

Instead, we need fluidity: in Chapman’s terms, to be able to always ask, “In what sense?” Let’s say you see a video online of something that doesn’t seem physically possible and ask, “Is this real?” A useful technology would help you ask, “In what sense?” Maybe it’s something done physically, with no trickery involved, and it’s just surprising. Maybe it’s a magic trick, or real as in created for a TV show promotion, but not actually something that happened in the physical world. Maybe it was created by a movie special effects team. Maybe it’s propaganda created by a nation state. Sorting through contexts like this can be tedious, and while we intuitively do it all the time, in a technologically complex world we could use some help. It’s important to enable people to continue to communicate and interact in ways that make us feel comfortable, not completely driven either by past social custom or by algorithms that optimize for engagement. Think WhatsApp groups where people just talk, not Facebook groups that are mediated and controlled by Meta.

Belonging is important, and its lack creates uncertainty and a lack of trust. There are lessons we can learn from nontechnological examples. For example, Switzerland has a remarkable number of “associations”—for everything from business groups to bird watching clubs—and a huge number of Swiss residents take part. This sort of thing was once part of American culture but declined dramatically over the 20th century as documented in Putnam’s classic book Bowling Alone. Technology can enable dynamic new ways for people to associate as the online and offline worlds fuse—think of the Internet’s ability to help people find each other—though it must avoid the old mindset of optimization at all costs.

We all struggle with life in our postmodern society, that unplanned experiment of speed, scale, scope, and complexity never before seen in human history. Technology can help by bridging what our minds expect with how systems work. What if every large institution, whether a government or corporation, were to enable us to interact with it not on its terms, in its bureaucratic language and with all the complexity that large systems entail, but with computational tools that use natural language, understand context and nuance, and yet can still interface with the data structures that make its large systems tick? There are some promising early prototypes, such as tools that simplify the process of filling out tedious paperwork. That might feel small, almost trivial. But refined, and in aggregate, this could represent a sea change in how we interact with large systems. They will come to feel no longer like impersonal and imposing bureaucracies but like enablers of functioning and flourishing societies.

And it’s not all about large scale either. Scale isn’t always desirable; as Bill McKibben wrote in Eaarth, we’d probably be better off with the Fortune 500,000 than the Fortune 500. Scale brings with it the ills of Seeing Like a State; the authoritarian high modernist mindset takes over at large scale. And while large organizations can exist, they can’t be the only ones with access to, or the ability to afford, new technologies. Enabling the dynamic creation and destruction of new organizations and new types of organization—and legal and technical mechanisms to prevent lock-in and to prevent enclosure of public commons—will be essential to keep this new fluid era thriving. We can create new “federated” networks of organizations and social groups, like we’re seeing in the open social web of Mastodon and similar technologies, ones where local groups can have local rules that differ from, but do not conflict with, their participation in the wider whole.

This shift is not just about how society will work but also how we see ourselves. We’re all getting a bit more used to the idea of having multiple identities, and some of us have gotten used to having a “portfolio career” that is not defined by a single hat that we wear. While today there is often economic precarity involved with this way of living, there need not be, and the more we can all do the things that are the best expressions of ourselves, the better off society will be.

Ahead

As Mumford wrote in his classic history of technology, “The essential distinction between a machine and a tool lies in the degree of independence in the operation from the skill and motive power of the operator.” A tool is controlled by a human user, whereas a machine does what its designer wanted. As technologists, we can build tools, rather than machines, that flexibly allow people to make partial, contextual sense of the online and physical world around them. As citizens, we can create meaningful organizations that span our communities but without the permanence (and thus overhead) of old-school organizations.

Seeing like a data structure has been both a blessing and a curse. Increasingly, it feels like it is an avalanche, an out-of-control force that will reshape everything in its path. But it’s also a choice, and there is a different path we can take. The job of enabling a new society, one that accepts the complexity and messiness of our current world without being overwhelmed by it, is one all of us can take part in. There is a different future we can build, together.

This essay was written with Barath Raghavan, and originally appeared on the Harvard Kennedy School Belfer Center’s website.

Posted on June 3, 2024 at 7:06 AM · 35 Comments

Comments

What price common sense? June 3, 2024 9:15 AM

@Bruce Schneier
@ALL

Firstly, we can not go back. This was looked at in the 1960s and later, when it was assumed there would be a massive nuclear war or worse knocking mankind off its feet.

The reason,

“We have taken yesterday’s future and burnt it.”

Basically we do not have the raw resources available where we can get at them any longer.

Secondly, the primary reason future planning around humans tends to fail is that not only do humans have agency, they have incentive to “buck the system for their own gain”.

Thirdly, most humans don’t think beyond simple addition or multiplication, with subtraction causing problems and anything beyond simple division beyond most. Most readers here should know that life is about % growth etc., which gives you the basis for natural logarithms (Napier’s Bones, the slide rule, etc.).

This limitation applies to most people even those developing algorithms. For example see if you can follow

https://github.com/francisrstokes/githublog/blob/main/2024%2F5%2F29%2Ffast-inverse-sqrt.md
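For those who don’t want to wade through the linked write-up, the algorithm it discusses is the famous “fast inverse square root” from Quake III. A minimal Python sketch of the well-known C original (the magic constant and single Newton–Raphson step come from that original, not from the linked post):

```python
import struct

def fast_inverse_sqrt(x: float) -> float:
    """Approximate 1/sqrt(x) using the Quake III bit trick."""
    # Reinterpret the 32-bit float's bit pattern as an unsigned integer.
    i = struct.unpack("<I", struct.pack("<f", x))[0]
    # The magic constant exploits the float format to estimate x**-0.5.
    i = 0x5F3759DF - (i >> 1)
    y = struct.unpack("<f", struct.pack("<I", i))[0]
    # One Newton-Raphson iteration sharpens the estimate.
    return y * (1.5 - 0.5 * x * y * y)
```

The point of the example is exactly the one made above: the trick only makes sense once you understand that a float’s bit pattern already encodes a rough logarithm of its value.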

The original way to deal with this was only found after we had developed graphs by which we could see things, then developed forms of approximation. For instance, the rule for cooking meat in an oven that says “and one pound for the oven” is a way to do a two-line approximation to a power curve.

When we understood the advantages of logarithms, we produced graph paper scaled in them, which enabled us to see difficult curves as straight lines and thus made small variations more visible. Modern technology requires multidimensional graphs to visualise things, and our brains mostly can not do so.
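To make the “straight lines on log paper” point concrete: a power curve becomes a straight line on log-log axes, and its slope is the exponent. A small illustrative sketch (the numbers are made up for the example):

```python
import math

# A power law y = a * x**b is hard to read on linear axes, but
# log y = log a + b * log x is a straight line with slope b.
a, b = 2.0, 1.5
xs = [1.0, 2.0, 4.0, 8.0]
ys = [a * x**b for x in xs]

# The slope between successive points in log space recovers the exponent.
slopes = [
    (math.log(ys[i + 1]) - math.log(ys[i])) / (math.log(xs[i + 1]) - math.log(xs[i]))
    for i in range(len(xs) - 1)
]
```

Every segment has the same slope (here 1.5), which is why deviations from a power law leap out on log-scaled paper when they would be invisible on linear axes.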

Thus for all mankind’s advances we have limitations in our very foundations, and so we have had to make machines to do much of the work for us.

As I note over on the current squid page, this month is the bicentenary of Lord Kelvin’s birth. He built a mechanical analog computer to calculate tide heights anywhere in the world in any time frame that man would find useful.

The parts that made up that computer were still in use into the 1970’s and even went into space as part of the CCCP craft instrumentation.

For good or bad, often both, mankind employs technology to move forward. In the past it was difficult to stop technology being used by others in new ways.

As many know, the Holy Roman Empire tried to hold back knowledge and failed. It’s worth examining the history of that, because modern corporations are trying to do the same, and thus putting the brakes on rational human development.

It will fail; the only real question is how much harm will happen in the process.

echo June 3, 2024 10:01 AM

Well, that’s a lot of words and not the easiest of reads but it had some interesting and necessary things to say. I’m not sure I agree with some of the conclusions it makes and would put them differently, myself. After saying this and reviewing the topic again what makes it hard to read for me is the article is written from the wrong starting point for me. I look at everything the other way around which would with a little work also make for a much shorter topic. Mind you the same could be said of some gender studies material I have read. It can be a painful read as it goes through its twists and turns and discoveries and reappraisals.

I can only speak for myself but I use systems as tools by seeing systems within systems. For me code is interchangeable with policy and law. The security model I prefer (technical, human rights, economic, social) is simple but each quadrant contains all the others. People usually miss that… Again, it’s just my point of view but I feel people can overthink things. It’s just a tool. I still have a world view and subjective sense of myself and the world, and I don’t need to know 90% of the stuff out there.

I’m left scratching my head a bit as while the article is illuminating to a degree it casts a shadow over Keynesianism and feminism. Women have always had to multi-task or experience 1-2 or even 2-3 career changes in our lives, and navigate horizontal relationships; and have a different view of investment as a socio-economic action.

The article mentions magic. My life is plagued by synchronicity. After picking up an old childhood interest in magic, I didn’t know that the very next week Penn of Penn and Teller fame would do some television interviews. After posting about magic and Fay Presto I discovered this and a few other videos too. Penn went on to discuss magic being a compact between the magician and audience based on trust. The magician knows it’s a trick. The audience knows it’s a trick. They’re both in on it. With CGI, less so.

https://www.youtube.com/watch?v=rKFMa-GH0bQ
Penn Jillette (Penn & Teller) Answers Magic Questions From Twitter | Tech Support | WIRED

And:

https://www.youtube.com/watch?v=OIfu7JbFgQE
Penn Jillette shares the tricks that have stumped him

Exactly who the authors filched the example from I have no idea and it doesn’t matter really. I’ve pinched enough in my time. I also commented my interest in magic is instructive not just because of what is seen but what is not seen. I feel the authors are trying to articulate a very difficult point but there’s also the unsaid bit. It’s a bit of mystery but there’s another layer unsaid: empathy. I think, really, this is the thing missing from all this. Not just truth or society but empathy and this is one thing Penn makes a point of consistently in his interviews. He believes the performance needs to be kind. It’s just my point of view but I think this is what the entire article boils down to – a search for kindness.

JPA June 3, 2024 11:45 AM

From my perspective as a psychiatrist, the realities that matter to the people I treat are not numerical. However, the “guidelines” for treatment often demand that I assign numerical values to such non-numerical realities via “rating scales”. If the number goes up, then the patient is getting “worse”. If the number goes down the patient is getting “better”. However, the patient whose numbers change may not describe their condition as changing in the same ways.

New pharmaceutical treatments are evaluated by how the numbers change on the rating scales. But since the numbers on the scales are a poor proxy for the patient’s experience we have psychotropics that are “proven effective in clinical studies” but that are failing to stem the tide of increasing mental distress.

I imagine this is true in other fields as well. For example I can imagine that the health of a company is to some degree related to the amount of camaraderie among the employees. But this is not something that has a numerical value. Proxies such as absenteeism or productivity will not capture a phenomenon that is not numerical. But our algorithms demand numbers even if they are misleading, and we have eliminated the non-numerical as “biased” and so are stuck with perceptions that are increasingly misleading. And this leads to interventions that are increasingly ineffective

AlanS June 3, 2024 12:17 PM

We don’t live in that world anymore. Not only has technology become entangled with the structure of society, but we also can no longer see the world around us without it.

Did you really mean that? Was there ever a time that technology wasn’t “entangled with the structure of society”? All technologies are inherently social. On abstraction you might want to read Chapter 3 of Security, Territory, Population, a lecture given in 1977/8.

mark June 3, 2024 12:51 PM

I haven’t finished the article, but I had a feeling where you were going… and then I saw “the map is not the territory”.

Funny, how general semantics keeps showing up as correct…

echo June 3, 2024 12:51 PM

Feminist technoscience is a thing too. From the wiki: Science was originally seen as an alien entity opposed to women’s interests.[8] Sciences and technologies developed under the misconception that women’s needs were universal and inferior to the needs of men, forcing women into rigid, determined sex roles.[6]: 18  A shift happened in the 1980s – Sandra Harding proposed “the female question in science” to raise “the question of the science in feminism”, claiming that science is involved in projects that are not only neutral and objective, but that are strongly linked to male interests.

I didn’t want to mention that the article leaned towards patriarchal tech fetishist, but it’s something which leapt out at me. I can’t remember the name of the book extract I was reading the other week on the commodification of lesbianism in popular culture. It was an extension of Taylorism in a sense: the system deems it useful while dehumanising its subject. It’s this kind of reasoning which made me dig past tools and values and principles and society to draw out the issue of empathy, which was really a shortcut for all the other stuff.

The wiki is worth a skim even if nobody goes full Judith Butler on everything.

Not really anonymous June 3, 2024 1:31 PM

I don’t like how this article uses “data structure”. I think that most cases in the article are really about data modelling rather than data structures. I see work on data structures as being about efficiently performing certain kinds of operations, so that you use different structures in different situations depending on how frequently you expect to be doing different kinds of operations on that data. Data modelling is about how you represent the real world as an approximation with data. This seems to be mostly what the article is about.
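The distinction the comment draws can be shown in a couple of lines: the data model (here, “which user IDs have we seen?”) stays the same while the backing data structure changes with the expected mix of operations. A hedged Python sketch:

```python
# Same data model in both cases: "which user IDs have we seen?"
seen_list = []      # a list: cheap appends, O(n) membership tests
seen_set = set()    # a set: O(1) membership tests, no order, no duplicates

for uid in [3, 1, 4, 1, 5]:
    seen_list.append(uid)   # keeps duplicates and arrival order
    seen_set.add(uid)       # deduplicates; optimized for "is uid in here?"
```

Which structure you pick says nothing about what the data *means*; it only reflects which operations you expect to perform most often — which is exactly the comment’s point.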

What price common sense? June 3, 2024 1:50 PM

@JPA

“From my perspective as a psychiatrist, the realities that matter to the people I treat are not numerical.”

It’s not just psychiatry where this is a problem; other areas of the medical profession see it with pain relief and other things not subject to overly precise measurands (besides which, the measures are in the main linear while the human nervous system is rather more logarithmic, so a large difference in measure may mean little, as would a measure down at the bottom of the scale).

But I suspect you are all too familiar with “white coat syndrome”, where too many doctors only see things via measures such as blood pressure and heart rate, and thus see it as a problem.

I suspect you might see the flip side when you say

“… we have psychotropics that are ‘proven effective in clinical studies’ but that are failing to stem the tide of increasing mental distress.”

I’ve been involved with measuring some studies that in effect have concluded that simply talking to patients makes their symptoms improve… Not something a large chunk of the medical profession, especially in some parts of the world, really wants to acknowledge.

The simple fact is that trying to run all types of medicine like a “sausage machine”, whilst it might make obscene profit for “makework billing” organisations, does significant harm not just to the medical professionals but to those in their care.

In Japan they did some research showing that recovery times were significantly shortened by meditation and, where possible, the likes of Tai Chi, both of which appear to correlate with improved health and longevity in those who are, shall we say, into retirement.

What was never clear was how much the “social aspect” played in this; my own thought on reviewing a number of papers is that it’s something we really should look at in this modern world, where “social life” is almost looked on as “theft of time from the employer”.

My less than popular view is “Everyone should work to contract” and that “employment contracts should be equitable”.

cybershow June 3, 2024 1:57 PM

This essay paints humankind’s schism with reality as a result of
information technology. Mumford is rightly mentioned a few times, but
it feels more resonant with Ellul. (@Bruce, have you read Jacques
Ellul?) What is this “data structure” or “algorithm” then?
(philosophically interesting to me as a computer scientist that these
things are being conflated). Is “data” therefore a measure of our
collective break with reality – the latent psychosis of humanity?

With that in mind, Max Frisch’s remark that “Technology is a way of
not having to experience the world”, makes more sense
backwards. Experience (living) is not having to squint through the
dark glass of technology.

For me as a technological optimist, that doesn’t quite add up – for
Kaczynski type reasons – thus there’s apparently something good to be
salvaged in technology, something we’ve lost. Or had stolen? Or have
never yet really discovered? Technological optimism, then, is that
this current, psychotic trajectory can be cured, or will simply burn
itself out (probably, sadly, at much human cost).

JPA June 3, 2024 4:03 PM

@What price common sense?

I limited my comment to my area of expertise but I totally agree that this is a problem in medicine in general, at least in American medicine.

Thanks for elaborating on that. Your comments on the social benefits of an intervention are important. Just knowing that someone cares about you affects your physiology.

What price common sense? June 3, 2024 4:12 PM

@AlanS

It had two points, whilst the first

“Not only has technology become entangled with the structure of society,”

Your objection to it is valid too. It is the second,

“but we also can no longer see the world around us without it.”

that is the crux of the issue. Before the mid-1800s, technology, although developing fast, was in the main something that most did not consider part of life as such.

In the UK the majority depended on candles and wood for light and heat. Whilst coal was becoming more common in towns due to its energy density, gas lighting was a novelty.

As for communications by electromechanical signalling, it was scarcely into the crib. Which is why the serious CME of the Carrington Event of late 1859 had next to no effect on mankind, just on a few telegraph operators in their offices.

Horses were still the mainstay of power for the majority of people, and entire industries were dependent on their supply, feeding, grooming, and disposal via the “rag and bone” or “knacker” man.

Less than half a century later, life was very different in Britain: both gas and, to a lesser extent, electricity were common in cities and towns, along with municipal water works and much improved sanitation. Health and longevity shot up. The invention of the standardized “Whitworth” screw thread, a result of Charles Babbage’s work, made a true industrial revolution around standard parts not just possible but thriving: manufacturing arose and machines were shipped around the world.

Of the “Cornish Beam Engine” a joke appeared,

“It’s only a mine if there is a Cornish engineer at the bottom.”

Iron ships and iron horses appeared in great quantity and ended up all over the world, even on high mountain lakes in South America.

It really did appear to many as though “Britain Ruled the World”, but then, as always, an idiot who had been given too much power decided it was time to change things, and the “Great War”, some say “the first scientific war”, brought much of Europe and the world economy crashing down.

It was after this that engineering was seen as an integral part of society, with science not getting that position until after the Second World War, when the use of electricity had become essential to so many industries and people’s homes.

What price common sense? June 3, 2024 4:21 PM

@CyberShow

“What is this “data structure” or “algorithm” then?”

Try the “linked list” and “search algorithms”

These are the “evil” or “foundation” that underlie all reliable data sources in one way or another.

Note the important word “reliable”: as we know now (or should do), AI LLMs and ML systems do not store data but very limited data relations. Good to “Bedazzle” with party tricks and steal PII to “Betray”, but of limited use at best for “reliable” work.
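For readers unfamiliar with the terms, the two structures named above can be sketched in a few lines; Python here purely for illustration, not as any canonical definition:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """One cell of a singly linked list."""
    value: int
    next: Optional["Node"] = None  # link to the next node, or None at the tail

def search(head: Optional[Node], target: int) -> bool:
    # Linear search: follow the links until the value is found or the list ends.
    node = head
    while node is not None:
        if node.value == target:
            return True
        node = node.next
    return False

# Build a three-element list: 1 -> 2 -> 3
head = Node(1, Node(2, Node(3)))
```

The “reliability” the comment points at is visible here: the structure stores the data itself and the search either finds it or provably does not, with no statistical guesswork involved.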

What price common sense? June 3, 2024 4:46 PM

@JPA

“Thanks for elaborating on that. Your comments on the social benefits of an intervention are important. Just knowing that someone cares about you affects your physiology.”

That’s all right, I just hope more people will realise that there is way more to a good life than the grind of the modern world.

You might have noticed there has been a bit of a political panic caused by “near slave labour” employers who have discovered that, post C19 Lockdown, people don’t want to go back to that sort of life. Thus they have labour shortages at the rates they are offering. So as they are not getting “bodies to burn”, they are kicking up a stink and demanding the equivalent of serfdom be brought back.

The thing is it’s incredibly short sighted, after all where do they think their customers come from?

I’ve been watching this nonsense via “out sourcing” and “off shoring” for over three decades now. The latest “Gig Economy” is more of the same nonsense.

Front line health care providers know that intervention is required, be it mental or physiological. But these are “man-power intensive”. Back when I was young, people used to go “down the neck of a bottle” for “poor man’s painkiller”, as “a penny worth of your finest” was no longer available. These days frontline healthcare, knowing the help actually needed is not available for political reasons, has in effect opened that locked door to “a penny worth” again, with the expected results.

It actually makes me very sad because of the millions of people it harms, who are seen by some “as the chaff that falls to the roadside”.

ResearcherZero June 3, 2024 9:48 PM

A model that can automatically infer an agent’s computational constraints by seeing just a few traces of their previous actions.

https://scitechdaily.com/mits-new-ai-model-predicts-human-behavior-with-uncanny-accuracy/

A leap into the future of predictive analytics and behavioral science.
https://medium.com/kinomoto-mag/beyond-prediction-unveiling-mits-ai-revolution-in-human-behavior-analysis-de6495c276e8

The structure of semantic knowledge.

What does it really mean to say that “context matters”?

https://insights.princeton.edu/2022/10/machine-learning-and-human-behavior/

“We have perpetual problems where we don’t know what to do — inequality, climate change action, etc., etc. And many of those things hinge not on the technology or the systems that we engineer but on human behavior.”

“It is a good thing to improve social exploration between communities, using knowledge of social interactions. Everything from infection rates, to innovation rates, to intergenerational mobility, all of those things depend on [social interactions] as a principal causal element.”

https://www.nationalacademies.org/news/2023/10/how-ai-can-help-predict-human-behavior-and-accelerate-solutions-to-societal-challenges

“individuals are more easily identified by rarer, rather than frequent, context annotations”

The role played by four contextual dimensions (or modalities), namely time, location, activity being carried out, and social ties, on the predictability of individuals’ behaviors.

https://epjdatascience.springeropen.com/articles/10.1140/epjds/s13688-021-00299-2

Discovering explanations rather than performing predictions.
https://www.nature.com/articles/s41598-022-08863-0

ResearcherZero June 3, 2024 11:27 PM

Without the screams of a maniac chef (or cook) emanating from the din of the kitchen, will my replicated meals be quite as enjoyable without the unique atmosphere of the smoky pub?

ResearcherZero June 3, 2024 11:42 PM

After all it is the tension I enjoy. The love that the chef inserts into the meal, or occasionally the hate and poison, that will narrow my carotid artery and deposit a buildup of cholesterol throughout my battered and neglected meat suit. How do I thank the chef?

Winter June 4, 2024 3:21 AM

Scott found that to understand societies and ecosystems, government functionaries and their private sector equivalents reduced messy reality to idealized, abstracted, and quantified simplifications that made the mess more “legible” to them.

This is much older than the “state”. Religion does this too. It is unavoidable whenever there has to be a society with interactions between people who do not know each other personally.

Robin June 4, 2024 4:48 AM

@JPA, @What price common sense?

Many years ago I worked with research groups that used a system called “Teler” (see for example teler.co.uk). The underlying idea is that simply trying to assign a score to many experiences, phenomena or treatments is a dead-end because they are not quantifiable, measurable in the usual sense. Sensation of pain (physical or mental) is a good example.

Teler assigns a score by posing questions that relate to a human experience and are relative to each person, not absolute measures. So, for example, instead of posing the traditional question “On a scale of 1 to 10 how bad is the pain?” (which is universal, but pretty much useless) you pose a set of questions such as: “Is the pain enough to stop you sleeping?”; “Is the pain enough to stop you sitting comfortably?”; “Is the pain enough to stop you going to work?” etc. There are different ways to deal with the answers: either you add up how many “yes” answers you get, or you grade the questions in order and see how far through the list you get before you get a “No” (there are other ways but that’s the gist).

There are lots of advantages to this approach: the questions can be designed to suit a particular project, or an individual’s problems and treatments; they can be applied across a very wide range of disciplines*; they are relevant to the experience of a person; they can be used systematically to measure progress; they are easy and quick to record; they can be treated rigorously and statistically. A slight disadvantage is that creating the questions is a skill and can be time consuming at the entry point of a study, although vast libraries of question sets existed at the time I was involved. (Since then the system has been commercialised and changed hands.)

  • IIRC the original system was developed in studies of agricultural development in Africa; all the use cases I saw were in healthcare, but I think there was also a project in the UK in the logistics of food distribution.
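The two ways of aggregating answers described above (count the “yes” answers, or walk the graded list until the first “no”) are simple enough to sketch directly. This is an illustrative reconstruction, not the actual Teler implementation, and the example questions and answers are made up:

```python
# Ordered yes/no answers to graded questions such as "does the pain stop
# you sleeping / sitting comfortably / going to work?", easiest first.
answers = [True, True, True, False, True]

def count_score(answers):
    # Method 1: simply add up the "yes" answers.
    return sum(answers)

def ladder_score(answers):
    # Method 2: how far through the graded list before the first "no".
    for i, a in enumerate(answers):
        if not a:
            return i
    return len(answers)
```

The two scores can disagree, as here (4 versus 3), which is the point of grading the questions: the ladder score treats the ordering as meaningful, while the count does not.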

echo June 4, 2024 6:30 AM

@ResearcherZero, @Winter, @Robin

Those were some good comments. Before I get going I’ll put at the top that this isn’t a gripe. It’s just an observation. The problem with deleting some (not all) of my stuff because I wander off the approved path is that it’s a simplified and popularised scan of some very obvious and deep and long-term structural and institutional problems. Some people get their PhDs in this stuff, which the dear reader will miss if they know nothing about the field.

I’m not wholly happy with the computational people poking their beaks into equality issues or the humanities in general. Computation can be useful as a tool and when used appropriately at the right time and in the right place but…

@ResearcherZero

Without the screams of a maniac chef (or cook) emanating from the din of the kitchen, will my replicated meals be quite as enjoyable without the unique atmosphere of the smokey pub?

Nope and the entire art world and hospitality world are built on this. At the intersection (jargon alert) of art and hospitality is the Michelin restaurant. This is where environment, product design, and quality meet. Then there’s the logistics and timing. An extra five minutes during the production pipeline (and it is usually just that) makes the difference between scoff you eat at home and fine dining. Then there’s the fact restaurants get first pick of the produce often with a trusted supplier pre-picking the best of the best. (I’m quite lucky in that I can easily and cheaply source ingredients. One thing I buy is normally reserved for restaurants and you can’t buy them anywhere unless you know where.)

Should every home cook or diner feel jealous of the grand fine dining experience? Not at all. A good simple meal in the most ordinary of places can be equally good or even better. It all depends how you measure it.

@Winter

This is much older than the “state”. Religion does this too. It is unavoidable whenever there has to be a society with interactions between people who do not know each other personally.

This is all very true and necessary. In a now-deleted link to a French futurist, the video contained an explanation of why Europe drives on the right and the UK drives on the left. It’s just an accident of history. The fact of the matter is that laissez-faire driving was impossible, so “rules of the road” had to be developed and standardised.

So where’s all this going? I’m looking at a load of men building systems for men to be used within legacy systems created by men to have control over men. Men tend not to do autobiographical history or feelings. Women do. Men tend not to notice the fact the default is built for men. Women do.

I get the sense from the topic article and supplied links in commentary that fields dominated by men are trying to struggle their way through to understand a bimodal (or even multivariate) system they’re not normally used to dealing with in a tacit sense.

What am I saying here? Not a lot but then the topic doesn’t say a lot either. Like, how do you explain it?

Many years ago I worked with research groups that used a system called “Teler” (see for example teler.co.uk). The underlying idea is that simply trying to assign a score to many experiences, phenomena or treatments is a dead-end because they are not quantifiable, measurable in the usual sense. Sensation of pain (physical or mental) is a good example.

The UK is experiencing a political crisis. One reason is that healthcare and welfare systems are being optimised by the Tories either for privatisation or for reduction of expenditure. You have a one-size-fits-all top-down system which doesn’t understand chronic conditions versus a rigged system which a priori assumes a lot of the wrong defaults. It’s mostly a matter of political dogma on the one hand and empire building on the other.

The mechanism of goals, relationships, and outcome have a schism between politicians, institutions, and public. Behind all the many words spoken and written you have worldview, control, and feelings. People can think they are rational and use many words to paint a picture of rationality. In reality most decisions come first and are backwards rationalised. Decisions come first because of perception and habit and feelings. It’s almost always feelings which are the driver until something breaks. With science it’s contained in the lab. In the real world the “natural experiment” has somewhat larger consequences.

When people think of science they think of hard science (or use “hard science” as a justification for switching their brains off) but the soft sciences are science too. They inhabit the same hierarchy of science and use the same mathematical and other methods. One is predictable to a known point and the other is a fuzzy collection of probabilities to a hazy future. The fundamental conceit of this topic and some of the supplied links is that everything can be measured or data collected or decisions made with certainty but the fact is they can’t and almost certainly likely shouldn’t.

I feel the post-WWII international human-rights-based order got a lot right. Since then there has been a massive change insofar as women’s rights and the participation of non-elites go, and, I think, this is another key issue the topic is struggling to understand and misses.

The so-called “crisis of masculinity” and a mythical golden age which never existed is driving a lot of political discourse and creating a lot of problems hence the complaint about messy governance. I don’t personally see a problem. It’s just an intergenerational transition period. Pigeons coming home to roost for some but the future for everyone else.

What price common sense? June 4, 2024 7:13 AM

@Winter

“This is much older than the “state”. Religion does this too. It is unavoidable whenever there has to be a society with interactions between people who do not know each other personally.”

You missed the important point

“Enforced Power Hierarchy”

That is, those who are “self-entitled” in some way believe they have the right to force not just willing others but everyone to do not just their bidding but to live and die at their command.

This is the antithesis of what most would consider a society.

There are always some who try to inflict their harm on others; as you are aware, this is often due to a failure of mental development. Part of which is that, when told “no, they can not behave that way in a society”, they, befitting that lack of development,

“throw the toys out of the pram”.

If they are limited in agency, resources, or both, then the harm they cause is not initially sufficient to cause “compliance by fear”, or what is more commonly seen as a “dictatorship”: an oppressive, draconian, or societally undesired overwhelming extent of antisocial behaviours.

What price common sense? June 4, 2024 7:48 AM

@ Robin

“Teler assigns a score by posing questions that relate to a human experience and are relative to each person, not absolute measures.”

I’m aware of the system, and whilst it is very definitely a step in the right direction, it has “issues”.

The first is one that there is no ethical answer to. Take pain: it can be expressed as “relative to xxx”, such as, say, a broken arm. There are three problems:

  1. Not all broken arms hurt equally
  2. Most people have not had broken arms
  3. Those formulating the questions have not had broken arms either.

I came across this first with being asked

“Is it a stabbing pain?”

Being one of the few who have actually been stabbed multiple times, I pointed out to the Dr truthfully that

“It’s not comparable”

Whereupon I got given a press of a button and a bolus of CNS painkiller I really did not want, and all the breathing and hallucination problems that go with it.

I guess I should have said “It’s totally different” and given an example.

But lets be honest there are some things that you have never experienced so can not quantify by being asked questions.

But then there is “The Political Problem”: politicians have their “Wars on XXX”, one of which is on CNS depressants that get abused by people, often through no fault of their own (I have a series of back injuries that got medicated with increasingly stronger meds; I was becoming hooked, and getting off the meds was a hard struggle that only worked due to the accidental physical intervention of a sufficiently serious leg injury).

Thus the political mantra view is that

“They will learn the questions and lie to get a fix!”

(With political mentalities like that what hope for mankind…).

Winter June 4, 2024 7:59 AM

@Man behind a thousand names (neo-Clive)

That is those that are “self entitled” in some way believe they have the right to force not just willing others but everyone to do not just their bidding but live and die at their command.

That is literally as old as history, i.e., it was in the oldest known stories. And it is at the root of all organized religion from the dawn of time.

Nothing new at all. A digital world where the digits held cudgels and stone knives.

Robin June 4, 2024 10:11 AM

@What price common sense
Thank you for the reply, and yes, Teler is by no means perfect, but it does offer an alternative way of getting data in quantitative form.

“Take pain it can be expressed as “relative to xxx” such as say a broken arm.”

Pain measurement is very complex indeed and you describe one way in which it fails. Another is selective memory; perhaps the most celebrated example being childbirth – many a woman has said “Never again!” only to happily give birth a year or two later. Pain is very subjective but the advantage of the questions posed is that they are functional and based on the subject’s daily life experiences* and not on arbitrary comparisons. The context I was working in was the treatment of chronic, non-healing wounds and the aim was to get a handle on which treatments work better – and to provide evidence for the conclusions that was more rigorous than anecdotal. My colleagues were using the method in quite a few different contexts. It’s a useful method for quantifying experiences that are essentially qualitative.

* Of course it’s always hard to eliminate extraneous factors. E.g.: “Does the pain stop you going to work?”; the answer might be “yes” one day and “no” the next, depending on what deadlines need to be met, and how brutal the line manager can be.

Robin June 4, 2024 11:33 AM

@echo:
My project leader and principal investigator was a woman and our clinical and scientific teams were mixed; all team members were aware that the issues were multifaceted, with social and cultural angles as well as the more obvious medical side. This scenario was replicated in other projects I was involved in at the time.

As for data vs anecdote: it is a fact of life that in making choices between treatments where annual costs could run into 7 figures, decision makers will ask for data. And at a personal level, if a clinician is going to treat a painful, chronic condition they have a moral (and probably legal) obligation to have a measure of confidence that the treatment will be effective.

echo June 4, 2024 11:51 AM

“It might be hard, but it has to be said. I think most people are simply not interested in what you write. I expect there are other phora where you’ll find a more receptive audience.”

Probably. I mean, it is a bit like the Garrick Club in here at times. But then I’m not interested in fetishising tech or playing a part in the round-robin backslapping of pull-up-the-ladder performative authority. The thin-skinned clinging to job titles because of status, the ferocious territorialism, the pearl-clutching when they step outside their field because it pulls the rug from under them? It’s a thing. There’s plenty of papers on this if you dig around a bit.

In this instance I think 99% of it is they can’t admit to not being the solution and knowing nothing. It’s human. We can all flub it sometimes. The thing is “the map is not the territory” and the thing about continuous learning. Hah hah. No not that kind of learning!!!! lol.

That said it kind of provoked me to dig into the science of kindness and make a few connections with geopolitics and security so I learned something. I also have thoughts on academic practice and literature in general too so that’s interesting. I know other women have griped about it so it was interesting to get a perspective. Feminist literature of around 1915 is quite interesting too. But my mind wanders off like this. I’m eclectic. It’s why I nearly (but didn’t) post a comment on art and the intersection with security the other day. That could get quite involved and isn’t too far removed from some military and intelligence operational discussion I scalped from a second tier operative with a big mouth (which is why they are second tier).

Meh. I just follow my nose and what makes me feel good. That doesn’t always fit with the hierarchy but we are where we are! That’s me. Trouble. Bad penny. Grit in the engine. Just born this way, I guess.

echo June 4, 2024 11:57 AM

@Robin

Cool. I get what you mean and I know why you do what you do. There’s obviously a professional and personal perspective too. It’s just that when systems break, things get funny.

There are those, I know, who have a vested interest or head in the sand and others who are more open minded and acknowledging of faults. This I know is a systemic issue and a matter of much professional debate in various quarters. I’ll leave things at that as I am sure you appreciate discussion can get into the weeds very deeply very fast and create quite the hairball. I love going off piste but even that one is too much for me to swallow at least for today.

What price common sense? June 5, 2024 3:54 AM

@Robin

“Thank you for the reply, and yes Teler is by no means perfect”

Ahh I think you misunderstand me, the two faults I identified are not “Teler specific” faults.

They are general faults of all systems that do not directly measure but have an “uncalibrated highly nonlinear translator” in between. Hence my mention of “ethics” as what might be thought of as calibration.

The political mantra fault is one that comes with a form of abuse of power. Politicians do not solve what is within their capabilities, because such solutions are neither quick nor have vote-grabbing publicity built in. So they do something else, all too often something stupid, because history indicates not just short-term failure but long-term disaster.

The politicians “build an enemy” by faux-rhetoric and attack what is a straw man. In the process they end up building huge agencies of very much the wrong type, which then have to go out and find real flesh-and-blood enemies, so they make them in various ways. Unfortunately, once the politicians have built these monstrosities they cannot get rid of them, as it’s “man power on the books” etc.

I’m told the US currently has something like just under fifty “Wars on XXX”, none of which really do anything other than soak up tax dollars and produce reports that nobody cares about, because part of the game is to build empires that do nothing of societal use. Obviously hundreds of people with guns doing “busywork” is going to cause issues, and there is nothing like having pitched gun battles to make your boondoggle agency newsworthy as a “victim of insufficient resources”, so bonus time all round come the next agency funding round (it’s why “around the hill” you would have heard “having a fund raiser” in connection with such incidents).

@vas pup and @Winter have just been talking about one side effect of how doing things the “might is right” wrong way does real harm.

denton scratch June 5, 2024 5:04 AM

This stuff is very evocative of Adam Curtis’s work – e.g. All Watched Over By Machines Of Loving Grace (https://www.youtube.com/playlist?list=PLD2023C59EAE67F77). Curtis, of course, is much more allusive and elliptical than this article; I find his films engaging, but also frustrating, because I can’t figure out what point he’s trying to make.

It’s partly because Curtis’s MO is to mine BBC film archives and stitch together a sort of collage or scrapbook. The result is a bit like a cubist painting: there are many ways to read it. Some of his work even lacks narration; he just lets the footage speak for itself. I wish more documentary makers would do that.

Anonymous June 5, 2024 9:21 AM

“logs are easier for judges to understand than cryptography.”

Anderson, Ross. Security Engineering: A Guide to Building Dependable Distributed Systems (p. 426). Wiley. Kindle Edition.

fib June 6, 2024 10:58 AM

“For me code is interchangeable with policy and law.”

Hence my supposition that you are not a coder at all, and do not know what you are talking about.

Watching which way the weather blows June 6, 2024 12:18 PM

@fib

A supposition many who read here would probably not disagree with.

Archives of this blog suggest that the supposition issue is one that goes back several years, going on maybe a decade, and was a major cause of what followed and still causes problems to this day.

In their own words,

“In this instance I think 99% of it is they can’t admit to not being the solution and knowing nothing”

Yup I guess that covers it…

What price common sense? June 6, 2024 7:37 PM

@yet another bruce

“…is not at all typical of our host’s published writing”

A couple of points to consider…

Firstly there are two authors to the article so not all of it is the work of Bruce.

Secondly is the place the article is published, they have “styles”.

If you ever have to read some “learned papers” from non-science journals, the papers can get quite lengthy. The legal paper I read last week was just over fifty pages before references. And trust me, it was more yawn-worthy than a bottle of night cough medicine.

As for philosophy… Let’s just say I’ve read smaller books, and the printed journals can be used as a good way to stop bullets.

As for other fields of academic endeavor that shall not be mentioned 😉 the modern world of electronic publishing has taken the restraints off the outpourings from over active fingers.

Why do they not realise that some poor devils have to get stiff necks and chronic eye strain from squinting at laptop screens in poorly lit student accommodation etc.?

fib June 7, 2024 10:12 AM

I can’t believe people are complaining about this writing. I love long form blog posts and am grateful to have access to such material. My admiration only grows. Thanks, @Bruce Schneier, for all the fish.

Matt June 15, 2024 3:06 AM

While the points in this essay are well taken, I feel like any analysis of viewing the world through the lens of data is premature without first teasing out the immense, fell influence of concentrated wealth. The essay puts misguided governance right alongside corporate control, without engaging with the tremendous damage the latter invariably does, lacking as it does sufficient checks on its power.
