Sometime during the spring I read the short Ars Technica post Pre-thoughtcrime: Russian think tank app catches protestors before they protest and I have been meaning to comment on it.
Possibly a classic case of hypocrisy given what I posted on g+ a while back, but well… it is the Internet after all…
Background: The article is about Russian software claimed to predict when and where demonstrations and protests are most likely to happen. Now, attempts to predict political unrest may work to a certain degree. Perhaps. Just as it is perhaps possible to detect certain trends. But the venture makes me think of the problems of self-organized criticality.
Take earthquake forecasts: one may be able to predict that something will happen, but not for certain and not when. In a rational world, relying on such an unreliable system to forecast outbreaks of unrest (and thus to drive preemptive policing) would not be sustainable. The system would be prone either to false positives or to missed outbreaks. Deploying government forces in every potential situation that never becomes a crisis is expensive; missing a crisis would probably be politically expensive. It would not hold. In reality, however, few are rational, and once the software is in use it will cause feedback, potentially changing the system itself.
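The classic illustration of self-organized criticality is the Bak-Tang-Wiesenfeld sandpile: even with perfect knowledge of the rules, the size of the next avalanche is effectively unpredictable, and event sizes span orders of magnitude. A toy sketch (all parameters here are arbitrary choices for illustration, not taken from the article):

```python
import random

def simulate_sandpile(size=20, grains=5000, seed=1):
    """Bak-Tang-Wiesenfeld sandpile: drop grains one at a time onto a
    grid; any cell holding 4+ grains topples, sending one grain to each
    neighbour (grains falling off the edge are lost). Returns the
    avalanche size (number of topplings) triggered by each grain."""
    random.seed(seed)
    grid = [[0] * size for _ in range(size)]
    avalanche_sizes = []
    for _ in range(grains):
        r, c = random.randrange(size), random.randrange(size)
        grid[r][c] += 1
        topplings = 0
        unstable = [(r, c)]
        while unstable:
            i, j = unstable.pop()
            if grid[i][j] < 4:
                continue
            grid[i][j] -= 4
            topplings += 1
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ni < size and 0 <= nj < size:
                    grid[ni][nj] += 1
                    unstable.append((ni, nj))
        avalanche_sizes.append(topplings)
    return avalanche_sizes

sizes = simulate_sandpile()
print("largest avalanche:", max(sizes))
print("median avalanche:", sorted(sizes)[len(sizes) // 2])
```

Run it and the point shows itself: most grains trigger nothing at all, while an identical grain dropped a moment later can set off a cascade hundreds of topplings deep. Forecasting "an avalanche will come" is easy; forecasting which grain causes it is not.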
How? Deployed police will likely not just stand by waiting for clear evidence of unrest requiring action, especially if there is a consensus that unrest will happen. In addition, it is likely that some, perhaps most, of the groups identified will react violently to the presence of government forces. To continue the physical analogy: if mountain rangers perceive a potential avalanche building up in a threatening location, they will set it off with explosives. That is arguably fine for snow and ice, but a society is perhaps more sensitive.
Provocation. Both as a tool and a reaction. So the prediction program becomes a self-fulfilling prophecy (and don't get me started on the temptation for politicians to ask exactly who is disagreeing with them; which grain of sand, which snowflake will cause the slide). Anyway - computer-supported crackdowns on dissidents and others disagreeing with whoever is controlling such a system. Sad and, I guess, in intent and purpose nothing new.
Except that there's been a technological advance.
Couldn't it be used for something better? I think so. So, you've got a tool designed to detect dissatisfaction? Great, or perhaps not so great, but you can't put it back in the box and pretend it didn't happen. (Should have thought about that first.) Then use it for something good, like addressing the dissatisfaction.
I am not talking about Russia specifically here (sure, the news piece does, and yes Russia is weird these days, but let's not pretend that any government that believes itself to be doing the right thing - and most of them do - would not glance at this kind of technology). So, I mean it. Do something about the problem. Detect the unrest, analyse it, try to address it.
If a population is treated as a system, then why not govern that system with some kind of finesse instead of force? No, your dissidents probably won't go away just because you fix a bump in the road, and no, their issue might not be something you can, or want to, or even should fix, but… at least analyse the problem and govern with something less crude than violence or oppression. If you have hubris enough to think you can detect unrest, then surely there must be an equally advanced solution to address it? That is where such technology could be useful.
Not by silencing the people, but listening to them.
Governance is, in this view, in many ways about steering - though not in an absolute sense of control. It is about prodding a dynamic system that the governor is part of, not above. Like an oarsman on a river, or a driver on a winter road. Seeing dissidents as your enemy is to invite a solution based on force. To parry a drift on that winter road with a panic-stricken jerk of the wheel is to invite a series of events that rarely ends well.
(Many years ago I worked for a company in the haptic feedback industry and witnessed that, in some rare situations, the cybernetic circuit made up of the user and the feedback-providing robotic arm became unstable in a quick series of increasingly powerful jerking motions - this was not due to any particular design flaw, but usually because either man or machine had pushed with too much force all too suddenly, and the counterpart responded by pushing back.)
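That kind of runaway is a generic property of feedback loops, not of any particular robot arm. A minimal toy model (my own illustration, with made-up gains, not anything from the company's systems): a controller that pushes back against the error it observed one tick ago. A gentle push damps the drift; too much force per correction makes each push overshoot the last one.

```python
def steer(initial_error, gain, steps=30):
    """Toy feedback loop with a one-tick observation delay: at each
    step the controller pushes back against the error it saw one tick
    ago. Small gains damp the drift; large gains make every correction
    overshoot, and the oscillation grows instead of dying out."""
    errors = [initial_error, initial_error]  # two ticks of history
    for _ in range(steps):
        # the correction is based on the delayed observation
        errors.append(errors[-1] - gain * errors[-2])
    return errors

calm = steer(1.0, gain=0.5)
panic = steer(1.0, gain=1.5)
print("gentle correction, peak |error|: %.3f" % max(abs(e) for e in calm))
print("forceful correction, peak |error|: %.3f" % max(abs(e) for e in panic))
```

The same recurrence with gain 0.5 settles quietly toward zero, while gain 1.5 produces exactly the escalating back-and-forth jerking described above - man and machine, or state and crowd, each answering the other's last push with a harder one.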
Now technological advance has led to a software tool providing better information than ever before about discontent in a society. Its application (and the motive for its creation) is policing, however, and I find that uninspiring. It can't be unmade, so make better use of it.
If we provided an oarsman, steering her vessel through a never-before-travelled rapid, with a device mapping every submerged rock and log, she would not use that knowledge to blow up the obstacles in her path, thereby changing the river and creating new hazards.
No, she would strive to steer around the threats, maintaining her speed, and safely arrive at her destination.