Reading this tweet by Maciej Ceglowski makes me want to set down a conjecture that I've been entertaining for the last couple of years (in part thanks to having read Maciej's and Kieran's previous work, as well as talking lots to Marion Fourcade).
The conjecture (and it is no more than a plausible conjecture) is simple, but it straightforwardly contradicts the common wisdom that is emerging in Washington DC, and other places too. This collective wisdom is that China is becoming a kind of all-efficient Technocratic Leviathan thanks to the combination of machine learning and authoritarianism. Authoritarianism has always been plagued by problems of gathering and collating information, and of being sufficiently responsive to its citizens' needs to remain stable. Now, the story goes, a combination of massive data gathering and machine learning will solve the basic authoritarian dilemma. When every transaction that a citizen engages in is recorded by tiny automatons riding on the devices they carry in their hip pockets, when cameras on every corner collect data on who is going where, who is talking to whom, and use facial recognition technology to distinguish ethnicity and identify enemies of the state, a new and far more powerful form of authoritarianism will emerge. Authoritarianism, then, can emerge as a more efficient competitor that can beat democracy at its home game (some fear this; some welcome it).
The theory behind this is one of strength reinforcing strength – the strengths of ubiquitous data gathering and analysis reinforcing the strengths of authoritarian repression to create an unstoppable juggernaut of nearly perfectly efficient oppression. Yet there is another story to be told – of weakness reinforcing weakness. Authoritarian states were always particularly prone to the deficiencies identified in James Scott's Seeing Like a State – the desire to make citizens and their doings _legible_ to the state, by standardizing and categorizing them, and reorganizing collective life in simplified ways, for example by remaking cities so that they were not organic structures that emerged from the doings of their citizens, but instead grand chessboards with ordered squares and boulevards, reducing all complexities to a square of planed wood. The grand state bureaucracies that were built to carry out these operations were responsible for multitudes of horrors, but also for the crumbling of the Stalinist state into a Brezhnevian desuetude, where everyone pretended to be carrying on as normal because everyone else was carrying on too. The deficiencies of state action, and its need to reduce the world into something simpler that it could comprehend and act upon, created a kind of feedback loop, in which imperfections of vision and action repeatedly reinforced each other.
So what might a similar analysis say about the marriage of authoritarianism and machine learning? Something like the following, I think. There are two notable problems with machine learning. One – that while it can do many extraordinary things, it is not nearly as universally effective as the mythology suggests. The other is that it can serve as a magnifier for already existing biases in the data. The patterns that it identifies may be the product of the problematic data that goes in, which is (to the extent that it is accurate) often the product of biased social processes. When this data is then used to make decisions that may plausibly reinforce those processes (by singling out, e.g., particular groups that are regarded as problematic for particular police attention, leading them to be more liable to be arrested and so on), the bias may feed upon itself.
This is a substantial problem in democratic societies, but it is a problem where there are at least some counteracting tendencies. The great advantage of democracy is its openness to contrary opinions and divergent perspectives. This opens up democracy to a specific set of destabilizing attacks, but it also means that there are countervailing tendencies to self-reinforcing biases. When there are groups that are victimized by such biases, they may mobilize against them (although they will find it harder to mobilize against algorithms than against overt discrimination). When there are obvious inefficiencies or social, political or economic problems that result from biases, then there will be ways for people to point out these inefficiencies or problems.
These correction tendencies will be weaker in authoritarian societies; in extreme versions of authoritarianism, they may barely even exist. Groups that are discriminated against will have no obvious recourse. Major mistakes may go uncorrected: they may be nearly invisible to a state whose data is polluted both by the means employed to discover and collect it, and by the policies implemented on the basis of this data. A plausible feedback loop would see bias leading to error leading to further bias, with no ready ways to correct it. This, of course, is likely to be reinforced by the ordinary politics of authoritarianism, and the typical reluctance to correct leaders, even when their policies are leading to disaster. The flawed ideology of the leader (We must all study Comrade Xi thought to discover the truth!) and of the algorithm (machine learning is magic!) may reinforce each other in highly unfortunate ways.
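The loop the last few paragraphs describe – biased data directing attention, attention generating more biased data, with or without an external check – can be made concrete with a deliberately crude simulation. Everything in it is invented for illustration: the group labels, the squared winner-take-more targeting rule, and the `correction` parameter are assumptions standing in for the mechanisms discussed above, not a model of any real policing system.

```python
def simulate(rounds=50, correction=0.0):
    """Toy model of a self-reinforcing bias loop.

    Two groups offend at the same true rate, but group B starts out
    slightly over-represented in the arrest records. Each round,
    attention follows the square of recorded arrests (a crude
    winner-take-more targeting rule), and new arrests follow attention,
    so the initial recording bias can compound. `correction` blends the
    allocation back toward an equal split, standing in for the
    countervailing pressures available in an open society.
    """
    arrests = {"A": 10.0, "B": 12.0}   # equal true rates; B over-recorded
    for _ in range(rounds):
        weights = {g: n ** 2 for g, n in arrests.items()}
        total = sum(weights.values())
        for g in arrests:
            share = weights[g] / total                     # attention follows past data
            share = (1 - correction) * share + correction * 0.5
            arrests[g] += 100 * share                      # new data follows attention
    return arrests["B"] / arrests["A"]                     # 1.0 means parity

print(simulate(correction=0.0))   # no external check: the ratio grows round after round
print(simulate(correction=0.6))   # strong corrective pressure: the ratio shrinks toward parity
```

The squared weighting matters: if attention were simply proportional to past arrests, the initial bias would merely persist forever; any rule that concentrates resources on the "worst" group makes it compound, which is the amplification dynamic the conjecture turns on.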
In short, there is a very plausible set of mechanisms under which machine learning and related techniques may turn out to be a disaster for authoritarianism, reinforcing its weaknesses rather than its strengths, by increasing its tendency to bad decision making, and further reducing the possibility of negative feedback that could help correct errors. This disaster would unfold in two ways. The first will involve enormous human costs: self-reinforcing bias will likely increase discrimination against out-groups, of the sort that we are seeing against the Uighur today. The second will involve more ordinary self-ramifying errors, that may lead to widespread planning disasters, which will differ from those described in Scott's account of High Modernism in that they are not as immediately visible, but that may also be more pernicious, and more damaging to the political health and viability of the regime for just that reason.
So in short, this conjecture would suggest that the conjunction of AI and authoritarianism (has someone coined the term 'aithoritarianism' yet? I'd really prefer not to take the blame) will have more or less the opposite effects of what people expect. It will not be Singapore writ large, and maybe more savage. Instead, it will be both more radically monstrous and more radically unstable.
Like all monotheoretic accounts, you should treat this post with some skepticism – political reality is always more complex and muddier than any abstraction. There are surely other effects (another, particularly interesting one for big countries such as China, is to relax the assumption that the state is a monolith, and to think about the intersection between machine learning and warring bureaucratic factions within the center, and between the center and the periphery). Yet I think it is plausible that it at least maps one significant set of causal relationships, that may push (in combination with, or against, other structural forces) towards very different outcomes than the conventional wisdom imagines. Comments, elaborations, qualifications and disagreements welcome.
Source: https://crookedtimber.org/2019/11/25/seeing-like-a-finite-state-machine/