Tuesday, March 14, 2023

The Human Use of Human Beings, Chapter 10




Summary of Chapter 10: Some Communication Machines and Their Future


Whereas the last chapter was about automata replacing workers, this one will address “a variety of problems concerning automata,” more specifically, automata of three categories: 1) some which “serve either to illustrate and throw light on the possibilities of communicative mechanisms in general,” 2) a few which serve as “the prosthesis and replacement of human functions which have been lost or weakened in certain unfortunate individuals,” and finally 3) those with a more sinister potential (163).

He discusses his tropism machine, called alternately the Moth or the Bedbug depending on whether it has been programmed to seek or avoid light; this has been developed to illustrate the role of competing types of feedback in the tremors of people with Parkinson’s. [There is a website with photos and discussion.]
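The feedback idea behind the Moth/Bedbug can be sketched in a few lines of Python (the gains below are illustrative numbers I have chosen, not values from the original device): an error-correcting loop with moderate gain settles smoothly toward its target, while an excessive gain overcorrects on every step and oscillates, the analogy Wiener drew to Parkinsonian tremor.

```python
# Toy sketch of a negative-feedback steering loop (illustrative, not
# Wiener's actual circuit). `error` is the angle between the machine's
# heading and the light source; each step applies a correction
# proportional to the current error.
def steer(gain, steps=20, error=1.0):
    history = []
    for _ in range(steps):
        error -= gain * error  # proportional correction
        history.append(error)
    return history

smooth = steer(gain=0.5)  # moderate gain: error decays steadily to zero
tremor = steer(gain=2.2)  # excessive gain: error flips sign each step and grows
print(smooth[-1], tremor[-1])
```

With gain between 0 and 1 the error shrinks monotonically; with gain above 2 each correction overshoots by more than the error itself, so the "tremor" grows instead of settling.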

Such machines may appear to be “exercises in virtuosity” (167) but they have actually been useful to a degree; there is another class of machines which provide more direct health benefits: better prostheses, readers for the blind, etc. He discusses his idea for a machine to communicate language using touch, as an improvement on visible speech (the so-called “hearing glove,” which was apparently later tried with Helen Keller but did not meet with much success).

Wiener gives a cybernetic three-stage description of “language,” by which he means speech (168-9; cf. Chapter 4). He notes that “deaf-mutes” can easily learn lip reading, but that their speech is harsh and “inefficient.”

The difficulties lie in the fact that for these people the act of conversation has been broken into two entirely separate parts. (170)

He discusses this in relation to the “sidetone” feedback of hearing one’s own voice in telephony, and also to Bell’s Vocoder speech synthesizer, which greatly reduces the information in human speech while remaining understandable and recognizable, leading to a distinction between “used and unused information in speech”:

When we distinguish between used and unused information in speech, we distinguish between the maximum coding capacity of speech as received by the ear, and the maximum capacity that penetrates through the cascade network of successive stages consisting of the ear followed by the brain. (172)

The reduction of information in the message is necessary to be able to transfer the information from the medium of speech through “an inferior sense like touch.”
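A modern back-of-envelope comparison makes the “used and unused information” point concrete. Wiener does not give these figures; I am using standard telephone PCM (64 kbit/s, per the G.711 standard) against a classic low-rate channel vocoder (roughly 2.4 kbit/s, an approximate textbook figure):

```python
# How much of the telephone channel's bit budget is actually needed
# for intelligible speech? (Modern figures, not Wiener's.)
pcm_bps = 64_000     # G.711 telephone PCM: 8 kHz sampling * 8 bits/sample
vocoder_bps = 2_400  # typical low-rate vocoder (approximate figure)

used_fraction = vocoder_bps / pcm_bps
reduction = pcm_bps // vocoder_bps
print(f"Fraction needed for intelligibility: {used_fraction:.1%}")
print(f"Reduction factor: {reduction}x")
```

On these numbers, under 4% of the raw channel carries the “used” information; the remaining 96% is what a touch-based channel could afford to discard.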

From this point on, the chief direction of investigation must be that of the more thorough training of deaf-mutes in the recognition and the reproduction of sounds. (173)

[In other words, his focus is on getting “deaf-mutes” to be able to speak more clearly; basically to invent a device to assimilate them to the speaking population, rather than using sign language (which he has not mentioned, though it is a fascinating alternative medium, one that loses certain capacities of speech but opens up many more).]

He gives the example of an artificial lung in which “the normal feedback in the medulla and brain stem of the healthy person will be used even in the paralytic to supply the control of his breathing. Thus it is hoped, that the so-called iron lung may no longer be a prison in which the patient forgets how to breathe, but will be an exerciser for keeping his residual faculties of breathing active, and even possibly of building them up to a point where he can breathe for himself and emerge from the machinery enclosing him.” (174)

He now turns to more sinister machines, beginning with his own idea for a chess machine, and discusses the limited possibilities of chess machines in his day: one that could plan two moves ahead was considered the best attainable, and the idea of creating an actually perfect or even good player was “hopeless.”

The number of combinations increases roughly in geometrical progression. Thus the difference between playing out all possibilities for two moves and for three moves is enormous. To play out a game—something like fifty moves—is hopeless in any reasonable time. (175)

The problem is slowness: Shannon has an idea for taking the game further than two moves, but the machine would probably get slower and slower (and fail to meet the time limits in the rules). Its play would be “stiff and uninteresting” but possibly good, and chance could be introduced to prevent humans from beating it methodically.
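The “geometrical progression” Wiener describes is easy to check, assuming a rough branching factor of about 30 legal moves per chess position (my ballpark assumption; the true figure varies widely by position):

```python
# Rough illustration of Wiener's "geometrical progression" point.
# Assumption: ~30 legal moves per position (a common ballpark figure).
BRANCHING = 30

def positions(plies: int) -> int:
    """Lines of play to examine when looking `plies` half-moves ahead."""
    return BRANCHING ** plies

print(positions(2))    # two half-moves: 900 lines
print(positions(3))    # three half-moves: 27,000 lines
print(positions(100))  # a full ~50-move game: astronomically many
```

Each extra half-move multiplies the work thirtyfold, which is why extending Shannon's scheme past a few moves would make the machine “slower and slower,” and why playing out a whole game exhaustively is hopeless.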

Though we have seen that machines can be built to learn, the technique of building and employing these machines is still very imperfect. (177)

He makes a comment that now seems prescient in regard to various recent chat AIs which turned racist, etc.:

A chess-playing machine which learns might show a great range of performance, dependent on the quality of the players against whom it had been pitted. The best way to make a master machine would probably be to pit it against a wide variety of good chess players. On the other hand, a well-contrived machine might be more or less ruined by the injudicious choice of its opponents. A horse is also ruined if the wrong riders are allowed to spoil it. (177)

[Though on stating this it occurs to me that I am treating racism the same way as I have accused Wiener of doing, as an irrational anomaly rather than as a central part of the functioning of social inequality.]

He notes two kinds of learning machines: those characterized by preference (“a statistical preference for a certain sort of behavior, which nevertheless admits the possibility of other behavior”) and those characterized by constraint (“certain features of its behavior may be rigidly and unalterably determined”). [And the chess-playing machine he mentions would be a hybrid of these, with the rules programmed in as constraints, but still learning “tactics and policies” through preference.]
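The preference/constraint hybrid can be sketched as a toy move-chooser (my illustration, not anything Wiener describes): the legal-move list is a rigid constraint, while weighted random choice encodes a statistical preference that learning can shift without ever forbidding a legal move.

```python
import random

# Toy hybrid learner: constraints fix what is allowed at all;
# preference shapes what is likely. (Illustrative sketch only.)
class PreferencePlayer:
    def __init__(self, legal_moves):
        # Constraint: only these moves exist -- "rigidly and unalterably
        # determined"; learning never adds or removes entries.
        self.weights = {m: 1.0 for m in legal_moves}

    def choose(self):
        # Preference: favor well-weighted moves, but every legal move
        # keeps nonzero probability ("admits the possibility of other
        # behavior").
        moves = list(self.weights)
        return random.choices(moves, weights=[self.weights[m] for m in moves])[0]

    def reinforce(self, move, reward):
        # Learning shifts the statistical preference, floored so no
        # legal move is ever ruled out entirely.
        self.weights[move] = max(0.1, self.weights[move] + reward)

player = PreferencePlayer(["e4", "d4", "c4"])
player.reinforce("e4", 5.0)
print(player.choose())  # "e4" is now most likely, but not guaranteed
```

Note how the design mirrors Wiener's point about injudicious opponents: feed `reinforce` bad rewards and the preferences degrade, while the constraints (the legal moves) stay intact.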

Shannon has already pointed out the potential military applications of such learning machines, as has the Dominican priest Dubarle in a review of Wiener’s Cybernetics. Wiener quotes Dubarle at length regarding the possible misuse of a machine à gouverner. Dubarle makes the point that machines can only understand human behavior through probability:

At all events, human realities do not admit a sharp and certain determination, as numerical data of computation do. They only admit the determination of their probable values. A machine to treat these processes, and the problems which they put, must therefore undertake the sort of probabilistic, rather than deterministic thought, such as is exhibited for example in modern computing machines. (179)

The machines à gouverner will define the State as the best-informed player at each particular level; and the State is the only supreme co-ordinator of all partial decisions. These are enormous privileges; if they are acquired scientifically, they will permit the State under all circumstances to beat every player of a human game other than itself by offering this dilemma: either immediate ruin, or planned co-operation.

[This is] the adventure of our century: hesitation between an indefinite turbulence of human affairs and the rise of a prodigious Leviathan. In comparison with this, Hobbes’ Leviathan was nothing but a pleasant joke. We are running the risk nowadays of a great World State, where deliberate and conscious primitive injustice may be the only possible condition for the statistical happiness of the masses: a world worse than hell for every clear mind. (180)

Dubarle’s somewhat weak proposal in response:

Perhaps it would not be a bad idea for the teams at present creating cybernetics to add to their cadre of technicians, who have come from all horizons of science, some serious anthropologists, and perhaps a philosopher who has some curiosity as to world matters.

Wiener notes that the machine itself would not be all-powerful (because “too crude and imperfect”) but would enable those who control it to become so:

or that political leaders may attempt to control their populations by means not of machines themselves but through political techniques as narrow and indifferent to human possibility as if they had, in fact, been conceived mechanically. (181)

[or as it turns out so far, corporations focused only on manipulating partial identities for profit.]

The great weakness of the machine—the weakness that saves us so far from being dominated by it—is that it cannot yet take into account the vast range of probability that characterizes the human situation. The dominance of the machine presupposes a society in the last stages of increasing entropy, where probability is negligible and where the statistical differences among individuals are nil. Fortunately we have not yet reached such a state.

He provides an interesting reflection on how this sort of philosophical possibility becomes the foundation of a non-technological (per se) way of thinking in the context of the cold war:

A sort of machine à gouverner is thus now essentially in operation on both sides of the world conflict, although it does not consist in either case of a single machine which makes policy, but rather of a mechanistic technique which is adapted to the exigencies of a machine-like group of men devoted to the formation of policy. (182)

Wiener echoes Dubarle’s call for getting some kinder, gentler experts in on the decision-making:

In order to avoid the manifold dangers of this, both external and internal, he is quite right in his emphasis on the need for the anthropologist and the philosopher. In other words, we must know as scientists what man’s nature is and what his built-in purposes are, even when we must wield this knowledge as soldiers and as statesmen; and we must know why we wish to control him.

[And so, the Macy conferences. But isn’t it this very, Dewey-esque or Kerr-esque view of the university/scholarly world that is currently dissolving, the idea that somehow the humanists and social scientists (and Dubarle perhaps hoped, the theologians) would temper the excesses of the technocrats?]

He emphasizes that “the machine’s danger to society is not from the machine itself but from what man makes of it,” and distinguishes between “know-how” and “know-what:”

Our papers have been making a great deal of American “know-how” ever since we had the misfortune to discover the atomic bomb. There is one quality more important than “know-how” and we cannot accuse the United States of any undue amount of it. This is “know-what” by which we determine not only how to accomplish our purposes, but what our purposes are to be. (183)

Again, the problem is not actual exploitation or capitalism or anything like that per se, but a lack of any sense of direction about where we want to take technology, or of thought about how it will actually affect the world (and this appears today in the “Oops, our bad” discourse on the accidental side effects of ChatGPT, art generators, etc.). Wiener turns to the lessons of fairy tales (e.g., if you find a bottle with a genie in it, leave the genie in the bottle and don’t make wishes) as illustrations of “the tragic view of life which the Greeks and many modern Europeans possess” and which Americans need to learn (183-4). The myth of Prometheus serves as an example of the ambivalent attitude of the ancient Greeks toward technology, which we moderns could learn from.

The sense of tragedy is that the world is not a pleasant little nest made for our protection, but a vast and largely hostile environment, in which we can achieve great things only by defying the gods; and that this defiance inevitably brings its own punishment. It is a dangerous world, in which there is no security, save the somewhat negative one of humility and restrained ambitions. (184)

If a man with this tragic sense approaches, not fire, but another manifestation of original power, like the splitting of the atom, he will do so with fear and trembling. He will not leap in where angels fear to tread, unless he is prepared to accept the punishment of the fallen angels. Neither will he calmly transfer to the machine made in his own image the responsibility for his choice of good and evil, without continuing to accept a full responsibility for that choice.

Modern Americans, lacking a sense of “know-what,” continually get trapped by their blind faith in technology. He compares intelligent machines to two kinds of fairy-tale device: the magical monkey’s paw (which is always very literal-minded) and the genie in the bottle (which is mercurial and indifferent to human happiness). The former is the more constrained and thus literal device; the latter is the kind which learns through preference. “For the man who is not aware of this, to throw the problem of his responsibility on the machine, whether it can learn or not, is to cast his responsibility to the winds, and to find it coming back seated on the whirlwind” (185).

Moving beyond literal machines, he returns to the point he had made earlier about the dangerous rise of machine-like organization and thinking in the Twentieth Century:

When human atoms are knit into an organization in which they are used, not in their full right as responsible human beings, but as cogs and levers and rods, it matters little that their raw material is flesh and blood. What is used as an element in a machine, is in fact an element in the machine. Whether we entrust our decisions to machines of metal, or to those machines of flesh and blood which are bureaus and vast laboratories and armies and corporations, we shall never receive the right answers to our questions unless we ask the right questions. (185-6)

He ends with another reference to evil, perhaps meant to help accustom American readers to a “tragic” mindset:

The hour is very late, and the choice of good and evil knocks at our door. (186)



 
