Randolph Harris II International


It Was What We Are Really Paid for

Anyone can see and touch the telephone or computer on the nearest desk. This is not true of the networks that connect them to the World. Thus we remain, for the most part, ignorant about the high-speed advances that are fashioning them into something resembling the nervous system of our society. Judged by today’s standards, the networks that Morse, Western Union, Bell, and others set up when they first began stringing wires were unsophisticated, if not downright primitive. Common sense taught that a straight line is the shortest distance between two points. So engineers sought this straight line, and messages sent from one city to another were always sent over this pathway. As these first-stage networks expanded, however, it was discovered that in the World of the network, a straight line is not necessarily the best way to get a message from one place to another. In fact, more messages could flow faster if, instead of always sending a call, say, from Tallahassee to Atlanta via the same route, the network could count the calls in each leg of the system and then shunt the Atlanta-bound call onto available lines, sending it as far away as New Orleans or even St. Louis, rather than delaying it because the shortest straight-line route happened to be busy. Primitive though it was, this was an early injection of “intelligence” or “smarts” into the system, and it meant, in effect, that the entire system leaped to a second stage of development. This breakthrough led to many additional innovations, often of marvelous ingenuity, that eventually allowed the telephone network to monitor many more things about itself, to check its components and anticipate and even diagnose breakdowns. It was as though a once-dead or inert organism suddenly began checking its own blood pressure, pulse, and breathing rate. The network became self-aware. #RandolphHarris 1 of 21
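To make the routing idea concrete, here is a minimal sketch in Python. The cities, route tables, and circuit counts are all invented for illustration; the switching systems of that era did this in hardware, not high-level code.

```python
# A toy model of second-stage "alternate routing": if the direct
# trunk between two cities is full, shunt the call to the first
# alternate path with a free circuit. All capacities are invented.

ROUTES = {
    ("Tallahassee", "Atlanta"): [
        ["Tallahassee-Atlanta"],                      # direct path
        ["Tallahassee-New Orleans", "New Orleans-Atlanta"],
        ["Tallahassee-St. Louis", "St. Louis-Atlanta"],
    ]
}

# Free circuits remaining on each leg (hypothetical numbers).
free_circuits = {
    "Tallahassee-Atlanta": 0,        # the direct route is busy
    "Tallahassee-New Orleans": 3,
    "New Orleans-Atlanta": 1,
    "Tallahassee-St. Louis": 5,
    "St. Louis-Atlanta": 4,
}

def route_call(origin, destination):
    """Return the first path whose every leg has a free circuit."""
    for path in ROUTES[(origin, destination)]:
        if all(free_circuits[leg] > 0 for leg in path):
            for leg in path:              # seize one circuit per leg
                free_circuits[leg] -= 1
            return path
    return None                           # every path is busy

print(route_call("Tallahassee", "Atlanta"))
# -> ['Tallahassee-New Orleans', 'New Orleans-Atlanta']
```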

Crisscrossing the entire planet, with wires running into hundreds of millions of homes, with whole copper mines of cable snaking under the streets of cities, with complex switching systems and transmission technologies in them, these second-stage networks, constantly refined, improved, extended, and given more and more intelligence, were among the true marvels of the industrial age. Because they are largely invisible to the ordinary user, our civilization has radically underestimated the congealed brilliance and conceptual beauty of these hidden networks as well as their evolutionary significance. For while some human populations still lack even the most rudimentary telephone service, researchers are already hard at work on another revolutionary leap in telecommunications—the creation of even more sophisticated third-stage networks. Nowadays, as millions of computers are plugged into them, from giant Crays to tiny laptops, as new networks continually spring up, as they are linked to form a denser and denser interconnected mesh, a still higher level of intelligence or “self-awareness” is needed to handle the incredibly vast volumes of information pulsing through them. As a result, researchers are racing to make networks even more self-aware. Their goal is so-called neural networks. These will not only route and reroute messages, but actually learn from their own past experience, forecast where and when heavy loads will be, and then automatically expand or contract sections of the network to match the requirements. This is as though the San Diego Freeway or German Autobahn were clever enough to widen and narrow itself according to how many cars it expected at any moment. #RandolphHarris 2 of 21
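A toy sketch of what “learning from past loads” might mean follows. The moving-average forecast below is a deliberately crude stand-in for the neural techniques described, and every number is hypothetical.

```python
# A toy sketch of "learn from past loads, then resize the highway."
# The forecast is a simple moving average; real neural networks
# would learn far richer patterns. All numbers are invented.

from collections import deque

class AdaptiveLink:
    def __init__(self, window=4):
        self.history = deque(maxlen=window)   # recent call volumes
        self.capacity = 10                    # circuits currently allocated

    def observe(self, calls_this_hour):
        self.history.append(calls_this_hour)

    def forecast(self):
        # Moving average of recent traffic as a naive predictor.
        return sum(self.history) / len(self.history) if self.history else 0

    def resize(self, headroom=1.2):
        # Widen or narrow the link to match expected traffic.
        self.capacity = max(1, round(self.forecast() * headroom))

link = AdaptiveLink()
for load in [8, 12, 15, 20]:      # traffic trending upward
    link.observe(load)
    link.resize()
    print(f"load={load:>2}  forecast={link.forecast():.1f}  capacity={link.capacity}")
```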

Yet even before this major effort is complete, another even more gigantic leap is being taken. We are moving not merely into a fourth-stage system, but to another kind of intelligence altogether. Until now, even the smartest networks, including the new neural networks, had only what might be called “intra-intelligence.” All their “smarts” were aimed inward. Intra-intelligence is like the intelligence embedded in our own autonomic nervous system, which regulates the involuntary operations of the body, such as heartbeat and hormonal secretions—the functions we seldom think about, but which are necessary to sustain life. Intra-intelligent networks deliver the message precisely as sent. Scientists and engineers struggle to maintain the purity of the message, fighting to eliminate any “noise” that might garble or alter it. They may scramble it or digitize it or packetize it (id est, break it into short spurts) to get it from here to there. However, they reconstitute it again at the receiving end, and the message content remains the same. Today we are reaching beyond intra-intelligence toward networks that might be called “extra-intelligent.” They do not just transfer data. They analyze, combine, repackage, or otherwise alter messages, sometimes creating new information along the way. Thus massaged or enhanced, what comes out the other end is different from what is fed in—changed by software embedded in the networks. These are the so-called “Value Added Networks,” or VANs. They are extra-intelligent. At present most VANs merely scramble and rescramble messages to adapt them to different media. For example, in France the Atlas 400 service of France Telecom accepts data from a mainframe computer, say, then repackages it in a form that can be received by a PC, a fax machine, or a videotex terminal. #RandolphHarris 3 of 21
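The repackaging role of a VAN can be suggested with a small sketch. The device formats below are invented placeholders, not the actual Atlas 400 protocols.

```python
# A toy "value-added" repackaging step in the spirit of Atlas 400:
# one incoming message, re-encoded per receiving device. The three
# output forms are stand-ins, not real telecom formats.

def repackage(message: str, device: str) -> str:
    if device == "pc":
        # Plain text file for a personal computer.
        return message.encode("ascii", "replace").decode()
    if device == "fax":
        # Stand-in for rasterizing the text into a page image.
        return f"[rasterized page] {message.upper()}"
    if device == "videotex":
        # A 40-column videotex screen line.
        return message[:40]
    raise ValueError(f"unknown device: {device}")

msg = "Quarterly figures follow; see attached table."
for dev in ("pc", "fax", "videotex"):
    print(dev, "->", repackage(msg, dev))
```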

Not very exciting, it would appear. However, the concept of adding value to a message does not stop with altering its technical characteristics. The French Minitel network, which links 5 million homes and businesses, offers Gatrad, Mitrad, Dilo, and other services that can accept a message in French and automatically deliver it in English, Arabic, Spanish, German, Italian, or Dutch—and vice versa. While the translations are still rough, they are workable, and some services also have the specialized vocabularies needed for subjects involving, say, aerospace, nuclear, or political topics. Other networks receive data from a sender, run them through a computerized model, and deliver an “enhanced” message to the end-user. A simple hypothetical example illustrates the point. Imagine that a trucking firm based in the outskirts of Paris must regularly dispatch its trucks to forty different European distributors, restocking their shelves with a product. Road conditions and weather differ in various parts of Europe, as do currency exchange rates, gasoline prices, and other factors. In the past each driver calculated the best route, or else phoned the transport company each day for instructions. However, imagine instead that an independent VAN operator—a common carrier—not only can send signals to truck drivers all over Europe, but also collects current information on road conditions, traffic, weather, currencies, and gas prices. The Paris trucker can now load its daily messages and routing instructions onto the VAN for distribution to its drivers. However, the messages, before reaching the drivers, are run through the network’s software program, which automatically adjusts routes to minimize driving time, mileage, gas costs, and currency expenses in light of the latest data. #RandolphHarris 4 of 21
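A minimal sketch of that enhancement step might look like this, with invented routes, prices, and consumption figures standing in for the live data a real VAN would collect.

```python
# A toy version of the VAN's enhancement step: the trucker's
# instructions name a destination; the network picks the cheapest
# route from the latest fuel prices and road conditions. All
# routes, prices, and delay factors are invented for illustration.

routes_to_distributor = [
    # (name, distance_km, fuel_price_per_litre, delay_factor)
    ("via Brussels", 900, 1.80, 1.00),
    ("via Strasbourg", 820, 1.95, 1.25),   # roadworks: 25% slower
    ("via Lyon", 1000, 1.70, 1.05),
]

LITRES_PER_KM = 0.35      # hypothetical truck consumption
COST_PER_HOUR = 40.0      # driver time, assuming ~80 km/h cruising

def route_cost(distance, fuel_price, delay):
    fuel = distance * LITRES_PER_KM * fuel_price
    hours = (distance / 80.0) * delay
    return fuel + hours * COST_PER_HOUR

best = min(routes_to_distributor,
           key=lambda r: route_cost(r[1], r[2], r[3]))
print("enhanced instruction: take the route", best[0])
```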

In this case, the instructions sent by the transport firm to its drivers are altered en route and “enhanced” before reaching them. The telecommunications carrier firm—the operator of the Value Added Network—has added value by integrating the customer’s message with fresh information, transforming it, and then distributing it. This, however, suggests only the simplest use of an extra-intelligent net. As the networks come to offer more complex services—collecting, integrating, and evaluating data, drawing automatic inferences, and running input through sophisticated models—their potential value soars. In short, we are now looking toward networks whose “smarts” are no longer aimed at changing or improving the network itself but which, in effect, act on the outside World, adding “extra-intelligence” to the messages flowing through them. Still largely a gleam in their architects’ eyes, extra-intelligent nets represent an evolutionary leap to a new level of communication. They also raise to a higher level the sophistication required of their users. For a company to load its messages on a VAN and permit them to be altered without a deep understanding of the assumptions buried in the VAN’s software is to operate on blind faith, rather than rational decision. For hidden biases built into the software can cost a user dearly. Foreign airlines, for example, have complained to the U.S. Department of Transportation that they are discriminated against in the electronic network that thousands of U.S. travel agents use in choosing flights for their clients. Called Sabre, the computerized reservation system is run by AMR Corp., which also owns American Airlines. The system, which monitors reservations on many airlines, has extra-intelligence embedded in it in the form of a software model that tells the travel agent the best available flights. At issue in the complaint were the assumptions built right into this software. #RandolphHarris 5 of 21

Thus, when a travel agent searches, say, for a flight from Frankfurt to San Jose, California, her computer screen displays the flights in order of the length of time they take. The shorter the flight the better. However, the Sabre software automatically assumed that changing planes and transferring from one airline to another takes ninety minutes, irrespective of the actual time required. Since many of their flights to the United States of America required a change of plane and transfer to a domestic American airline, the foreign carriers charged that the hidden premises of the software unfairly penalized those whose interline transfers require less than ninety minutes. For this reason, they argued, their flights were less likely to be chosen by travel agents. In short, the extra-intelligence was biased. Imagine, soon, not a handful of such disputes and networks, but thousands of VANs with tens of thousands of built-in programs and models, continually altering and manipulating millions of messages as they whiz through the economy along these self-aware electronic highways. Britain alone already boasts eight hundred VANs, West Germany seven hundred, and more than five hundred companies in Japan have registered with the Ministry of Posts and Telecommunications to operate VANs. The existence of VANs promises to squeeze untold billions of dollars out of today’s costs of production and distribution by slashing red tape, cutting inventory, and speeding up response time. However, the injection of extra-intelligence into these fast-proliferating and interlinked nets has a larger significance. It is like the sudden, blinding addition of a cerebral cortex to an organism that never had one. Combined with the autonomic nervous system, it begins to give the organism not merely self-awareness and the ability to change itself, but the ability to intervene directly in our lives, beginning first with our business. #RandolphHarris 6 of 21
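The nature of the alleged bias is easy to demonstrate in miniature. The flight times below are invented; only the flat ninety-minute transfer assumption comes from the complaint described above.

```python
# A toy illustration of the hidden premise at issue: rank
# itineraries by total elapsed time, but charge every interline
# transfer a flat 90 minutes regardless of the real connection.

FIXED_TRANSFER_MIN = 90

itineraries = [
    # (label, flight_minutes, actual_transfer_minutes) -- invented
    ("domestic nonstop",       660, 0),
    ("foreign + quick change", 600, 45),   # really 645 min in total
]

def displayed_duration(flight_min, transfer_min):
    # The model ignores the true transfer time entirely.
    padding = FIXED_TRANSFER_MIN if transfer_min > 0 else 0
    return flight_min + padding

for name, fly, xfer in sorted(
        itineraries, key=lambda i: displayed_duration(i[1], i[2])):
    print(name, "shown as", displayed_duration(fly, xfer), "min",
          "(actually", fly + xfer, "min)")
# The 645-minute foreign itinerary is displayed as 690 minutes and
# ranks below the 660-minute nonstop, despite being faster.
```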

Because of this, networks will take on revolutionary new roles in business and society. And even though, so far as we know, no one has yet used extra-intelligence for pernicious or even criminal purposes, the spread of extra-intelligent networks is still in its infancy, with rules and safeguards yet to be defined. Who knows what will follow? By creating a self-aware electronic neural system that is extra-intelligent, we change the rules of culture as well as business. E-I, as we may call it, will raise perplexing questions about the relationships of data to information and knowledge, about language, about ethics and the abstruse models concealed in software. Rights of redress, responsibility for error or bias, issues of privacy and fairness will all cascade into executive suites and the courts in the years to come as society tries to adapt to the existence of extra-intelligence. As the implications of E-I will someday reach far beyond mere business matters, they should cause deep social, political, and even philosophical reflection. For prodigies of labour, intellect, and scientific imagination that dwarf anything involved in constructing Egyptian pyramids, medieval cathedrals, or Stonehenge are now being poured into the construction of the electronic infrastructure of tomorrow’s super-symbolic society. E-I, as we shall see in the following reports, is already upsetting power relationships in whole sectors of the emerging economy. In the past, economic development and poverty reduction depended mainly on a country’s domestic factors—the availability of capital, local resources and environment, along with the propensity of the population to save, the drive, energy and work habits of the labour force, and so forth. Since the mid-1950s, this has been less and less the case. As the World economy has become more integrated, with trade, people, capital and especially knowledge moving across boundaries, external factors have risen in importance. #RandolphHarris 7 of 21

And that includes indirect, second-order effects all too often unnoticed or ignored. The future of poverty cannot be understood until these spillover implications are taken into account. A good example can be seen in the amazing chain reaction that helped fuel Asia’s economic rise—a rise that has seen more than a billion Asians climb above the two-dollar-a-day poverty line in just forty years. The story actually began in the mid-1950s as the United States of America started its development of a knowledge-based wealth system. Across the Pacific, Japan’s industrial economy, having been ground into dust during World War II, was still pathetic. Its defeated military was nonexistent and its politics were shaky at best. At this pivotal moment, the United States of America, facing an ascendant, nuclear-armed Soviet Union, reached a three-part deal with Japan. Militarily, Japan would ally with the United States of America against the threat posed by the communist U.S.S.R. In return, politically, the United States of America would tacitly support the conservative Liberal Democratic Party; economically, it would fling its doors wide open to Japanese exports. The problem with this last point was that Japan had little to sell that Americans might want. Around the World at the time, Japanese products were not considered the most desirable. In a British play as late as the 1970s, actor Robert Morley still got a laugh by referring to “typical Japanese muck.” However, by then Japanese exports were no longer “muck.” Japan solved its muck problem by drawing on two largely American innovations. The first involved statistical quality-control methods spread throughout Japan by Joseph M. Juran and W. Edwards Deming during the 1950s and ’60s. Assembly-line perfection became a national passion. (For their contributions, the emperor awarded both men the Order of the Sacred Treasure.) #RandolphHarris 8 of 21
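The heart of those statistical quality-control methods can be sketched in a few lines: establish control limits from an in-control baseline run, then flag any sample that drifts beyond three standard deviations. All the measurements below are invented.

```python
# A minimal control-chart sketch in the Shewhart style: compute
# limits from an in-control baseline, then flag later samples that
# fall outside mean +/- 3 sigma. All numbers are invented.

from statistics import mean, stdev

baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 10.1]  # in-control run
m, s = mean(baseline), stdev(baseline)
upper, lower = m + 3 * s, m - 3 * s

for x in [10.0, 10.3, 11.2]:          # new samples off the line
    status = "OK" if lower <= x <= upper else "OUT OF CONTROL"
    print(f"sample {x}: {status} (limits {lower:.2f}..{upper:.2f})")
```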

Quality did not become a catchword in U.S.A. manufacturing for another decade or two, even though Americans were making some of the most durable products known to man. And while Toyotas, Hondas, and Nissans used to routinely outrank the cars of Detroit and Europe in J.D. Power quality surveys, Cadillac, Chevy, Ford, Audi, BMW, and Mercedes-Benz are becoming more desirable to the American consumer. The other American contribution was the industrial robot, about which a similar story can be told. In 1956, engineer Joseph F. Engelberger and entrepreneur George C. Devol met one evening over cocktails and discussed I, Robot, Isaac Asimov’s classic science-fiction collection. Together they set up a company, named it Unimation (for “universal automation”), and, five years later, delivered the World’s first working industrial robot. General Motors installed it in its plant outside Trenton, New Jersey, but other American companies showed little enthusiasm for the new computer-driven technology. “I had a hard time with American industrialist[s],” Engelberger later said. By contrast, he continued, “the Japanese caught on right away. That’s why robotics is a $7 billion industry and it’s dominated by Japan.” In 1965, according to the Japan Automobile Manufacturers Association, “new technologies…became a top priority.” More specifically, by 1970, digital technology, much of it imported from the United States of America, led “in a short while to a computerization of the entire manufacturing process,” while robots “gradually eliminated the need for humans to perform dangerous work.” By the late 1970s, according to John A. Kukowski and William R. Bolton in a Japanese Technology Evaluation Center report, “Japan was the World leader in industrial assembly robots, and in 1992 it operated 69 percent of all installed industrial robots in the World, compared with 15 percent operated by Europe and 12 percent by the United States.” #RandolphHarris 9 of 21

The combination of U.S. technological knowledge and an American hunger for Japanese products, plus Japan’s own technological savvy and underestimated innovativeness, shot adrenaline into its economy. While its factories poured out such consumer products as VCRs, TVs, cameras and stereos, Japan also moved aggressively into semiconductor chips and computer components for the American market, bringing itself farther toward knowledge-based production. By 1979, Japan was IBM’s chief rival in the manufacture of computers, and a book entitled Japan as No. 1 attracted attention on both sides of the Pacific. The author attributed much of the Japanese corporation’s success to its ravenous hunger for knowledge and its emphasis on training—bringing in foreign consultants and sending countless teams out to visit World centers where the most advanced knowledge was being pursued. The first secret of Japanese success was “learning, learning, learning.” The second was creative commercial application of new knowledge. The third was speed. Thus, by the 1980s, Japanese chip technology was advancing so rapidly that Washington slapped trade limits on the importation of Japanese semi-conductors. Cars, consumer electronics, computers, chips, copiers—none of these, on the surface, seemed relevant to the lives of some less affluent people in Asia. Or to the attack on poverty. However, they were. Now, many people are left wondering whether microtechnology will lead to nanotechnology. Can bulldozers be used to make wristwatches? At most, they can help to build factories in which watches are made. Though there could be surprises, the relevance of microtechnology to molecular nanotechnology seems similar. Instead, a bottom-up approach is needed to accomplish engineering goals on the molecular scale. #RandolphHarris 10 of 21

What are the main tools used for molecular engineering? Almost by definition, the path to molecular nanotechnology must lead through molecular engineering. Working in different disciplines, driven by different goals, researchers are making progress in this field. Chemists are developing techniques able to build precise molecular structures of sorts never before seen. Biochemists are learning to build structures of familiar kinds, such as proteins, to make new molecular objects. In a visible sense, most of the tools used by chemists and biochemists are rather unimpressive. They work on countertops cluttered with dishes, bottles, tubes, and the like, mixing, stirring, heating, and pouring liquids—in biochemistry, the liquid is usually water with a trace of material dissolved in it. Periodically, a bit of liquid is put into a larger machine and a strip of paper comes out with a graph printed on it. As one might guess from this description, research in the molecular sciences is usually much less expensive than research in high-energy physics (with its multibillion-dollar particle accelerators) or research in space (with its multibillion-dollar spacecraft). Chemistry has been called “small science,” and not because of the size of the molecules. Chemists and biochemists advance their field chiefly by developing new molecules that can serve as tools, helping to build or study other molecules. Further advances come from new instrumentation, new ways to examine molecules and determine their structures and behaviours. Yet more advances come from new software tools, new computer-based techniques for predicting how a molecule with a particular structure will behave. Many of these software tools let researchers peer through a screen into simulated molecular Worlds much like those toured in past reports. #RandolphHarris 11 of 21
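As one small taste of such software, here is the textbook Lennard-Jones 12-6 potential for the energy between two nonbonded atoms. The parameters are rough values often quoted for argon, used here purely for illustration.

```python
# The Lennard-Jones 12-6 potential: a standard textbook
# approximation for the interaction energy of two nonbonded atoms.
# Parameters are rough argon-like values, illustrative only.

def lennard_jones(r_nm, epsilon=0.997, sigma=0.34):
    """Pair energy in kJ/mol at separation r_nm (nanometres)."""
    sr6 = (sigma / r_nm) ** 6
    return 4 * epsilon * (sr6 ** 2 - sr6)

for r in (0.30, 0.38, 0.50, 0.80):
    print(f"r = {r:.2f} nm  ->  U = {lennard_jones(r):+.3f} kJ/mol")
# Near 0.38 nm the energy reaches its minimum (the atoms "touch");
# push closer and the repulsive wall rises steeply.
```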

Of these fields, it is biomolecular science that is most obviously developing tools that can build nanotechnology, because biomolecules already form molecular machines, including devices resembling crude assemblers. This path is easiest to picture, and can surely work, yet there is no guarantee that it will be fastest: research groups following another path may well win. Each of these paths is being pursued Worldwide, and on each, progress is accelerating. Physicists have recently contributed new tools of great promise for molecular engineering. These are the proximal probes, including the scanning tunneling microscope (STM) and the atomic force microscope (AFM). A proximal probe can image (and sometimes modify) the surface and any molecules that may be stuck to it. An STM brings a sharp, electrically conducting needle up to an electrically conducting surface, almost touching it. The needle and surface are electrically connected so that a current will flow if they touch, like closing a switch. However, at just what point do soft, fuzzy atoms “touch”? It turns out that a detectable current flows when just two atoms are in tenuous contact—fuzzy fringes barely overlapping—one on the surface and one on the tip of the needle. By delicately maneuvering the needle around over the surface, keeping the current flowing at a tiny, constant rate, the STM can map the shape of the surface with great precision. Indeed, to keep the current constant, the needle has to go up and down as it passes over individual atoms. The STM was invented by Gerd Binnig and Heinrich Rohrer, research physicists studying surface phenomena at IBM’s research labs in Zurich, Switzerland. After working through the 1970s, Rohrer and Binnig submitted their first patent disclosure on an STM in mid-1979. In 1982, they produced images of a silicon surface, showing individual atoms. #RandolphHarris 12 of 21
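The constant-current feedback loop at the heart of the STM can be caricatured in a few lines. The exponential dependence of tunneling current on gap is standard physics; the surface profile, gain, and units below are invented.

```python
# A cartoon of the STM's constant-current feedback: tunneling
# current rises exponentially as the tip-surface gap shrinks, and
# the controller nudges the tip height to hold the current at its
# setpoint. Surface profile, gain, and units are all invented.

import math

SETPOINT = 1.0     # target current (nominal nanoamps) at a 0.5 nm gap
DECAY = 10.0       # current changes by a factor of e per 0.1 nm of gap
GAIN = 0.1         # feedback gain on the log-current error

def tunnel_current(gap_nm):
    return SETPOINT * math.exp(-DECAY * (gap_nm - 0.5))

surface = [0.0, 0.0, 0.1, 0.2, 0.1, 0.0]   # atoms as bumps (nm)
tip = 0.5                                   # height above baseline

for x, h in enumerate(surface):
    error = math.log(tunnel_current(tip - h) / SETPOINT)
    tip += GAIN * error        # raise the tip when current runs high
    print(f"x={x}  surface={h:.1f}  tip height={tip:.2f}")
# The recorded tip heights rise and fall with the atoms: the
# feedback record itself is the image.
```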

Ironically, the importance of their work was not immediately recognized: Rohrer and Binnig’s first scientific paper on the new tool was rejected for publication on the grounds that it was “not interesting enough.” Today, STM conferences draw interested researchers by the hundreds from around the World. In 1986—quite promptly as these things go—Binnig and Rohrer were awarded a Nobel Prize. The Swedish Academy explained its reasoning: “The scanning tunneling microscope is completely new and we have so far seen only the beginning of its development. It is, however, clear that entirely new fields are opening up for the study of matter.” STMs are no longer exotic: Digital Instruments of Santa Barbara, California, sells its system (the Nanoscope®) by mail with an atomic-resolution-or-your-money-back guarantee. Within three years of their commercial introduction, hundreds of STMs had been purchased. How does an AFM work? The related atomic force microscope is even simpler in concept: A sharp probe is dragged over the surface, pressed down gently by a spring. The instrument senses motions in the spring (usually optically), and the spring moves up and down whenever the tip is dragged over an atom on the surface. The tip “feels” the surface just like a fingertip in the simulated molecular World. The AFM was invented by Binnig, Quate, and Gerber at Stanford University and IBM in San Jose in 1985. After the success of the STM, the importance of the AFM was immediately recognized. Among other advantages, it works with nonconducting materials. AFM-based devices might be used as molecular manipulators in developing molecular nanotechnology. (Note that AFMs and STMs are not quite as easy to use as these descriptions might suggest. For example, a bad tip or a bad surface can prevent atomic resolution, and pounding on the table is not recommended when such sensitive instruments are in operation. Further, scientists often have trouble deciding just what they are seeing, even when they get a good image.) #RandolphHarris 13 of 21
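The AFM reduces, in essence, to Hooke’s law: force equals spring constant times deflection. A sketch follows, with numbers that are typical orders of magnitude rather than any real calibration.

```python
# The AFM idea reduced to Hooke's law: force on the tip equals the
# cantilever's spring constant times its measured deflection.
# The constant below is a typical order of magnitude, not a
# calibration of any real instrument.

SPRING_CONSTANT = 0.1          # N/m, a soft contact-mode cantilever

def tip_force(deflection_nm):
    """Force in nanonewtons for a deflection in nanometres."""
    return SPRING_CONSTANT * deflection_nm   # N/m * nm = nN

for d in (0.1, 0.5, 2.0):      # deflections as the tip rides over atoms
    print(f"deflection {d} nm  ->  force {tip_force(d):.2f} nN")
```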

Speaking of images, the bias toward the peaks of television content is possibly most tragic when it comes to news. Since much of life is now removed from our direct experience, the news that we get from afar becomes our total information on the forces that shape and move our lives. That makes the distortions in it a very serious matter. When Walter Cronkite says, “And that’s the way it is,” he is surely aware that that’s the way it is only within those events, only in those aspects that fit the standards of “good television.” Presenting events exactly as they occur does not fit with the requisites of television news….Given the requirement that a network news story have definite order, time and logic, it would be insufficient in most cases to record from beginning to end the natural sequence of events, with all the digression, confusions and inconsistencies that more often than not constitute reality. Cameramen and women seek out the most action-packed moments; and editors then further concentrate the action. Even when an event is characterized by an unexpectedly low degree of activity, television can create the illusion of great activity. The relatively unenthusiastic reception General MacArthur received in Chicago during his homecoming welcome in 1951 thus appeared to be a massive and frenetic reception on television because all the moments of action were concentrated together. In collapsing the time frame of events and concentrating the action into a continuous flow, television news tends to heighten the excitement of any group or other phenomena it pictures, to the neglect of the more vapid and humdrum elements. Their jobs are to cut out all the dead wood and dull moments. The procedure involves routinely eliminating the intervals in which little of visual interest occurs, and compressing the remaining fragments into one continuous montage of unceasing visual action. #RandolphHarris 14 of 21

For instance, an attempt by the SDS faction at Columbia University to block the registration of students in September of 1968 involved, according to my observations, a few speeches by SDS leaders, hours of milling about, in which the protest more or less dissipated for lack of interest, and about one minute of violence when five SDS leaders attempted to push their way past two campus patrolmen. The hours of film taken that day by an NBC camera crew recorded various views of the crowd from 9.00 am until the violence at about 2.00 pm, and the minute or so of violent confrontation. However, when the happening was reduced to a two-minute news story for the NBC Evening News, the editors routinely retained the violent scenes, building up to them with quick cuts of speeches and crowd scenes. The process of distilling action from preponderantly inactive scenes was not perceived as any sort of distortion by any of the editors interviewed. On the contrary, most of them considered it to be the accepted function of editing; as one chief editor observed, it was “what we are really paid for.” The results of the bias toward highlighted news content were put even more succinctly by John Birt in TV Guide (August 9, 1975). He points out that some of the elements of news fit the needs of the medium more directly than others, and the result is a serious “bias in understanding…trying to come to grips with the often-bewildering complexity of modern problems…is a formidable task, even without trying to put the result on television; and the failure rate is high. The realities one is seeking are abstract—macroeconomic mechanisms, political philosophies, international strategies—and cannot be directly televised like a battle scene or a demonstration.” #RandolphHarris 15 of 21

Even when an effort is made to cover subtle or complex material, the decision is made to choose only the most televisable elements. So a specific case of, say, a starving family will be chosen, rather than an overall look at its cause, which is more complicated and less televisable. The latter runs the risk of being boring. A well-made report on a famine, or even on one starving family in Appalachia, will be more watchable than a report on the World food problem. A program on living conditions in Watts, Harlem, or Midtown Sacramento will be more diverting than a report on housing policy. I believe that the various forms and techniques of TV journalism can all too easily conspire together to create a bias against the audience’s understanding of the society in which it lives. The problem could be solved by lengthening the time devoted to the main stories of the day, so that a more comprehensive understanding of them might develop. Of course this would result in giving less time to the stories that are not the “main” ones, and so the recommendation seems to contradict earlier remarks. In effect, it would leave some news highlighted to an even greater extent. Would that solve the problem? Obviously not. It would leave people even more transfixed by the out-of-context information which is chosen. The reason for the contradiction is that we cannot bear to face the implications of what is happening. To face the inevitable drift of this reasoning leads straight to the observation that news, like all other information on television, is inevitably and irrevocably biased away from some forms of content and toward others. If this is true, then we really do not know which end is up and which is down. We take things as they come. The evolutionary approach is based on a simple principle: whatever is successful is likely to appear more often in the future. The mechanism can vary. In classical Darwinian evolution, the mechanism is natural selection based upon differential survival and reproduction. #RandolphHarris 16 of 21
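Stripped to essentials, that principle can be simulated in a few lines of replicator arithmetic. The strategies and payoffs below are arbitrary illustrations, not a model of any particular arena.

```python
# The evolutionary principle in miniature: strategies that score
# well this round make up a larger share of the population next
# round. The payoffs are arbitrary illustrative numbers.

population = {"hawk": 0.5, "dove": 0.5}      # initial shares
payoff = {"hawk": 1.0, "dove": 1.5}          # dove does better here

for generation in range(5):
    avg = sum(population[s] * payoff[s] for s in population)
    # Replicator rule: share grows in proportion to relative success.
    population = {s: population[s] * payoff[s] / avg for s in population}
    print(generation, {s: round(population[s], 3) for s in population})
# The more successful strategy steadily crowds out the other.
```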

In Congress, the mechanism can be an increased chance of reelection for those members who are effective in delivering legislation and services for their constituency. In the business World, the mechanism can be the avoidance of bankruptcy by a profitable company. However, the evolutionary mechanism need not be a question of life and death. With intelligent individuals, a successful strategy can appear more often in the future because other players convert to it. The conversion can be based on more or less blind imitation of the successful players, or it can be based on a more or less informed process of learning. The evolutionary process needs more than differential growth of the successful. In order to go very far it also needs a source of variety—of new things being tried. In the genetics of biology, this variety is provided by mutation and by a reshuffling of genes with each generation. In social processes, the variety can be introduced by the “trial” in “trial and error” learning. This kind of learning might not reflect a high degree of intelligence. A new pattern of behaviour might be undertaken simply as a random variant of an old pattern of behaviour, or the new strategy could be deliberately constructed on the basis of prior experience and a theory about what is likely to work best in the future. The ability to detect radiation has been bestowed on a group of experimental cats, each of which is wired into a portable, miniature Geiger counter that telemeters electrical impulses directly to the feline brain via implanted electrodes. The square-wave electrical impulses are similar to normal nervous impulses. They are transmitted to a portion of the brain that is associated with fear reactions, causing cats to shy away from radioactive sources. It is reasonable to speculate that in the near future the stimoceiver [an instrument for radio transmission and reception of electrical messages to and from the brain] may provide the essential link from man to computer to man, with a reciprocal feedback between neurons and instruments which represents a new orientation for the medical control of neurophysiological functions. For example, it is conceivable that the localized abnormal activity which announces the imminence of an epileptic attack could be picked up by implanted electrodes, telemetered to a distant instrument room, tape-recorded, and analyzed by a computer capable of recognizing abnormal electrical patterns. #RandolphHarris 17 of 21

Identification of the specific electrical disturbance could trigger the emission of radio signals to activate the patient’s stimoceiver and apply an electrical stimulation to a determined inhibitory area of the brain, thus blocking the onset of the convulsive episode. By the turn of the century, it is likely that every major organ except the brain and the central nervous system will have artificial replacements. Scientists are working on replacement parts for organs such as the pancreas, heart, ear, eyes and more. The concept of total prosthesis seems plausible. Creating an artificial human brain, however, is a little more difficult. Some say it will never happen. Since the first Artificial Intelligence experiments, attempts to mimic complex human neural activity with the crudities of current electronic hardware have been plagued with challenging problems. Breakthroughs in this line of research might take place through electro-biological engineering or hybridization of computer architecture with molecular engineering. The U.S. Defense Advanced Research Projects Agency is working on what is known as the Molecular Electronic Device (MED) or “biochip.” There are several designs for these organic microprocessors, but the essential idea is to use protein molecules or synthetic organic molecules as computing elements to store information or act as switches with the application of voltage. Signal flow in this case would be by sodium or calcium ions. Others feel that artificial proteins can be constructed to carry signals by electron flow. Still another idea is to “metalize” dead neuronal tissue to produce processing devices. The ultimate scenario is to develop a complete genetic code for the computer that would function as a virus does, but instead of producing more virus, it would assemble a fully operational computer inside a cell. That is nanotechnology at work. #RandolphHarris 18 of 21
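The closed loop imagined here (monitor, detect, radio back a blocking stimulation) can be caricatured in code. Every threshold and signal below is invented; this is a schematic of the control loop, not clinical logic.

```python
# A schematic of the speculated closed loop: monitor the telemetered
# EEG, flag the abnormal pattern said to precede a seizure, and
# radio back a blocking stimulation. All values are invented.

ABNORMAL_AMPLITUDE = 3.0     # hypothetical detection threshold

def detect_preseizure(eeg_window):
    """Crude stand-in for pattern recognition: amplitude spike."""
    return max(abs(x) for x in eeg_window) > ABNORMAL_AMPLITUDE

def stimoceiver_command(detected):
    return "STIMULATE_INHIBITORY_AREA" if detected else "IDLE"

telemetry = [
    [0.4, -0.6, 0.5, -0.3],      # normal background rhythm
    [0.5, 3.8, -4.1, 2.9],       # spiking: imminent episode
]
for window in telemetry:
    print(stimoceiver_command(detect_preseizure(window)))
```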

The very notion that computer chips could be “grown” or that living and inert matter could be fused together on a molecular level promises surprises ahead for those with orthodox notions of mind and body. As machines become more and more responsive to human internal experiences (from the desire to move a limb to even rage or pleasures of the flesh), we will probably reach a stage at which every subtle nuance of imagination and consciousness can be realized, stored and displayed through machinery. And at some point in the future it will be possible to “will” events to occur. New twists in the evolution of the brain might be brought about through our own manipulation of the elements of biological science. If we seriously consider that hand and tool must have come into existence together, then it follows that the tool’s transformation into an “organism” capable of monitoring and responding to our biological functions transforms us as well. The human body can be seen as a complex machine, but it is so much more. Bureaucracy is an attempt to rationalize the flow of information, to make its use efficient to the highest degree by eliminating information that diverts attention from the problem at hand. There is a prime example of such bureaucratic rationalization in the 1884 decision to organize time, on a Worldwide basis, into twenty-four time zones. Prior to this decision, towns only a mile or two apart could and did differ on what time of day it was, which made the operation of railroads and other businesses unnecessarily complex. By simply ignoring the fact that solar time differs at each node of a transportation system, bureaucracy eliminated a problem of information chaos, much to the satisfaction of most people. However, not everyone was satisfied. It must be noted that the idea of “God’s own time” had to be considered irrelevant. This is important to say, because, in attempting to make the most rational use of information, bureaucracy ignores all information and ideas that do not contribute to efficiency. The idea of God’s time made no such contribution. #RandolphHarris 19 of 21
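The 1884 rationalization reduces to simple arithmetic: the Earth turns 360 degrees in 24 hours, so each nominal zone spans 15 degrees of longitude. A sketch follows (real zone boundaries follow politics, not meridians, so the output is the idealized offset only).

```python
# The twenty-four-zone scheme as arithmetic: 360 degrees / 24 hours
# gives 15 degrees of longitude per nominal zone. Real boundaries
# are political; this computes only the idealized offset.

def nominal_utc_offset(longitude_deg):
    """Idealized whole-hour offset for a given longitude."""
    return round(longitude_deg / 15.0)

for city, lon in [("Greenwich", 0.0), ("Paris", 2.35),
                  ("New York", -74.0), ("Tokyo", 139.7)]:
    print(f"{city:9s} lon {lon:+7.2f}  ->  UTC{nominal_utc_offset(lon):+d}")
```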

A university is a bureaucracy, and successful ones are the proof that society can be devoted to the well-being of all, without stunting human potential or imprisoning the mind to the goals of the regime. The deepest intellectual weakness of democracy is its lack of taste or gift for the theoretical life. However, the issue is not whether we possess intelligence but whether we are adept at reflection of the broadest and deepest kind. We need constant reminders of our deficiency, now more than in the past. The great European universities used to act as our intellectual conscience, but with their decline, we are on our own. Nothing prevents us from thinking too well of ourselves. It is necessary that there be an unpopular institution in our midst that sets clarity above well-being or compassion, that resists our powerful urges and temptations, that is free of all snobbism but has standards. Those standards are in the first place accessible to us from the best of the past, although they must be such as to admit of the new, if it actually meets those standards. If nothing new does meet them, it is not a disaster. The ages of great spiritual fertility are rare and provide nourishment for other, less fertile ones. What would be a disaster would be to lose the inspiration of those ages and have nothing to replace it with. This would make it even more unlikely that the rarest talents could find expression among us. The Bible and Homer exercised their influence for thousands of years, preserved in the mainstream or in backwaters, hardly ever being surpassed in power, without becoming irrelevant because they did not suit the temper of the times or the spirit of a regime. They provided the way out as well as the model for reform. In the creation of man, the two urges are set in opposition to each other. The Creator gives them to man as His two servants which, however, can only accomplish their service in genuine collaboration. #RandolphHarris 20 of 21

The “evil urge” is no less necessary than its companion, indeed even more necessary than it, for without it man would woo no woman and beget no children, build no house and engage in no economic activity, for it is true that all travail and all skill in work is the rivalry of a man with his neighbour. Hence this urge is called the yeast in the dough, the ferment placed in the soul by God, without which the human dough does not rise. Thus, a man’s status is necessarily bound up with the volume of “yeast” within him; whoever is greater than another, his urge is greater than the other’s. Wonderful civilization? I will not object to the adjective—it rightly describes it—but I do object to the large and complacent admiration which it implies. By all accounts—yours in chief, Excellency—the pure and sweet and unenlightened and unsordid civilization of Eden was worth a thousand millions of it. What is a civilization, rightly considered? Morally, it is the evil passions repressed, the level of conduct raised; spiritually, idols cast down, God enthroned; materially, bread and fair treatment for the greatest number. That is the common formula, the common definition; everybody accepts it and is satisfied with it. Our civilization is wonderful, in certain spectacular and meretricious ways; wonderful in scientific marvels and inventive miracles; wonderful in material inflation, which it calls advancement, progress, and other pet names; wonderful in its spying-out of the deep secrets of Nature—and its vanquishment of her stubborn laws; wonderful in its extraordinary financial and commercial achievements; wonderful in its hunger for money, and in its indifference as to how it is acquired; wonderful in the hitherto undreamed of magnitude of its private fortunes and the prodigal fashion in which they are given away to institutions devoted to the public culture. “And it came to pass that there were sorceries, and witchcrafts, and magics; and the power of the evil one was wrought upon all the face of the land, even unto the fulfilling of all the words of Abinadi, and also Samuel, the Lamanite,” reports Mormon 1:19. #RandolphHarris 21 of 21


Cresleigh Homes

The design of #Havenwood Model 3 is ideal for anyone who loves to entertain – and we know you do! 🥂 The open floor plan allows for maximum seating and mingling space, and it’s just waiting for a brilliant host/hostess to set out the premium cranberry juice 🍷 and charcuterie! 🧀


Lot 75 is up for sale and all ready for you! Contact us with questions and let’s get the ball rolling – don’t let summer 2022 🌞 pass you by without making moves to your brand new digs at #CresleighHomes!

Explore our exciting collection of ranch-style floor plans, ready to personalize with your choice of finishes, flooring, faucets and home technology. https://cresleigh.com/havenwood/

Provided in Cresleigh Homes are spacious great rooms and open dining areas that flow into spacious kitchens with large center islands and walk-in pantries.

Residence Three is the largest of the single-story homes offered in Cresleigh Havenwood. At 2,827 square feet, you’ll be hard-pressed to find a contemporary floor plan that offers this much space. There are four bedrooms, two and one-half bathrooms, and a three-car garage. https://cresleigh.com/havenwood/residence-three/

#CresleighHomes