
When it comes to extinction: not only can you do worse, you can go completely out of business. We can illustrate the conditions that favor encouraging variety by considering the striking case of the Linux computer operating system and the method used to organize the work of its developers. The method is known as open source software development. This form of software development has been thrown into the limelight by the spectacular growth of Linux, which has become, in certain key areas of application, a serious competitor to operating systems developed and sold by major corporations such as Microsoft, Sun, and IBM. This is a very surprising turn of events since Linux is given away free by its developers. There is a natural presumption that free software cannot be as reliable as for-profit software. Yet it is precisely for situations demanding high reliability that Linux has found its strongest support. The surprise deepens with the observation that Linux is not only free but also the handiwork of an enormous, worldwide cadre of unpaid volunteers. By some estimates, Linux is the result of contributions from many thousands of programmers. A computer operating system is one of the most intricate of human creations. This number of cooks would seem more than sufficient to spoil the soup. How could thousands of scattered volunteers make an operating system that is more reliable (and faster running) than those created by dozens, or hundreds, of highly talented programmers working full-time for renowned corporations? #RandolphHarris 1 of 19

Considering the development of Linux as a Complex Adaptive System casts light on some important components of the explanation. We can begin by pointing out that Linux is not the only example of the open source approach to software development. There are many earlier examples, such as the scripting language Perl and the e-mail server sendmail. The most widely deployed software for serving up requested World Wide Web pages, a system known as Apache, is also the product of volunteers working together in an open source framework. What all the examples have in common is the free availability of the source code, the human readable computer instructions that specify the program. That arrangement provides the generic label for this approach to team software creation: open source software development. The free access to the source code of Linux means that any programmer with sufficient motivation can craft changes to the code, creating a new version of the program. This is not possible in traditional development with proprietary code. From a Complex Adaptive System point of view, the free availability of source code creates the possibility for variety in the population of operating systems. In successful open source cases such as Linux, that variety has been harnessed to yield a very effective result, although many observers expected chaos to result from the rapid injection of many potentially incompatible variants. #RandolphHarris 2 of 19

Our framework points to several structural arrangements that work to make the added exploration beneficial, averting the prospect of death by eternal boiling. In our terms, when a programmer modifies the source code of Linux, this activity is an endogenously triggered recombination. The trigger is usually an observation of some particular kind of poor performance by the existing standard version of the operating system. The affected user may make an electronic request for help from the large Linux community. Interested individuals respond by suggesting fixes. These small pieces of new code are recombined with the rest of the standard version to produce new variant versions. A period of testing and discussion of the performance of the variants follows. Eventually the best-performing variant is accepted by the small team of key Linux developers, who incorporate the new code into a subsequent standard version of Linux. When open source development prospers, a central reason seems to be that “given enough eyeballs, all bugs are shallow.” For Linux, there are certainly enough volunteers, and communication about triggering problems and proposed alternatives is precise enough that multiple plausible variants are routinely generated as possible solutions to most problems that bother users. In addition, testing of alternatives is reliable enough that the code that wins out is generally very good code, with unwanted side effects being rare. Thus the variety made possible by the free availability of the source code is marshaled to produce a rapid rate of improvement in overall quality. #RandolphHarris 3 of 19
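The cycle just described, in which many volunteers propose variants and a small core team folds only the best-tested one into the next standard release, can be caricatured in a few lines. This is a toy model for illustration only, not a description of actual Linux development: patch quality is a random draw, the distribution parameters are invented, and "maintainers" simply keep the best variant each cycle.

```python
import random

def propose_variants(base_quality, n_volunteers, rng):
    """Each volunteer proposes a patch; most tweaks are small, some regress.
    The Gaussian parameters here are arbitrary, chosen only for illustration."""
    return [base_quality + rng.gauss(0.1, 0.5) for _ in range(n_volunteers)]

def release_cycle(base_quality, n_volunteers, rng):
    """Maintainers test all variants and fold the best into the next standard.
    Keeping base_quality in the pool means a release never regresses."""
    variants = propose_variants(base_quality, n_volunteers, rng)
    return max(variants + [base_quality])

rng = random.Random(42)
quality = 0.0
for _ in range(20):
    quality = release_cycle(quality, n_volunteers=100, rng=rng)
print(round(quality, 2))
```

Rerunning with `n_volunteers=2` makes the climb far slower: in this caricature, "enough eyeballs" matters because the best of many random proposals is reliably better than the best of a few.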

By inquiring a bit further into how this is accomplished, we can uncover some clues about when an open source approach is likely to work well—and when it is not. A crucial fact is that there are two types of Linux versions: standard and variant. The few central managers of the Linux community, led by the originator of the operating system, Linus Torvalds, retain the right to label versions of the system as official releases. Each new official release creates another “standard Linux,” and millions of digitally perfect copies are made of it. This control over the definition of the next generation of the operating system is strikingly analogous to a biological mechanism seen in the emergence of multicellular organisms: sequestration of the germline. This is a restriction of reproductive activity to a few specialized cells, while the vast majority of cells in the organism no longer participate in creating the next generation. In both cases, limiting “reproduction” to a tiny fraction of all the agents reduces the chaotic inconsistency that would ensue if all variants had equal opportunity to shape the future. In the Linux case, the centralized control of changes in the standard code makes higher levels of variety in the proposed changes sustainable, so that the “law of sufficient eyeballs” can come advantageously into play. In the biological case, the restriction functions to alter the evolutionary “incentives” of cells making up the organism. Over succeeding generations their strategies will be far more likely to be those that let them prosper as a “team” rather than those that benefit individual cells at the expense of others. Analogous incentives are created for the programmers in the Linux case. #RandolphHarris 4 of 19

Numerous experiments are being undertaken in an effort to imitate the striking success that the open source approach achieved in the Linux case. As usual, we do not claim to be able to predict the success and failure of particular efforts. However, Complex Adaptive Systems principles do suggest a number of key questions to ask when contemplating an open source software project. Several of these come from the preceding section on whether to encourage variety. As we have seen, variety is the engine of rapid quality improvement in an open source initiative. We can see that Linux has at least three, and perhaps all four, of the conditions favoring exploration that we outlined earlier. Problems that are long-term or widespread. In contrast to computer hardware and applications programs (such as Web browsers), operating systems are among the longest living elements of the computational world. Unix—of which Linux is a free version—dates back to 1969. It runs on mainframe and minicomputer architectures that long predate the microcomputers that now cover the Earth. Thus, an improvement to an operating system is likely to bear fruit over a very long period (as time is measured in the strange universe of computing, with its Moore’s law of doubled computer power every eighteen months). Another example of open source development, the Apache Web server, also seems to occupy a functional niche where improvements can be expected to have long service. In addition, the gains from any improvement in a standard version of an operating system can benefit thousands or even millions of users, providing widespread benefits. #RandolphHarris 5 of 19

Problems that provide fast, reliable feedback. Linux exhibits this characteristic as well. In its typical role in server environments, its features are exercised at very high rates, and defects become evident quickly. Moreover, open source distribution means that every contributor of a proposed variant can make a completely functional new version that can be tested locally. This further increases the rate of feedback. And finally, the quality of proposed variants can be assessed with relatively high reliability. Speed of operation and resistance to crashes are highly valued criteria across the entire community of Linux developers. Disagreements do occur over how these should be measured, and other criteria are also important. However, when compared with other software areas, such as user interface design, the appropriate performance metrics are relatively clear. Problems with low risk of catastrophe from exploration. Various parts of a software system have high levels of interdependence. When there is a premium on speed, as there is for many operating systems, there is a strong temptation to increase even further the interdependencies among modules. This can create a substantial risk of catastrophe. However, Unix, from which Linux derives, has long been a partial exception to this tendency. In the Unix/Linux culture there is a well-developed philosophy of modular isolation. A key component, called the kernel, is optimized for speed. However, the numerous other components are expected to honor a different set of constraints. There, interdependence among components is governed by strict principles of modularization that severely limit side effects that any activity might have on other activities. #RandolphHarris 6 of 19

Speed is also important outside the kernel but has to be achieved within the architectural constraints that give primacy to crash resistance, thus lowering the chances of catastrophic consequences from exploration. Problems that have looming disasters. This last factor favoring exploration is not a property of Linux open software development but rather of the motivation of some of the developers. Among those who have made major contributions to Linux are many who feared the extinction of the Unix operating system family, in which they have invested their expertise. They also feared the rising hegemony of operating systems from Microsoft Corporation. For them, joining a relatively high-risk, exploration-maximizing software project may have been an attractive alternative to domination by what they often call “The Beast from Redmond.” Taken together, our four conditions show Linux to be a development project for which it is highly promising to strongly encourage variety. It does not follow that open availability of source code is a form of magic that will cause all software projects to prosper as Linux has. Indeed, our analysis suggests that in order for the decentralized generation of proposals to be effective for Linux, several other conditions were important. In particular, Linux development benefited from the ability to identify specific problems, make accurate copies of the current system with only deliberately introduced changes, evaluate the effectiveness of proposed solutions, and centrally control the choice of which proposals are implemented as changes in the standard version. It remains to be seen just how widely applicable the decentralized generation of alternatives can be. However, open source software development clearly demonstrates that even very large and highly structured systems, like Linux, can benefit from the encouragement of variety. #RandolphHarris 7 of 19

Do you know someone you would like to change and regulate and improve? Good! That is fine. I am all in favor of it. However, why not begin on yourself? From a purely selfish standpoint, that is a lot more profitable than trying to improve others—yes, and a lot less dangerous. If you and I want to stir up a resentment tomorrow that may rankle across the decades and endure until death, just let us indulge in a little stinging criticism—no matter how certain we are that it is justified. When dealing with people, let us remember we are not dealing with creatures of logic. We are dealing with creatures of emotion, creatures bristling with prejudices and motivated by pride and vanity. Bitter criticism caused the sensitive Thomas Hardy, one of the finest novelists ever to enrich English literature, to give up forever the writing of fiction. Criticism drove Thomas Chatterton, the English poet, to suicide. Benjamin Franklin, tactless in his youth, became so diplomatic, so adroit at handling people, that he was made American Ambassador to France. The secret of his success? “I will speak ill of no man,” he said, “…and speak all the good I know of everybody.” Any fool can criticize, condemn and complain—and most fools do. However, it takes character and self-control to be understanding and forgiving. “A great man shows his greatness,” said Carlyle, “by the way he treats little men.” Often parents are tempted to criticize their children. You would expect me to say “do not.” However, I will not. Just do not be in the habit of finding fault, or reprimanding—your children will consider this a reward. #RandolphHarris 8 of 19

Make sure your children know that sometimes you will criticize them, and this does not mean you do not love them, but that a lot was expected of you as a child and so a lot will be expected of them. A child needs to be assured that there is so much good and fine and true in their character. And that their little heart is as big as the moon. Then give them examples of their good behavior. Instead of condemning people, let us try to understand them. Let us try to figure out why they do what they do. That is a lot more profitable and intriguing than criticism; and it breeds sympathy, tolerance and kindness. To know all is to forgive all. God Himself does not propose to judge man until the end of his days. Why should you and I? When it comes to punishment, tit-for-tat is not a good form of punishment. It is nice in that it never initiates cheating. It is provocable, that is, it never lets cheating go unpunished. And it is forgiving, because it does not hold a grudge for too long and is willing to restore cooperation. However, tit-for-tat is a flawed strategy. The slightest possibility of misperception results in a complete breakdown in the success of tit-for-tat. The long-standing feuds between the Hatfields and the McCoys or Mark Twain’s Grangerfords and Shepherdsons offer more examples of how tit-for-tat behavior leads to mutual loss. Feudists on either side are not willing to end the feud until they consider themselves even. However, in a continuing attempt to get even, they end up knocking each other further and further down. Eventually they end up dead even. Rarely is there any hope of going back and solving the dispute at its origin, for once begun, it takes on a life of its own. #RandolphHarris 9 of 19

What tit-for-tat lacks is a way of saying “Enough is enough.” It is dangerous to apply this simple rule in situations in which misperceptions are endemic. Tit-for-tat is too easily provoked. You should be more forgiving when a defection seems to be a mistake rather than the rule. Even if the defection was intentional, after a long-enough cycle of punishments it may still be time to call it quits and try reestablishing cooperation. At the same time, you do not want to be too forgiving and risk exploitation. When the probability of a misperception is small, it will take a lot longer for the trouble to arise. However, then once a mistake happens, it will also take a lot longer to clear it up. The possibility of misperception means that you have to be more forgiving, but not forgetting, than simple tit-for-tat. This is true when there is a presumption that the chance of a misperception is small. It pays to be more forgiving up to a point. Once the probability of mistakes gets too high, the possibility of maintaining cooperation breaks down. The large chance of misunderstanding makes it impossible to send clear messages through your actions. Without an ability to communicate through deeds, any hope for cooperation disappears. A 50 percent chance of a misperception is the worst possible case. If misperceptions were certain to occur, you would interpret every message as its opposite, and there would be no misunderstandings. A stock forecaster whose advice is always dead wrong is as good a predictor as one who is always right. You just have to know how to decode the forecast. #RandolphHarris 10 of 19
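The fragility of tit-for-tat under misperception is easy to see in a small simulation. This is a minimal sketch, assuming the standard prisoner's-dilemma payoffs (temptation 5, reward 3, punishment 1, sucker 0); the "generous" variant that forgives a defection one time in three is an invented illustration of the "more forgiving" rule discussed above, not a strategy named in the text.

```python
import random

# Assumed standard prisoner's-dilemma payoffs: (row player, column player)
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat_a, strat_b, rounds, noise, rng):
    """Average per-round payoffs when each intended move is flipped with prob noise."""
    hist_a, hist_b = [], []
    total_a = total_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_b, rng)
        move_b = strat_b(hist_a, rng)
        if rng.random() < noise:  # misperception: the move comes out wrong
            move_a = "D" if move_a == "C" else "C"
        if rng.random() < noise:
            move_b = "D" if move_b == "C" else "C"
        pa, pb = PAYOFF[(move_a, move_b)]
        total_a += pa
        total_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return total_a / rounds, total_b / rounds

def tit_for_tat(opp_history, rng):
    """Cooperate first, then echo the opponent's last move."""
    return opp_history[-1] if opp_history else "C"

def generous_tft(opp_history, rng):
    """Like tit-for-tat, but forgive a defection one time in three (assumed rate)."""
    if opp_history and opp_history[-1] == "D" and rng.random() > 1 / 3:
        return "D"
    return "C"

rng = random.Random(1)
tft_score, _ = play(tit_for_tat, tit_for_tat, 10_000, noise=0.05, rng=rng)
gen_score, _ = play(generous_tft, generous_tft, 10_000, noise=0.05, rng=rng)
print(tft_score, gen_score)
```

With even 5 percent noise, a single flipped move locks two tit-for-tat players into alternating retaliation, dragging their average payoff well below mutual cooperation; the generous pair recovers from each echo and scores noticeably higher.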

With this in mind, we look for a way out of the dilemma when there is a chance of misperception, but not too big of a chance. The disciplined ordering of personal front is one way, then, in which the individual is obliged to express one’s aliveness to those about him or her. Another means is the readiness with which one attends to new stimuli in the situation and the alacrity with which one responds to them with body movements. I think that the individual so generally maintains a proper motor level in situations that this is one type of propriety that is very difficult indeed to become aware of. Here again mental wards help us. For example, a common symptom displayed by persons diagnosed as schizophrenic consists of very slow body movements as shown, say, during hallway pacing. While thus engaged, the patient may respond to a question from an attendant by turning one’s head slowly in the direction of the voice, and this only by moving one’s whole trunk, as if one’s neck were completely stiff, while keeping one’s face immobile. (This kind of conduct is somewhat similar to the kind that is popularly thought to occur in sleepwalking, and calls forth a similar response; namely, the feeling of someone being in the situation physically but not fully present for purposes of interaction.) Bleuler has given us fine descriptions of extremes of this deadness to the situation, as he has with so many schizophrenic symptoms, pointing to the inward emigration that presumably occurs at these times. #RandolphHarris 11 of 19

Autism is also manifested by many patients externally. (Naturally, this is, as a rule, unintentional.) Not only do they not concern themselves with anything around them, but they sit around with faces constantly averted, looking at a blank wall; or they shut off their sensory portals by drawing a skirt or bed clothes over their heads. Indeed, formerly, when the patients were mostly abandoned to their own devices, they could often be found in bent-over, squatting positions, an indication that they were trying to restrict as much as possible of the sensory surface area of their skin. This lack of presence may be nicely demonstrated in establishments that are not medical but are none the less similar in many ways to mental hospitals: About the prison yard and the shops one sees inmates for whom smiles, small talk, alertness, and attention to the environment come easily. One also sees about half as many men who seldom smile, who seldom talk, who stumble as they walk in lines, whose errors in their tasks cause small concern, and who respond normally to social stimuli only when a stimulus is strong or different. Status or social approbation is as nothing. It is reverie-plus that controls them. In general, then, if the individual is to be in the situation in full social capacity, one will be required to maintain a certain level of alertness as evidence of one’s availability for potential stimuli, and some orderliness and organization of one’s personal appearance as evidence that one is alive to the gathering one is in. A problem for analysis, of course, is to go on to isolate analytically the various ways in which insufficient presence may be manifested. #RandolphHarris 12 of 19

Not only did the economic ascent of China, India and, less noticeably, Brazil help drive oil prices to record highs, but so did the war in Ukraine and inflation; prices were seven times as high as deflated 2020 prices. That makes alternatives to oil more competitive. It also calls into question how long present oil reserves can last. No one can predict when the last barrel of crude will be pumped, but we already find planners in big oil companies preparing strategies for transition to a post-petroleum economy. General Motors hopes to be the first company to sell one million hydrogen-based fuel-cell cars. And if not GM, why not Toyota? Or China’s own up-and-coming auto industry? Unless governments in the Middle East start now to plan for post-oil, knowledge-intensive service economies, the exit of huge amounts of wealth from the region could well spark even more terror as poverty and hopelessness deepen. Every fuel-cell car introduced elsewhere in the world, every nuclear plant, every solar panel, every windmill, every new source and form of non-oil energy will hasten the demise of the existing business—and religious—elites in the Middle East. Such a collapse could erode Saudi financial resources and further undermine its influence within world Islam—shifting the balance between Shia, Sunni and other groups. The Saudi regime’s vast petro-riches have been used to promote Wahabism, a particularly stringent brand of Islam, all over the world. The funds could have been used to educate the younger generation of Muslims in economically valuable skills. #RandolphHarris 13 of 19

Instead, they have funded strictly religious schools that produced the Taliban in Afghanistan and jobless, hopeless, angry youth across much of the globe, including terrorists now trying to overthrow the Saudi regime itself. To many outsiders, it appears that Islam is already at war with itself. In that war, the enemy is not an anti-Islamic, imperial United States of America or some other non-Islamic nation—or the West. It is the greed, provincialism and myopia with which the leaders of so many Middle East nations have ruled for so long, and their failure to use oil money to ride the Third Wave to a better future. The space near Earth is being polluted with small orbiting projectiles, some as small as a pin. Most of the debris is floating fragments of discarded rocket stages, but it also includes gloves and cameras dropped by astronauts. This is not a problem for life on Earth, but it is a problem as life begins its historic spread beyond Earth—the first great expansion since the greening of the continents, long ago. Orbiting objects travel much faster than rifle bullets, and energy increases as the square of speed. Small fragments of debris in space can do tremendous damage to a spacecraft, and worse—their impact on a spacecraft can blast loose yet more debris. Each fragment is potentially deadly to a spacefaring human crossing its path. Today, the tiny fraction of space that is near Earth is increasingly cluttered. #RandolphHarris 14 of 19
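The point that energy increases as the square of speed is just the kinetic-energy formula. A quick sketch makes the comparison concrete; the speeds below (roughly 7,800 m/s for low Earth orbit, 900 m/s for a fast rifle bullet) are ballpark figures added for illustration, not numbers from the text.

```python
def kinetic_energy_joules(mass_kg, speed_m_s):
    """KE = 1/2 * m * v^2, so energy grows with the square of speed."""
    return 0.5 * mass_kg * speed_m_s ** 2

# One gram of debris at orbital speed versus the same gram at rifle-bullet speed
debris = kinetic_energy_joules(0.001, 7800)   # ~30,420 J
bullet = kinetic_energy_joules(0.001, 900)    # ~405 J
print(debris / bullet)  # ~75x the energy: (7800/900)^2
```

Because the ratio is the square of the speed ratio, going about 8.7 times faster packs about 75 times the punch, which is why even a postage-stamp-sized fragment matters.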

This litter needs to be picked up. With molecular manufacturing, it will be possible to build small spacecraft able to maneuver from orbit to orbit in space, picking up one piece of debris after another. Small spacecraft are needed, since it makes no sense to send a shuttle after a scrap of metal the size of a postage stamp. With these devices, we can clean the skies and keep them hospitable to life. We have spoken of waste that just needs molecular changes to make it harmless, and toxic elements that came from the ground, but nuclear technology has created a third kind of waste. It has converted the slow, mild radioactivity of uranium into the fast, intense radioactivity of newly created nuclei, the products of fission and neutron bombardment. No molecular change can make them harmless, and these materials did not come from the ground. The products of molecular manufacturing could help with conventional approaches to dealing with nuclear waste, helping to store it in the most stable, reliable forms possible—but there is a more radical solution. Even before the era of the nuclear reactor and the nuclear bomb, experimenters made artificially radioactive elements by accelerating particles and slamming them into nonradioactive targets. These particles traveled fast enough to penetrate the interior of an atom and reach the nucleus, joining it or breaking it apart. #RandolphHarris 15 of 19

The entire Earth is made of fallout from nuclear reactions in ancient stars. Its radioactivity is low because so much time has passed—many half-lives, for most radioactive nuclei. “Kicking” these stable nuclei changes them, often into a radioactive state. However, kicking a radioactive nucleus has a certain chance of turning it into a stable one, destroying the radioactivity. By kicking, sorting, and kicking again, an atom-smashing machine could take in electrical power and radioactive waste, and output nothing but stable, nonradioactive elements, identical to those common in nature. Do not recommend this to your congressman—it would be far too expensive, today—but it will some day be practical to destroy the radioactivity of the twentieth century’s leftover nuclear waste. Nanotechnology cannot do this directly, because molecular machines work with molecules, not nuclei. However, indirectly, by making energy and equipment inexpensive, molecular manufacturing can give us the means for a clean, permanent solution to the problem of wastes left over from the nuclear era. In 1985, General Motors, America’s largest car maker, bought control of Hughes Aircraft, the company founded by that reclusive, eccentric billionaire Howard Hughes. GM paid $4.7 billion—the single largest amount ever paid for a corporate acquisition until then. A merger mania had begun in the early 1980s, the fourth since 1900, and each year saw more corporate marriages in America, until by 1988 there were 3,487 acquisitions or mergers involving an astronomical $227 billion. Then in 1989, all the old records were smashed again when RJR-Nabisco was taken over for $25 billion. #RandolphHarris 16 of 19
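The remark that Earth's radioactivity is low because "many half-lives" have passed follows directly from the exponential decay law. A minimal sketch, using a unitless half-life purely for illustration:

```python
def remaining_fraction(elapsed, half_life):
    """Fraction of a radioactive sample still undecayed: (1/2)^(t / T_half)."""
    return 0.5 ** (elapsed / half_life)

# After 10 half-lives, less than a thousandth of the original activity remains;
# after dozens of half-lives, essentially none does. This is why primordial
# fallout from ancient stars is so mild today.
print(remaining_fraction(10, 1))  # 1/1024, about 0.001
```

The same arithmetic explains the article's point about "kicking" nuclei: decay is statistical per nucleus, so the only ways to be rid of radioactivity sooner are to wait out the half-lives or to transmute the nuclei directly.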

In short, in a single four-year period the maximum size of these mergers increased more than five times. Even allowing for inflation, the growth in scale was colossal. Of the twenty largest deals in U.S.A. history, all consummated between 1985 and 1989, most involved a wedding of American firms. By contrast, hardly a day now goes by without new headlines proclaiming “mixed marriages”—mergers that cross national frontiers. Thus Japan’s Bridgestone acquires Firestone Tire & Rubber. Sara Lee gulps the Dutch company Akzo. England’s Cadbury buys up America’s Grolier. Sony buys Columbia Pictures. The extraordinary increase in world takeover activity…is showing no signs of abatement. Elon Musk just bought Twitter for $44 billion, an amount that he admitted is “obviously overpaying” for the company. He also lined up a substantial amount of debt financing to pay for the deal. Indeed, the scramble to reorganize several key industries is likely to accelerate…driven by factors that go way beyond the asset-stripping moves that first sparked the U.S. merger boom. As this suggests, while many mergers were originally based on get-rich-quick exploitation of financial or tax quirks, others were strategic. Thus, as Europe raced toward total economic integration, many of its biggest companies merged, hoping to take advantage of the pan-European market and to stave off the advance of Japanese and American giants. American and Japanese grooms looked for European brides. #RandolphHarris 17 of 19

Some companies were thinking on an even bigger scale, preparing themselves to operate all across the so-called “triad market”—Europe, the United States of America, and Japan. And beyond that, a few firms dreamed of truly conquering the “global market.” All this frenetic activity led to deep concern over the concentration of economic power in a few hands. Politicians and labor unions attacked the so-called “deal mania.” Financial writers compared it to the feeding frenzy of sharks. Looking only at the question of financial size, one might be led to believe that power in the economy of the future will eventually be controlled by a tiny handful of enormous, hierarchical monoliths, not unlike those depicted in the movies. Yet that scenario is far too simple. First, it is a mistake to assume all these mega-firms will stay pasted together. Previous merger manias have been followed, a few years later, by waves of divestiture. A new round of divorces looms ahead. Sometimes the anticipated market evaporates. Other times the cultures of the merged firms clash. Often the basic strategy was wrong in the first place. Indeed, as we saw earlier, many recent buy-outs have actually been designed with divestiture in mind, so that after a gigantic merger various units are spun off from a central core, shrinking, rather than enlarging, the scale of the resultant firm. Second, we are witnessing a growing disjuncture between the world of finance and the “real” economy in which things and services are produced and distributed. #RandolphHarris 18 of 19

As two heart-stopping stock market crashes in the late 1980s proved, it is sometimes possible for the financial markets to collapse, at least temporarily, without significantly disrupting the actual operations of the larger economy. For capital itself is growing less, not more, important in economic wealth production. Third, bulk does not necessarily add up to power. Many giant firms possess enormous power resources but cannot deploy them effectively. As the United States of America has learned from the War Department, sheer size is no guarantee of victory. More important, however, to know how power in any industry or economy is going to be distributed, we need to look at relationships, not just structures. And when we do, we discover a surprising paradox. At the same time that some firms are swelling (or bloating) in size, we also see a powerful countermovement that is breaking big business into smaller and smaller units and simultaneously encouraging the spread of small business. Concentration of power is thus only half the story. Instead of a single pattern, we are witnessing two diametrically opposed tendencies coming together in a new synthesis. Rising out of the explosive new role of knowledge in the economy, a novel structure of power is emerging: the powermosaic. The economy is becoming like pieces of colorful ceramics that come together to make a coherent whole. The idea of a mosaic makes it possible to look at the economy in many different ways, allowing new pictures to emerge. Jobs will no longer be scarce. The economy is becoming a dynamic system in which individuals and organizations have the power to create supply and demand, to launch new projects, and to create opportunities to work. This is the concept of the mosaic economy. #RandolphHarris 19 of 19

Cresleigh Homes

‘Tis the season for enjoying family and friends – and when you’re living in the perfect #CresleighHome, there’s space for everyone. In Model 4, the open floorplan flows seamlessly with the great room opening up to the large covered outdoor living space.

You’ll get a two-story, 3,377 square foot home featuring four bedrooms – including one suite on the first floor – three and one-half bathrooms, and a true three-car garage. It’s the largest home in the #Havenwood development, and we think it’s perfect for you!

Need more info? Head to cresleigh.com/havenwood to take a virtual tour!
